Merge branch 'master' into 1775-support-log-config-json

This commit is contained in:
Benoit Chesneau 2023-05-07 16:16:56 +02:00 committed by GitHub
commit db9de0175d
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
117 changed files with 2786 additions and 4217 deletions

.github/workflows/lint.yml (new file, 24 lines)

@ -0,0 +1,24 @@
name: lint
on: [push, pull_request]
permissions:
  contents: read  # to fetch code (actions/checkout)
jobs:
  lint:
    name: tox-${{ matrix.toxenv }}
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        toxenv: [lint, docs-lint, pycodestyle]
        python-version: [ "3.10" ]
    steps:
    - uses: actions/checkout@v3
    - name: Using Python ${{ matrix.python-version }}
      uses: actions/setup-python@v4
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install Dependencies
      run: |
        python -m pip install --upgrade pip
        python -m pip install tox
    - run: tox -e ${{ matrix.toxenv }}

.github/workflows/tox.yml (new file, 24 lines)

@ -0,0 +1,24 @@
name: tox
on: [push, pull_request]
permissions:
  contents: read  # to fetch code (actions/checkout)
jobs:
  tox:
    name: ${{ matrix.os }} / ${{ matrix.python-version }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-latest]  # All OSes pass except Windows because tests need Unix-only fcntl, grp, pwd, etc.
        python-version: [ "3.7", "3.8", "3.9", "3.10", "3.11", "pypy-3.7", "pypy-3.8" ]
    steps:
    - uses: actions/checkout@v3
    - name: Using Python ${{ matrix.python-version }}
      uses: actions/setup-python@v4
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install Dependencies
      run: |
        python -m pip install --upgrade pip
        python -m pip install tox
    - run: tox -e py
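Both workflows install tox and delegate all real work to it, so the CI matrix above only passes if matching environments are defined in the project's tox.ini. A minimal sketch of such a file follows; the env names and commands here are illustrative assumptions, not gunicorn's actual configuration:

```ini
; Illustrative tox.ini sketch matching the CI matrix above.
; Env names and commands are assumptions for demonstration only.
[tox]
envlist = py37,py38,py39,py310,py311,lint,docs-lint,pycodestyle

[testenv]
; "tox -e py" in the workflow picks the env matching the active interpreter.
deps = pytest
commands = pytest tests/

[testenv:lint]
deps = pylint
commands = pylint gunicorn

[testenv:pycodestyle]
deps = pycodestyle
commands = pycodestyle gunicorn
```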

View File

@ -21,6 +21,7 @@ disable=
eval-used,
fixme,
import-error,
+import-outside-toplevel,
import-self,
inconsistent-return-statements,
invalid-name,
@ -33,6 +34,7 @@ disable=
no-staticmethod-decorator,
not-callable,
protected-access,
+raise-missing-from,
redefined-outer-name,
too-few-public-methods,
too-many-arguments,

View File

@ -1,31 +0,0 @@
sudo: false
language: python
matrix:
  include:
    - python: 3.7
      env: TOXENV=lint
      dist: xenial
      sudo: true
    - python: 3.4
      env: TOXENV=py34
    - python: 3.5
      env: TOXENV=py35
    - python: 3.6
      env: TOXENV=py36
    - python: 3.7
      env: TOXENV=py37
      dist: xenial
      sudo: true
    - python: 3.8-dev
      env: TOXENV=py38-dev
      dist: xenial
      sudo: true
  allow_failures:
    - env: TOXENV=py38-dev
install: pip install tox
# TODO: https://github.com/tox-dev/tox/issues/149
script: tox --recreate
cache:
  directories:
    - .tox
    - $HOME/.cache/pip

View File

@ -141,7 +141,7 @@ The relevant maintainer for a pull request is assigned in 3 steps:
* Step 2: Find the MAINTAINERS file which affects this directory. If the directory itself does not have a MAINTAINERS file, work your way up the the repo hierarchy until you find one.
-* Step 3: The first maintainer listed is the primary maintainer. The pull request is assigned to him. He may assign it to other listed maintainers, at his discretion.
+* Step 3: The first maintainer listed is the primary maintainer who is assigned the Pull Request. The primary maintainer can reassign a Pull Request to other listed maintainers.
### I'm a maintainer, should I make pull requests too?

View File

@ -1,9 +1,23 @@
+Core maintainers
+================
Benoit Chesneau <benoitc@gunicorn.org>
-Paul J. Davis <paul.joseph.davis@gmail.com>
-Randall Leeds <randall.leeds@gmail.com>
Konstantin Kapustin <sirkonst@gmail.com>
+Randall Leeds <randall.leeds@gmail.com>
+Berker Peksağ <berker.peksag@gmail.com>
+Jason Madden <jason@nextthought.com>
+Brett Randall <javabrett@gmail.com>
+Alumni
+======
+This list contains maintainers that are no longer active on the project.
+It is thanks to these people that the project has become what it is today.
+Thank you!
+Paul J. Davis <paul.joseph.davis@gmail.com>
Kenneth Reitz <me@kennethreitz.com>
Nikolay Kim <fafhrd91@gmail.com>
Andrew Svetlov <andrew.svetlov@gmail.com>
Stéphane Wirtel <stephane@wirtel.be>
-Berker Peksağ <berker.peksag@gmail.com>

NOTICE

@ -1,6 +1,6 @@
Gunicorn
-2009-2018 (c) Benoît Chesneau <benoitc@e-engura.org>
+2009-2023 (c) Benoît Chesneau <benoitc@gunicorn.org>
2009-2015 (c) Paul J. Davis <paul.joseph.davis@gmail.com>
Gunicorn is released under the MIT license. See the LICENSE
@ -19,7 +19,7 @@ not be used in advertising or publicity pertaining to distribution
of the software without specific, written prior permission.
VINAY SAJIP DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
-INCLUDINGALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
VINAY SAJIP BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR
ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER
IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
@ -82,43 +82,8 @@ WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
-doc/sitemap_gen.py
-------------------
-Under BSD License :
-Copyright (c) 2004, 2005, Google Inc.
-All rights reserved.
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions
-are met:
-* Redistributions of source code must retain the above copyright
-notice, this list of conditions and the following disclaimer.
-* Redistributions in binary form must reproduce the above
-copyright notice, this list of conditions and the following
-disclaimer in the documentation and/or other materials provided
-with the distribution.
-* Neither the name of Google Inc. nor the names of its contributors
-may be used to endorse or promote products derived from this
-software without specific prior written permission.
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
util/unlink.py
--------------
-backport frop python3 Lib/test/support.py
+backport from python3 Lib/test/support.py

View File

@ -9,26 +9,30 @@ Gunicorn
:alt: Supported Python versions
:target: https://pypi.python.org/pypi/gunicorn
-.. image:: https://travis-ci.org/benoitc/gunicorn.svg?branch=master
+.. image:: https://github.com/benoitc/gunicorn/actions/workflows/tox.yml/badge.svg
:alt: Build Status
-:target: https://travis-ci.org/benoitc/gunicorn
+:target: https://github.com/benoitc/gunicorn/actions/workflows/tox.yml
+.. image:: https://github.com/benoitc/gunicorn/actions/workflows/lint.yml/badge.svg
+:alt: Lint Status
+:target: https://github.com/benoitc/gunicorn/actions/workflows/lint.yml
Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. It's a pre-fork
worker model ported from Ruby's Unicorn_ project. The Gunicorn server is broadly
compatible with various web frameworks, simply implemented, light on server
resource usage, and fairly speedy.
-Feel free to join us in `#gunicorn`_ on Freenode_.
+Feel free to join us in `#gunicorn`_ on `Libera.chat`_.
Documentation
-------------
-The documentation is hosted at http://docs.gunicorn.org.
+The documentation is hosted at https://docs.gunicorn.org.
Installation
------------
-Gunicorn requires **Python 3.x >= 3.4**.
+Gunicorn requires **Python 3.x >= 3.5**.
Install from PyPI::
@ -52,6 +56,12 @@ Example with test app::
$ gunicorn --workers=2 test:app
+Contributing
+------------
+See `our complete contributor's guide <CONTRIBUTING.md>`_ for more details.
License
-------
@ -59,6 +69,6 @@ Gunicorn is released under the MIT License. See the LICENSE_ file for more
details.
.. _Unicorn: https://bogomips.org/unicorn/
-.. _`#gunicorn`: https://webchat.freenode.net/?channels=gunicorn
-.. _Freenode: https://freenode.net/
+.. _`#gunicorn`: https://web.libera.chat/?channels=#gunicorn
+.. _`Libera.chat`: https://libera.chat/
.. _LICENSE: https://github.com/benoitc/gunicorn/blob/master/LICENSE

THANKS

@ -26,6 +26,7 @@ Bartosz Oler <bartosz@bzimage.us>
Ben Cochran <bcochran@gmail.com>
Ben Oswald <ben.oswald@root-space.de>
Benjamin Gilbert <bgilbert@backtick.net>
+Benny Mei <meibenny@gmail.com>
Benoit Chesneau <bchesneau@gmail.com>
Berker Peksag <berker.peksag@gmail.com>
bninja <andrew@poundpay.com>
@ -39,6 +40,7 @@ Chris Adams <chris@improbable.org>
Chris Forbes <chrisf@ijw.co.nz>
Chris Lamb <lamby@debian.org>
Chris Streeter <chris@chrisstreeter.com>
+Christian Clauss <cclauss@me.com>
Christoph Heer <Christoph.Heer@gmail.com>
Christos Stavrakakis <cstavr@grnet.gr>
CMGS <ilskdw@mspil.edu.cn>
@ -47,6 +49,7 @@ Dan Callaghan <dcallagh@redhat.com>
Dan Sully <daniel-github@electricrain.com>
Daniel Quinn <code@danielquinn.org>
Dariusz Suchojad <dsuch-github@m.zato.io>
+David Black <github@dhb.is>
David Vincelli <david@freshbooks.com>
David Wolever <david@wolever.net>
Denis Bilenko <denis.bilenko@gmail.com>
@ -60,6 +63,7 @@ Eric Florenzano <floguy@gmail.com>
Eric Shull <eric@elevenbasetwo.com>
Eugene Obukhov <irvind25@gmail.com>
Evan Mezeske <evan@meebo-inc.com>
+Florian Apolloner <florian@apolloner.eu>
Gaurav Kumar <gauravkumar37@gmail.com>
George Kollias <georgioskollias@gmail.com>
George Notaras <gnot@g-loaded.eu>
@ -101,12 +105,14 @@ Konstantin Kapustin <sirkonst@gmail.com>
kracekumar <kracethekingmaker@gmail.com>
Kristian Glass <git@doismellburning.co.uk>
Kristian Øllegaard <kristian.ollegaard@divio.ch>
+Krystian <chrisjozwik@outlook.com>
Krzysztof Urbaniak <urban@fail.pl>
Kyle Kelley <rgbkrk@gmail.com>
Kyle Mulka <repalviglator@yahoo.com>
Lars Hansson <romabysen@gmail.com>
Leonardo Santagada <santagada@gmail.com>
Levi Gross <levi@levigross.com>
+licunlong <shenxiaogll@163.com>
Łukasz Kucharski <lkucharski@leon.pl>
Mahmoud Hashemi <mahmoudrhashemi@gmail.com>
Malthe Borch <mborch@gmail.com>
@ -151,6 +157,7 @@ Rik <rvachterberg@gmail.com>
Ronan Amicel <ronan.amicel@gmail.com>
Ryan Peck <ryan@rypeck.com>
Saeed Gharedaghi <saeed.ghx68@gmail.com>
+Samuel Matos <samypr100@users.noreply.github.com>
Sergey Rublev <narma.nsk@gmail.com>
Shane Reustle <me@shanereustle.com>
shouse-cars <shouse@cars.com>
@ -161,7 +168,9 @@ Stephen DiCato <Locker537@gmail.com>
Stephen Holsapple <sholsapp@gmail.com>
Steven Cummings <estebistec@gmail.com>
Sébastien Fievet <zyegfryed@gmail.com>
+Talha Malik <talham7391@hotmail.com>
TedWantsMore <TedWantsMore@gmx.com>
+Teko012 <112829523+Teko012@users.noreply.github.com>
Thomas Grainger <tagrain@gmail.com>
Thomas Steinacher <tom@eggdrop.ch>
Travis Cline <travis.cline@gmail.com>
@ -176,3 +185,4 @@ WooParadog <guohaochuan@gmail.com>
Xie Shi <xieshi@douban.com>
Yue Du <ifduyue@gmail.com>
zakdances <zakdances@gmail.com>
+Emile Fugulin <emilefugulin@hotmail.com>

View File

@ -2,23 +2,37 @@ version: '{branch}.{build}'
environment:
  matrix:
  - TOXENV: lint
-    PYTHON: "C:\\Python37-x64"
+    PYTHON: "C:\\Python38-x64"
-  - TOXENV: py35
-    PYTHON: "C:\\Python35-x64"
-  - TOXENV: py36
-    PYTHON: "C:\\Python36-x64"
-  - TOXENV: py37
-    PYTHON: "C:\\Python37-x64"
+  - TOXENV: docs-lint
+    PYTHON: "C:\\Python38-x64"
+  - TOXENV: pycodestyle
+    PYTHON: "C:\\Python38-x64"
+  # Windows is not ready for testing!!!
+  # Python's fcntl, grp, pwd, os.geteuid(), and socket.AF_UNIX are all Unix-only.
+  #- TOXENV: py35
+  #  PYTHON: "C:\\Python35-x64"
+  #- TOXENV: py36
+  #  PYTHON: "C:\\Python36-x64"
+  #- TOXENV: py37
+  #  PYTHON: "C:\\Python37-x64"
+  #- TOXENV: py38
+  #  PYTHON: "C:\\Python38-x64"
+  #- TOXENV: py39
+  #  PYTHON: "C:\\Python39-x64"
matrix:
  allow_failures:
  - TOXENV: py35
  - TOXENV: py36
  - TOXENV: py37
+  - TOXENV: py38
+  - TOXENV: py39
-init: SET "PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%"
+init:
+  - SET "PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%"
install:
  - pip install tox
-build: off
+build: false
-test_script: tox
+test_script:
+  - tox
cache:
  # Not including the .tox directory since it takes longer to download/extract
  # the cache archive than for tox to clean install from the pip cache.

View File

@ -50,27 +50,29 @@ def format_settings(app):
def fmt_setting(s):
    if callable(s.default):
        val = inspect.getsource(s.default)
-        val = "\n".join("    %s" % l for l in val.splitlines())
-        val = " ::\n\n" + val
+        val = "\n".join("    %s" % line for line in val.splitlines())
+        val = "\n\n.. code-block:: python\n\n" + val
    elif s.default == '':
-        val = "``(empty string)``"
+        val = "``''``"
    else:
-        val = "``%s``" % s.default
+        val = "``%r``" % s.default
    if s.cli and s.meta:
-        args = ["%s %s" % (arg, s.meta) for arg in s.cli]
-        cli = ', '.join(args)
+        cli = " or ".join("``%s %s``" % (arg, s.meta) for arg in s.cli)
    elif s.cli:
-        cli = ", ".join(s.cli)
+        cli = " or ".join("``%s``" % arg for arg in s.cli)
+    else:
+        cli = ""
    out = []
    out.append(".. _%s:\n" % s.name.replace("_", "-"))
-    out.append("%s" % s.name)
-    out.append("~" * len(s.name))
+    out.append("``%s``" % s.name)
+    out.append("~" * (len(s.name) + 4))
    out.append("")
    if s.cli:
-        out.append("* ``%s``" % cli)
-        out.append("* %s" % val)
+        out.append("**Command line:** %s" % cli)
+        out.append("")
+    out.append("**Default:** %s" % val)
    out.append("")
    out.append(s.desc)
    out.append("")
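To see what the reworked docs formatter produces, here is a runnable sketch of the new `fmt_setting` behaviour. It omits the callable-default branch and uses a minimal stand-in `Setting` class; gunicorn's real settings objects carry more fields, so this is illustrative only.

```python
# Sketch of the new fmt_setting() output shape from the diff above.
# The Setting class is a hypothetical stand-in for gunicorn's config objects.
class Setting:
    def __init__(self, name, default, cli=(), meta=None, desc=""):
        self.name = name
        self.default = default
        self.cli = list(cli)
        self.meta = meta
        self.desc = desc

def fmt_setting(s):
    # Defaults are now rendered with repr() inside RST literal markup.
    if s.default == '':
        val = "``''``"
    else:
        val = "``%r``" % s.default
    # CLI flags are joined with "or" instead of commas.
    if s.cli and s.meta:
        cli = " or ".join("``%s %s``" % (arg, s.meta) for arg in s.cli)
    elif s.cli:
        cli = " or ".join("``%s``" % arg for arg in s.cli)
    else:
        cli = ""
    out = [".. _%s:\n" % s.name.replace("_", "-"),
           "``%s``" % s.name,
           "~" * (len(s.name) + 4),  # underline covers the added backticks
           ""]
    if s.cli:
        out.append("**Command line:** %s" % cli)
        out.append("")
    out.append("**Default:** %s" % val)
    out.append("")
    out.append(s.desc)
    return "\n".join(out)

print(fmt_setting(Setting("bind", ["127.0.0.1:8000"],
                          cli=["-b", "--bind"], meta="ADDRESS",
                          desc="The socket to bind.")))
```

Note the underline is widened by 4 to account for the backticks now wrapping the setting name, matching the `len(s.name) + 4` change in the diff.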

View File

@ -16,7 +16,7 @@
<div class="logo-div">
<div class="latest">
Latest version: <strong><a
-href="https://docs.gunicorn.org/en/stable/">19.9.0</a></strong>
+href="https://docs.gunicorn.org/en/stable/">20.1.0</a></strong>
</div>
<div class="logo"><img src="images/logo.jpg" ></div>
@ -118,11 +118,11 @@
<li><a href="https://github.com/benoitc/gunicorn/projects/4">Forum</a></li>
<li><a href="https://github.com/benoitc/gunicorn/projects/3">Mailing list</a>
</ul>
-<p>Project maintenance guidelines are avaible on the <a href="https://github.com/benoitc/gunicorn/wiki/Project-management">wiki</a></p>
+<p>Project maintenance guidelines are available on the <a href="https://github.com/benoitc/gunicorn/wiki/Project-management">wiki</a></p>
-<h1>Irc</h1>
+<h1>IRC</h1>
-<p>The Gunicorn channel is on the <a href="http://freenode.net/">Freenode</a> IRC
-network. You can chat with the community on the <a href="http://webchat.freenode.net/?channels=gunicorn">#gunicorn channel</a>.</p>
+<p>The Gunicorn channel is on the <a href="https://libera.chat/">Libera Chat</a> IRC
+network. You can chat with the community on the <a href="https://web.libera.chat/?channels=#gunicorn">#gunicorn channel</a>.</p>
<h1>Issue Tracking</h1>
<p>Bug reports, enhancement requests and tasks generally go in the <a href="http://github.com/benoitc/gunicorn/issues">Github

View File

@ -1,112 +1,73 @@
-<?xml version="1.0" encoding="UTF-8"?>
+<?xml version='1.0' encoding='UTF-8'?>
-<urlset
-    xmlns="http://www.google.com/schemas/sitemap/0.84"
-    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-    xsi:schemaLocation="http://www.google.com/schemas/sitemap/0.84
-                        http://www.google.com/schemas/sitemap/0.84/sitemap.xsd">
+<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>http://gunicorn.org/</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2019-11-27T00:02:48+01:00</lastmod>
+<priority>1.0</priority>
</url>
+<url>
+<loc>http://gunicorn.org/community.html</loc>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
+</url>
<url>
<loc>http://gunicorn.org/configuration.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
</url>
<url>
<loc>http://gunicorn.org/configure.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
</url>
-<url>
-<loc>http://gunicorn.org/css/</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
-<url>
-<loc>http://gunicorn.org/css/index.css</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
-<url>
-<loc>http://gunicorn.org/css/style.css</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
<url>
<loc>http://gunicorn.org/deploy.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
</url>
<url>
<loc>http://gunicorn.org/deployment.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
</url>
<url>
<loc>http://gunicorn.org/design.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
</url>
<url>
<loc>http://gunicorn.org/faq.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
</url>
-<url>
-<loc>http://gunicorn.org/images/</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
-<url>
-<loc>http://gunicorn.org/images/gunicorn.png</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
-<url>
-<loc>http://gunicorn.org/images/large_gunicorn.png</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
-<url>
-<loc>http://gunicorn.org/images/logo.png</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
-<url>
-<loc>http://gunicorn.org/index.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
<url>
<loc>http://gunicorn.org/install.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
</url>
<url>
<loc>http://gunicorn.org/installation.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
</url>
<url>
<loc>http://gunicorn.org/news.html</loc>
-<lastmod>2010-07-08T19:57:19Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
</url>
<url>
<loc>http://gunicorn.org/run.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
</url>
<url>
<loc>http://gunicorn.org/tuning.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
</url>
<url>
<loc>http://gunicorn.org/usage.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
</url>
</urlset>

View File

@ -1,19 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<site
base_url="http://gunicorn.org"
store_into="htdocs/sitemap.xml"
verbose="1"
>
<directory path="htdocs/" url="http://gunicorn.org/" />
<!-- Exclude URLs that end with a '~' (IE: emacs backup files) -->
<filter action="drop" type="wildcard" pattern="*~" />
<!-- Exclude URLs within UNIX-style hidden files or directories -->
<filter action="drop" type="regexp" pattern="/\.[^/]*" />
<!-- Exclude github CNAME file -->
<filter action="drop" type="wildcard" pattern="*CNAME" />
</site>

docs/sitemap_gen.py (Executable file → Normal file, 2221 lines)

File diff suppressed because it is too large.

View File

@ -75,7 +75,7 @@ Changelog - 2012
- fix tornado.wsgi.WSGIApplication calling error
- **breaking change**: take the control on graceful reload back.
-  graceful can't be overrided anymore using the on_reload function.
+  graceful can't be overridden anymore using the on_reload function.
0.14.3 / 2012-05-15
-------------------

View File

@ -38,10 +38,10 @@ Changelog - 2013
- fix: give the initial global_conf to paster application
- fix: fix 'Expect: 100-continue' support on Python 3
-New versionning:
+New versioning:
++++++++++++++++
-With this release, the versionning of Gunicorn is changing. Gunicorn is
+With this release, the versioning of Gunicorn is changing. Gunicorn is
stable since a long time and there is no point to release a "1.0" now.
It should have been done since a long time. 0.17 really meant it was the
17th stable version. From the beginning we have only 2 kind of
@ -49,7 +49,7 @@ releases:
major release: releases with major changes or huge features added
services releases: fixes and minor features added So from now we will
-apply the following versionning ``<major>.<service>``. For example ``17.5`` is a
+apply the following versioning ``<major>.<service>``. For example ``17.5`` is a
service release.
0.17.4 / 2013-04-24

View File

@ -71,7 +71,7 @@ AioHttp worker
Async worker
++++++++++++
-- fix :issue:`790`: StopIteration shouldn't be catched at this level.
+- fix :issue:`790`: StopIteration shouldn't be caught at this level.
Logging
@ -180,7 +180,7 @@ core
- add: syslog logging can now be done to a unix socket
- fix logging: don't try to redirect stdout/stderr to the logfile.
- fix logging: don't propagate log
-- improve logging: file option can be overriden by the gunicorn options
+- improve logging: file option can be overridden by the gunicorn options
  `--error-logfile` and `--access-logfile` if they are given.
- fix: don't override SERVER_* by the Host header
- fix: handle_error

docs/source/2018-news.rst (new file, 68 lines)

@ -0,0 +1,68 @@
================
Changelog - 2018
================
.. note::
Please see :doc:`news` for the latest changes
19.9.0 / 2018/07/03
===================
- fix: address a regression that prevented syslog support from working
(:issue:`1668`, :pr:`1773`)
- fix: correctly set `REMOTE_ADDR` on versions of Python 3 affected by
`Python Issue 30205 <https://bugs.python.org/issue30205>`_
(:issue:`1755`, :pr:`1796`)
- fix: show zero response length correctly in access log (:pr:`1787`)
- fix: prevent raising :exc:`AttributeError` when ``--reload`` is not passed
in case of a :exc:`SyntaxError` raised from the WSGI application.
(:issue:`1805`, :pr:`1806`)
- The internal module ``gunicorn.workers.async`` was renamed to ``gunicorn.workers.base_async``
since ``async`` is now a reserved word in Python 3.7.
(:pr:`1527`)
19.8.1 / 2018/04/30
===================
- fix: secure scheme headers when bound to a unix socket
(:issue:`1766`, :pr:`1767`)
19.8.0 / 2018/04/28
===================
- Eventlet 0.21.0 support (:issue:`1584`)
- Tornado 5 support (:issue:`1728`, :pr:`1752`)
- support watching additional files with ``--reload-extra-file``
(:pr:`1527`)
- support configuring logging with a dictionary with ``--logging-config-dict``
(:issue:`1087`, :pr:`1110`, :pr:`1602`)
- add support for the ``--config`` flag in the ``GUNICORN_CMD_ARGS`` environment
variable (:issue:`1576`, :pr:`1581`)
- disable ``SO_REUSEPORT`` by default and add the ``--reuse-port`` setting
(:issue:`1553`, :issue:`1603`, :pr:`1669`)
- fix: installing `inotify` on MacOS no longer breaks the reloader
(:issue:`1540`, :pr:`1541`)
- fix: do not throw ``TypeError`` when ``SO_REUSEPORT`` is not available
(:issue:`1501`, :pr:`1491`)
- fix: properly decode HTTP paths containing certain non-ASCII characters
(:issue:`1577`, :pr:`1578`)
- fix: remove whitespace when logging header values under gevent (:pr:`1607`)
- fix: close unlinked temporary files (:issue:`1327`, :pr:`1428`)
- fix: parse ``--umask=0`` correctly (:issue:`1622`, :pr:`1632`)
- fix: allow loading applications using relative file paths
(:issue:`1349`, :pr:`1481`)
- fix: force blocking mode on the gevent sockets (:issue:`880`, :pr:`1616`)
- fix: preserve leading `/` in request path (:issue:`1512`, :pr:`1511`)
- fix: forbid contradictory secure scheme headers
- fix: handle malformed basic authentication headers in access log
(:issue:`1683`, :pr:`1684`)
- fix: defer handling of ``USR1`` signal to a new greenlet under gevent
(:issue:`1645`, :pr:`1651`)
- fix: the threaded worker would sometimes close the wrong keep-alive
connection under Python 2 (:issue:`1698`, :pr:`1699`)
- fix: re-open log files on ``USR1`` signal using ``handler._open`` to
support subclasses of ``FileHandler`` (:issue:`1739`, :pr:`1742`)
- deprecation: the ``gaiohttp`` worker is deprecated, see the
:ref:`worker-class` documentation for more information
(:issue:`1338`, :pr:`1418`, :pr:`1569`)
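The ``--logging-config-dict`` entry above takes a dictionary in the standard ``logging.config.dictConfig`` schema. A minimal illustrative config follows; the formatter, handler, and level choices are assumptions for demonstration, not gunicorn defaults (only the ``gunicorn.error`` logger name comes from gunicorn itself):

```python
import logging
import logging.config

# Minimal dictConfig-schema dictionary of the kind --logging-config-dict
# accepts; formatter/handler names and levels here are illustrative only.
LOG_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {"format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "simple",
        },
    },
    "loggers": {
        # gunicorn routes its error log through the "gunicorn.error" logger
        "gunicorn.error": {
            "handlers": ["console"],
            "level": "INFO",
            "propagate": False,
        },
    },
}

logging.config.dictConfig(LOG_CONFIG)
logging.getLogger("gunicorn.error").info("logging configured")
```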

docs/source/2019-news.rst (new file, 121 lines)

@ -0,0 +1,121 @@
================
Changelog - 2019
================
.. note::
Please see :doc:`news` for the latest changes
20.0.4 / 2019/11/26
===================
- fix binding a socket using the file descriptor
- remove support for the `bdist_rpm` build
20.0.3 / 2019/11/24
===================
- fixed load of a config file without a Python extension
- fixed `socketfromfd.fromfd` when defaults are not set
.. note:: we now warn when we load a config file without Python Extension
20.0.2 / 2019/11/23
===================
- fix changelog
20.0.1 / 2019/11/23
===================
- fixed the way the config module is loaded. `__file__` is now available
- fixed `wsgi.input_terminated`. It is always true.
- use the highest protocol version of openssl by default
- only support Python >= 3.5
- added `__repr__` method to `Config` instance
- fixed support of AIX platform and musl libc in `socketfromfd.fromfd` function
- fixed support of applications loaded from a factory function
- fixed chunked encoding support to prevent any `request smuggling <https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn>`_
- Capture `os.sendfile` before patching in gevent and eventlet workers to
  fix a `RecursionError`.
- removed locking in reloader when adding new files
- load the WSGI application before the loader to pick up all files
.. note:: this release adds official support for applications loaded from a factory function
as documented in Flask and other places.
19.10.0 / 2019/11/23
====================
- unblock select loop during reload of a sync worker
- security fix: http desync attack
- handle `wsgi.input_terminated`
- added support for str and bytes in unix socket addresses
- fixed `max_requests` setting
- header values are now encoded as LATIN-1, not ASCII
- fixed `InotifyReloader`: handle `module.__file__` being None
- fixed compatibility with tornado 6
- fixed root logging
- Prevent removal of unix sockets from `reuse_port`
- Clear tornado ioloop before os.fork
- Miscellaneous fixes and improvement for linting using Pylint
20.0 / 2019/10/30
=================
- Fixed `fdopen` `RuntimeWarning` in Python 3.8
- Added check and exception for str type on value in Response process_headers method.
- Ensure WSGI header value is string before conducting regex search on it.
- Added pypy3 to list of tested environments
- Grouped `StopIteration` and `KeyboardInterrupt` exceptions with same body together in Arbiter.run()
- Added `setproctitle` module to `extras_require` in setup.py
- Avoid unnecessary chown of temporary files
- Logging: Handle auth type case insensitively
- Removed `util.import_module`
- Removed fallback for `types.SimpleNamespace` in tests utils
- Use `SourceFileLoader` instead of `execfile_`
- Use `importlib` instead of `__import__` and `eval`
- Fixed eventlet patching
- Added optional `datadog <https://www.datadoghq.com>`_ tags for statsd metrics
- Header values now are encoded using latin-1, not ascii.
- Rewrote the `parse_address` util and added tests
- Removed redundant super() arguments
- Simplify `futures` import in gthread module
- Fixed `worker_connections` setting to also affect the Gthread worker type
- Fixed setting max_requests
- Bump minimum Eventlet and Gevent versions to 0.24 and 1.4
- Use Python default SSL cipher list by default
- handle `wsgi.input_terminated` extension
- Simplify Paste Deployment documentation
- Fix root logging: root and logger are same level.
- Fixed typo in ssl_version documentation
- Documented systemd deployment unit examples
- Added systemd sd_notify support
- Fixed typo in gthread.py
- Added `tornado <https://www.tornadoweb.org/>`_ 5 and 6 support
- Declare our setuptools dependency
- Added support to `--bind` to open file descriptors
- Document how to serve WSGI app modules from Gunicorn
- Provide guidance on X-Forwarded-For access log in documentation
- Add support for named constants in the `--ssl-version` flag
- Clarify log format usage of header & environment in documentation
- Fixed systemd documentation to properly setup gunicorn unix socket
- Prevent removal unix socket for reuse_port
- Fix `ResourceWarning` when reading a Python config module
- Remove unnecessary call to dict keys method
- Support str and bytes for UNIX socket addresses
- fixed `InotifyReloader`: handle `module.__file__` being None
- Document `/dev/shm` as a convenient alternative to making your own tmpfs mount in the fchmod FAQ
- fix examples to work on python3
- Fix typo in `--max-requests` documentation
- Clear tornado ioloop before os.fork
- Miscellaneous fixes and improvement for linting using Pylint
Breaking Change
+++++++++++++++
- Removed gaiohttp worker
- Drop support for Python 2.x
- Drop support for EOL Python 3.2 and 3.3
- Drop support for Paste Deploy server blocks


@ -0,0 +1,7 @@
================
Changelog - 2020
================
.. note::
Please see :doc:`news` for the latest changes

docs/source/2021-news.rst

@ -0,0 +1,54 @@
================
Changelog - 2021
================
.. note::
Please see :doc:`news` for the latest changes
20.1.0 - 2021-02-12
===================
- document that WEB_CONCURRENCY is set by, at least, Heroku
- capture peername from accept: Avoid calls to getpeername by capturing the peer name returned by
accept
- log a warning when a worker was terminated due to a signal
- fix tornado usage with latest versions of Django
- add support for python -m gunicorn
- fix systemd socket activation example
- allow setting the WSGI application in the config file using `wsgi_app`
- document `--timeout = 0`
- always close a connection when the number of requests exceeds the max requests
- Disable keepalive during graceful shutdown
- kill tasks in the gthread workers during upgrade
- fix latency in gevent worker when accepting new requests
- fix file watcher: handle errors when a new worker reboots and ensure the list of files is kept
- document the default name and path of the configuration file
- document how environment variables impact configuration
- document the `$PORT` environment variable
- added milliseconds option to request_time in access_log
- added PIP requirements to be used for example
- remove version from the Server header
- fix sendfile: use `socket.sendfile` instead of `os.sendfile`
- reloader: use absolute paths to prevent an `InotifyError` when a file
  is added to the working directory
- Add --print-config option to print the resolved settings at startup.
- remove the `--log-dict-config` CLI flag because it never had a working format
(the `logconfig_dict` setting in configuration files continues to work)
**Breaking changes**
- minimum version is Python 3.5
- remove version from the Server header
**Documentation**
**Others**
- miscellaneous changes in the code base to be a better citizen with Python 3
- remove dead code
- fix documentation generation


@ -15,7 +15,7 @@ for 3 different purposes:
* `Mailing list <https://github.com/benoitc/gunicorn/projects/3>`_ : Discussion of Gunicorn development, new features * `Mailing list <https://github.com/benoitc/gunicorn/projects/3>`_ : Discussion of Gunicorn development, new features
and project management. and project management.
Project maintenance guidelines are avaible on the `wiki <https://github.com/benoitc/gunicorn/wiki/Project-management>`_ Project maintenance guidelines are available on the `wiki <https://github.com/benoitc/gunicorn/wiki/Project-management>`_
. .
IRC IRC


@ -4,28 +4,46 @@
Configuration Overview Configuration Overview
====================== ======================
Gunicorn pulls configuration information from three distinct places. Gunicorn reads configuration information from five places.
The first place that Gunicorn will read configuration from is the framework Gunicorn first reads environment variables for some configuration
specific configuration file. Currently this only affects Paster applications. :ref:`settings <settings>`.
The second source of configuration information is a configuration file that is Gunicorn then reads configuration from a framework specific configuration
optionally specified on the command line. Anything specified in the Gunicorn file. Currently this only affects Paster applications.
config file will override any framework specific settings.
The third source of configuration information is an optional configuration file
``gunicorn.conf.py`` searched in the current working directory or specified
using a command line argument. Anything specified in this configuration file
will override any framework specific settings.
The fourth place of configuration information are command line arguments
stored in an environment variable named ``GUNICORN_CMD_ARGS``.
Lastly, the command line arguments used to invoke Gunicorn are the final place Lastly, the command line arguments used to invoke Gunicorn are the final place
considered for configuration settings. If an option is specified on the command considered for configuration settings. If an option is specified on the command
line, this is the value that will be used. line, this is the value that will be used.
When a configuration file is specified in the command line arguments and in the
``GUNICORN_CMD_ARGS`` environment variable, only the configuration
file specified on the command line is used.
Once again, in order of least to most authoritative: Once again, in order of least to most authoritative:
1. Framework Settings 1. Environment Variables
2. Configuration File 2. Framework Settings
3. Command Line 3. Configuration File
4. ``GUNICORN_CMD_ARGS``
5. Command Line
.. note:: .. note::
To check your configuration when using the command line or the To print your resolved configuration when using the command line or the
configuration file you can run the following command::
$ gunicorn --print-config APP_MODULE
To check your resolved configuration when using the command line or the
configuration file you can run the following command:: configuration file you can run the following command::
$ gunicorn --check-config APP_MODULE $ gunicorn --check-config APP_MODULE
@ -47,14 +65,16 @@ usual::
There is also a ``--version`` flag available to the command line scripts that There is also a ``--version`` flag available to the command line scripts that
isn't mentioned in the list of :ref:`settings <settings>`. isn't mentioned in the list of :ref:`settings <settings>`.
.. _configuration_file:
Configuration File Configuration File
================== ==================
The configuration file should be a valid Python source file. It only needs to The configuration file should be a valid Python source file with a **python
be readable from the file system. More specifically, it does not need to be extension** (e.g. `gunicorn.conf.py`). It only needs to be readable from the
importable. Any Python is valid. Just consider that this will be run every time file system. More specifically, it does not have to be on the module path
you start Gunicorn (including when you signal Gunicorn to reload). (sys.path, PYTHONPATH). Any Python is valid. Just consider that this will be
run every time you start Gunicorn (including when you signal Gunicorn to reload).
To set a parameter, just assign to it. There's no special syntax. The values To set a parameter, just assign to it. There's no special syntax. The values
you provide will be used for the configuration values. you provide will be used for the configuration values.
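For illustration, a minimal ``gunicorn.conf.py`` might look like the following sketch (the bind address, worker count, and application target are hypothetical examples, not required values):

```python
# gunicorn.conf.py -- settings are plain Python assignments.
# Any valid Python runs here each time Gunicorn starts or reloads.
import multiprocessing

bind = "127.0.0.1:8000"                        # address:port to listen on
workers = multiprocessing.cpu_count() * 2 + 1  # a common starting heuristic
wsgi_app = "exampleapi:app"                    # module:callable to serve (hypothetical)
```

Run it with ``gunicorn -c gunicorn.conf.py``, or place it in the current working directory where it is found by default.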


@ -13,4 +13,41 @@ Here is a small example where we create a very small WSGI app and load it with
a custom Application: a custom Application:
.. literalinclude:: ../../examples/standalone_app.py .. literalinclude:: ../../examples/standalone_app.py
:lines: 11-60 :start-after: # See the NOTICE for more information
:lines: 2-
Direct Usage of Existing WSGI Apps
----------------------------------
If necessary, you can run Gunicorn straight from Python, allowing you to
specify a WSGI-compatible application at runtime. This can be handy for
rolling deploys or in the case of using PEX files to deploy your application,
as the app and Gunicorn can be bundled in the same PEX file. Gunicorn has
this functionality built-in as a first class citizen known as
:class:`gunicorn.app.wsgiapp`. This can be used to run WSGI-compatible app
instances such as those produced by Flask or Django. Assuming your WSGI API
package is *exampleapi*, and your application instance is *app*, this is all
you need to get going::
python -m gunicorn.app.wsgiapp exampleapi:app
This command will work with any Gunicorn CLI parameters or a config file - just
pass them along as if you're directly giving them to Gunicorn:
.. code-block:: bash
# Custom parameters
$ python -m gunicorn.app.wsgiapp exampleapi:app --bind=0.0.0.0:8081 --workers=4
# Using a config file
$ python -m gunicorn.app.wsgiapp exampleapi:app -c config.py
Note for those using PEX: use ``-c gunicorn`` as your entry at build
time, and your compiled app should work with the entry point passed to it at
run time.
.. code-block:: bash
# Generic pex build command via bash from root of exampleapi project
$ pex . -v -c gunicorn -o compiledapp.pex
# Running it
./compiledapp.pex exampleapi:app -c gunicorn_config.py
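For completeness, the *exampleapi:app* target used throughout this section only needs to be a WSGI callable. A minimal sketch (the module and variable names are hypothetical):

```python
# exampleapi.py -- the smallest possible WSGI application.
# Gunicorn calls app(environ, start_response) once per request.
def app(environ, start_response):
    body = b"Hello from Gunicorn\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]  # an iterable of bytes
```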


@ -2,7 +2,7 @@
Deploying Gunicorn Deploying Gunicorn
================== ==================
We strongly recommend to use Gunicorn behind a proxy server. We strongly recommend using Gunicorn behind a proxy server.
Nginx Configuration Nginx Configuration
=================== ===================
@ -67,13 +67,13 @@ Gunicorn 19 introduced a breaking change concerning how ``REMOTE_ADDR`` is
handled. Previous to Gunicorn 19 this was set to the value of handled. Previous to Gunicorn 19 this was set to the value of
``X-Forwarded-For`` if received from a trusted proxy. However, this was not in ``X-Forwarded-For`` if received from a trusted proxy. However, this was not in
compliance with :rfc:`3875` which is why the ``REMOTE_ADDR`` is now the IP compliance with :rfc:`3875` which is why the ``REMOTE_ADDR`` is now the IP
address of **the proxy** and **not the actual user**. You should instead address of **the proxy** and **not the actual user**.
configure Nginx to send the user's IP address through the ``X-Forwarded-For``
header like this::
... To have access logs indicate **the actual user** IP when proxied, set
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; :ref:`access-log-format` with a format which includes ``X-Forwarded-For``. For
... example, this format uses ``X-Forwarded-For`` in place of ``REMOTE_ADDR``::
%({x-forwarded-for}i)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
It is also worth noting that the ``REMOTE_ADDR`` will be completely empty if It is also worth noting that the ``REMOTE_ADDR`` will be completely empty if
you bind Gunicorn to a UNIX socket and not a TCP ``host:port`` tuple. you bind Gunicorn to a UNIX socket and not a TCP ``host:port`` tuple.
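The same format can also be set from a configuration file via the ``access_log_format`` setting, for example:

```python
# gunicorn.conf.py (fragment): log the client IP from X-Forwarded-For
# instead of REMOTE_ADDR, which holds the proxy's address when proxied.
access_log_format = (
    '%({x-forwarded-for}i)s %(l)s %(u)s %(t)s '
    '"%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
)
```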
@ -212,12 +212,15 @@ Using Gunicorn with upstart is simple. In this example we will run the app
Systemd Systemd
------- -------
A tool that is starting to be common on linux systems is Systemd_. Below are A tool that is starting to be common on linux systems is Systemd_. It is a
configurations files and instructions for using systemd to create a unix socket system services manager that allows for strict process management, resources
for incoming Gunicorn requests. Systemd will listen on this socket and start and permissions control.
gunicorn automatically in response to traffic. Later in this section are
instructions for configuring Nginx to forward web traffic to the newly created Below are configuration files and instructions for using systemd to create
unix socket: a unix socket for incoming Gunicorn requests. Systemd will listen on this
socket and start gunicorn automatically in response to traffic. Later in
this section are instructions for configuring Nginx to forward web traffic
to the newly created unix socket:
**/etc/systemd/system/gunicorn.service**:: **/etc/systemd/system/gunicorn.service**::
@ -227,15 +230,19 @@ unix socket:
After=network.target After=network.target
[Service] [Service]
PIDFile=/run/gunicorn/pid Type=notify
# the specific user that our service will run as
User=someuser User=someuser
Group=someuser Group=someuser
# another option for an even more restricted service is
# DynamicUser=yes
# see http://0pointer.net/blog/dynamic-users-with-systemd.html
RuntimeDirectory=gunicorn RuntimeDirectory=gunicorn
WorkingDirectory=/home/someuser/applicationroot WorkingDirectory=/home/someuser/applicationroot
ExecStart=/usr/bin/gunicorn --pid /run/gunicorn/pid \ ExecStart=/usr/bin/gunicorn applicationname.wsgi
--bind unix:/run/gunicorn.sock applicationname.wsgi
ExecReload=/bin/kill -s HUP $MAINPID ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID KillMode=mixed
TimeoutStopSec=5
PrivateTmp=true PrivateTmp=true
[Install] [Install]
@ -248,33 +255,47 @@ unix socket:
[Socket] [Socket]
ListenStream=/run/gunicorn.sock ListenStream=/run/gunicorn.sock
User=someuser # Our service won't need permissions for the socket, since it
Group=someuser # inherits the file descriptor by socket activation
# only the nginx daemon will need access to the socket
SocketUser=www-data
# Optionally restrict the socket permissions even more.
# SocketMode=600
[Install] [Install]
WantedBy=sockets.target WantedBy=sockets.target
**/etc/tmpfiles.d/gunicorn.conf**::
d /run/gunicorn 0755 someuser somegroup - Next enable and start the socket (it will autostart at boot too)::
Next enable the socket so it autostarts at boot:: systemctl enable --now gunicorn.socket
systemctl enable gunicorn.socket
Either reboot, or start the services manually::
systemctl start gunicorn.socket
After running ``curl --unix-socket /run/gunicorn.sock http``, Gunicorn Now let's see if the nginx daemon will be able to connect to the socket.
should start and you should see some HTML from your server in the terminal. Running ``sudo -u www-data curl --unix-socket /run/gunicorn.sock http``,
our Gunicorn service will be automatically started and you should see some
HTML from your server in the terminal.
.. note::
systemd employs cgroups to track the processes of a service, so it doesn't
need pid files. In the rare case that you need to find out the service main
pid, you can use ``systemctl show --value -p MainPID gunicorn.service``, but
if you only want to send a signal an even better option is
``systemctl kill -s HUP gunicorn.service``.
.. note::
``www-data`` is the default nginx user in debian, other distributions use
different users (for example: ``http`` or ``nginx``). Check your distro to
know what to put for the socket user, and for the sudo command.
You must now configure your web proxy to send traffic to the new Gunicorn You must now configure your web proxy to send traffic to the new Gunicorn
socket. Edit your ``nginx.conf`` to include the following: socket. Edit your ``nginx.conf`` to include the following:
**/etc/nginx/nginx.conf**:: **/etc/nginx/nginx.conf**::
user www-data;
... ...
http { http {
server { server {
@ -292,15 +313,15 @@ socket. Edit your ``nginx.conf`` to include the following:
The listen and server_name used here are configured for a local machine. The listen and server_name used here are configured for a local machine.
In a production server you will most likely listen on port 80, In a production server you will most likely listen on port 80,
and use your URL as the server_name. and use your URL as the server_name.
Now make sure you enable the nginx service so it automatically starts at boot:: Now make sure you enable the nginx service so it automatically starts at boot::
systemctl enable nginx.service systemctl enable nginx.service
Either reboot, or start Nginx with the following command:: Either reboot, or start Nginx with the following command::
systemctl start nginx systemctl start nginx
Now you should be able to test Nginx with Gunicorn by visiting Now you should be able to test Nginx with Gunicorn by visiting
http://127.0.0.1:8000/ in any web browser. Systemd is now set up. http://127.0.0.1:8000/ in any web browser. Systemd is now set up.


@ -46,6 +46,22 @@ Gevent_). Greenlets are an implementation of cooperative multi-threading for
Python. In general, an application should be able to make use of these worker Python. In general, an application should be able to make use of these worker
classes with no changes. classes with no changes.
For full greenlet support, applications might need to be adapted.
When using, e.g., Gevent_ and Psycopg_ it makes sense to ensure psycogreen_ is
installed and `setup <http://www.gevent.org/api/gevent.monkey.html#plugins>`_.
Other applications might not be compatible at all as they, e.g., rely on
the original unpatched behavior.
Gthread Workers
---------------
The worker `gthread` is a threaded worker. It accepts connections in the
main loop. Accepted connections are added to the thread pool as a
connection job. On keepalive, connections are put back in the loop
waiting for an event. If no event happens after the keepalive timeout,
the connection is closed.
Tornado Workers Tornado Workers
--------------- ---------------
@ -59,32 +75,10 @@ WSGI application, this is not a recommended configuration.
AsyncIO Workers AsyncIO Workers
--------------- ---------------
These workers are compatible with python3. You have two kind of workers. These workers are compatible with Python 3.
The worker `gthread` is a threaded worker. It accepts connections in the You can also port your application to use aiohttp_'s ``web.Application`` API and use the
main loop, accepted connections are added to the thread pool as a ``aiohttp.worker.GunicornWebWorker`` worker.
connection job. On keepalive connections are put back in the loop
waiting for an event. If no event happen after the keep alive timeout,
the connection is closed.
The worker `gaiohttp` is a full asyncio worker using aiohttp_.
.. note::
The ``gaiohttp`` worker requires the aiohttp_ module to be installed.
aiohttp_ has removed its native WSGI application support in version 2.
If you want to continue to use the ``gaiohttp`` worker with your WSGI
application (e.g. an application that uses Flask or Django), there are
three options available:
#. Install aiohttp_ version 1.3.5 instead of version 2::
$ pip install aiohttp==1.3.5
#. Use aiohttp_wsgi_ to wrap your WSGI application. You can take a look
at the `example`_ in the Gunicorn repository.
#. Port your application to use aiohttp_'s ``web.Application`` API.
#. Use the ``aiohttp.worker.GunicornWebWorker`` worker instead of the
deprecated ``gaiohttp`` worker.
Choosing a Worker Type Choosing a Worker Type
====================== ======================
@ -150,13 +144,14 @@ the worker processes (unlike when using the preload setting, which loads the
code in the master process). code in the master process).
.. note:: .. note::
Under Python 2.x, you need to install the 'futures' package to use this Under Python 2.x, you need to install the 'futures' package to use this
feature. feature.
.. _Greenlets: https://github.com/python-greenlet/greenlet .. _Greenlets: https://github.com/python-greenlet/greenlet
.. _Eventlet: http://eventlet.net/ .. _Eventlet: http://eventlet.net/
.. _Gevent: http://www.gevent.org/ .. _Gevent: http://www.gevent.org/
.. _Hey: https://github.com/rakyll/hey .. _Hey: https://github.com/rakyll/hey
.. _aiohttp: https://aiohttp.readthedocs.io/en/stable/ .. _aiohttp: https://docs.aiohttp.org/en/stable/deployment.html#nginx-gunicorn
.. _aiohttp_wsgi: https://aiohttp-wsgi.readthedocs.io/en/stable/index.html
.. _`example`: https://github.com/benoitc/gunicorn/blob/master/examples/frameworks/flaskapp_aiohttp_wsgi.py .. _`example`: https://github.com/benoitc/gunicorn/blob/master/examples/frameworks/flaskapp_aiohttp_wsgi.py
.. _Psycopg: http://initd.org/psycopg/
.. _psycogreen: https://github.com/psycopg/psycogreen/


@ -106,9 +106,9 @@ threads. However `a work has been started
Why I don't see any logs in the console? Why I don't see any logs in the console?
---------------------------------------- ----------------------------------------
In version R19, Gunicorn doesn't log by default in the console. In version 19.0, Gunicorn doesn't log by default in the console.
To watch the logs in the console you need to use the option ``--log-file=-``. To watch the logs in the console you need to use the option ``--log-file=-``.
In version R20, Gunicorn logs to the console by default again. In version 19.2, Gunicorn logs to the console by default again.
Kernel Parameters Kernel Parameters
================= =================
@ -129,9 +129,13 @@ One of the first settings that usually needs to be bumped is the maximum number
of open file descriptors for a given process. For the confused out there, of open file descriptors for a given process. For the confused out there,
remember that Unices treat sockets as files. remember that Unices treat sockets as files.
:: .. warning:: ``sudo ulimit`` may not work
$ sudo ulimit -n 2048 Since non-privileged users are not able to raise the limit, you should
first switch to the root user, increase the limit, and then run Gunicorn. ``sudo
ulimit`` does not take effect because ``ulimit`` is a shell builtin.
Try systemd's service unit file, or an init script which runs as root.
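Note that a process can raise its own *soft* descriptor limit up to the hard limit without root; only raising the hard limit requires privileges. A sketch using Python's standard ``resource`` module (which could, for example, run from a Gunicorn config file):

```python
# Raise the soft RLIMIT_NOFILE up to the current hard limit for this process.
# Raising the hard limit itself still requires root.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```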
How can I increase the maximum socket backlog? How can I increase the maximum socket backlog?
---------------------------------------------- ----------------------------------------------
@ -205,3 +209,30 @@ Check the result::
tmpfs 65536 0 65536 0% /mem tmpfs 65536 0 65536 0% /mem
Now you can set ``--worker-tmp-dir /mem``. Now you can set ``--worker-tmp-dir /mem``.
Why are Workers Silently Killed?
--------------------------------------------------------------
A sometimes subtle problem to debug is when a worker process is killed and there
is little logging information about what happened.
If you use a reverse proxy like NGINX you might see 502 returned to a client.
In the gunicorn logs you might simply see ``[35] [INFO] Booting worker with pid: 35``
It's completely normal for workers to stop and start, for example due to the
max-requests setting. Ordinarily Gunicorn will capture any signals and log something.
This particular failure case is usually due to a SIGKILL being received; since it's
not possible to catch this signal, silence is a common side effect. A common
cause of SIGKILL is the OOM killer terminating a process due to a low-memory condition.
This is increasingly common in container deployments where memory limits are enforced
by cgroups; you'll usually see evidence of this in dmesg::
dmesg | grep gunicorn
Memory cgroup out of memory: Kill process 24534 (gunicorn) score 1506 or sacrifice child
Killed process 24534 (gunicorn) total-vm:1016648kB, anon-rss:550160kB, file-rss:25824kB, shmem-rss:0kB
In these instances adjusting the memory limit is usually your best bet; it's also possible
to configure the OOM killer not to send SIGKILL by default.
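The reason SIGKILL deaths are silent can be shown in a few lines: handlers can be registered for signals like SIGTERM, but the kernel refuses to install a handler for SIGKILL, so no process can log its own OOM kill:

```python
import signal

def on_term(signum, frame):
    print("worker shutting down cleanly")  # a server can log this

signal.signal(signal.SIGTERM, on_term)  # catchable: termination gets logged

try:
    signal.signal(signal.SIGKILL, on_term)  # uncatchable by design
except (OSError, ValueError) as exc:
    print("SIGKILL cannot be caught:", exc)
```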


@ -7,7 +7,7 @@ Gunicorn - WSGI server
:Website: http://gunicorn.org :Website: http://gunicorn.org
:Source code: https://github.com/benoitc/gunicorn :Source code: https://github.com/benoitc/gunicorn
:Issue tracker: https://github.com/benoitc/gunicorn/issues :Issue tracker: https://github.com/benoitc/gunicorn/issues
:IRC: ``#gunicorn`` on Freenode :IRC: ``#gunicorn`` on Libera Chat
:Usage questions: https://github.com/benoitc/gunicorn/issues :Usage questions: https://github.com/benoitc/gunicorn/issues
Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. It's a pre-fork Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. It's a pre-fork
@ -23,7 +23,7 @@ Features
* Simple Python configuration * Simple Python configuration
* Multiple worker configurations * Multiple worker configurations
* Various server hooks for extensibility * Various server hooks for extensibility
* Compatible with Python 3.x >= 3.4 * Compatible with Python 3.x >= 3.5
Contents Contents


@ -4,7 +4,7 @@ Installation
.. highlight:: bash .. highlight:: bash
:Requirements: **Python 3.x >= 3.4** :Requirements: **Python 3.x >= 3.5**
To install the latest released version of Gunicorn:: To install the latest released version of Gunicorn::
@ -40,7 +40,7 @@ want to consider one of the alternate worker types.
$ pip install gunicorn[gevent] # Or, using extra $ pip install gunicorn[gevent] # Or, using extra
.. note:: .. note::
Both require ``greenlet``, which should get installed automatically, Both require ``greenlet``, which should get installed automatically.
If its installation fails, you probably need to install If its installation fails, you probably need to install
the Python headers. These headers are available in most package the Python headers. These headers are available in most package
managers. On Ubuntu the package name for ``apt-get`` is managers. On Ubuntu the package name for ``apt-get`` is
@ -52,10 +52,32 @@ want to consider one of the alternate worker types.
installed, this is the most likely reason. installed, this is the most likely reason.
Extra Packages
==============
Some Gunicorn options require additional packages. You can use the ``[extra]``
syntax to install these at the same time as Gunicorn.
Most extra packages are needed for alternate worker types. See the
`design docs`_ for more information on when you'll want to consider an
alternate worker type.
* ``gunicorn[eventlet]`` - Eventlet-based greenlets workers
* ``gunicorn[gevent]`` - Gevent-based greenlets workers
* ``gunicorn[gthread]`` - Threaded workers
* ``gunicorn[tornado]`` - Tornado-based workers, not recommended
If you are running more than one instance of Gunicorn, the :ref:`proc-name`
setting will help distinguish between them in tools like ``ps`` and ``top``.
* ``gunicorn[setproctitle]`` - Enables setting the process name
Multiple extras can be combined, like
``pip install gunicorn[gevent,setproctitle]``.
Debian GNU/Linux Debian GNU/Linux
================ ================
If you are using Debian GNU/Linux and it is recommended that you use If you are using Debian GNU/Linux it is recommended that you use
system packages to install Gunicorn except maybe when you want to use system packages to install Gunicorn except maybe when you want to use
different versions of Gunicorn with virtualenv. This has a number of different versions of Gunicorn with virtualenv. This has a number of
advantages: advantages:
@ -74,16 +96,43 @@ advantages:
rolled back in case of incompatibility. The package can also be purged rolled back in case of incompatibility. The package can also be purged
entirely from the system in seconds. entirely from the system in seconds.
stable ("stretch") stable ("buster")
------------------ ------------------
The version of Gunicorn in the Debian_ "stable" distribution is 19.6.0 (June The version of Gunicorn in the Debian_ "stable" distribution is 19.9.0
2017). You can install it using:: (December 2020). You can install it using::
$ sudo apt-get install gunicorn $ sudo apt-get install gunicorn3
You can also use the most recent version by using `Debian Backports`_. You can also use the most recent version 20.0.4 (December 2020) by using
First, copy the following line to your ``/etc/apt/sources.list``:: `Debian Backports`_. First, copy the following line to your
``/etc/apt/sources.list``::
deb http://ftp.debian.org/debian buster-backports main
Then, update your local package lists::
$ sudo apt-get update
You can then install the latest version using::
$ sudo apt-get -t buster-backports install gunicorn
oldstable ("stretch")
---------------------
While Debian releases newer than Stretch will give you Gunicorn with Python 3
support no matter whether you install the gunicorn or gunicorn3 package, for
Stretch you specifically have to install gunicorn3 to get Python 3 support.
The version of Gunicorn in the Debian_ "oldstable" distribution is 19.6.0
(December 2020). You can install it using::
$ sudo apt-get install gunicorn3
You can also use the most recent version 19.7.1 (December 2020) by using
`Debian Backports`_. First, copy the following line to your
``/etc/apt/sources.list``::
deb http://ftp.debian.org/debian stretch-backports main deb http://ftp.debian.org/debian stretch-backports main
@ -93,34 +142,13 @@ Then, update your local package lists::
You can then install the latest version using:: You can then install the latest version using::
$ sudo apt-get -t stretch-backports install gunicorn $ sudo apt-get -t stretch-backports install gunicorn3
oldstable ("jessie")
--------------------

The version of Gunicorn in the Debian_ "oldstable" distribution is 19.0 (June
2014). You can install it using::

    $ sudo apt-get install gunicorn

You can also use the most recent version by using `Debian Backports`_.
First, copy the following line to your ``/etc/apt/sources.list``::

    deb http://ftp.debian.org/debian jessie-backports main

Then, update your local package lists::

    $ sudo apt-get update

You can then install the latest version using::

    $ sudo apt-get -t jessie-backports install gunicorn

Testing ("bullseye") / Unstable ("sid")
---------------------------------------

"bullseye" and "sid" contain the latest released version of Gunicorn, 20.0.4
(December 2020). You can install it in the usual way::

    $ sudo apt-get install gunicorn
@ -128,8 +156,8 @@ install it in the usual way::
Ubuntu
======

Ubuntu_ 20.04 LTS (Focal Fossa) or later contains the Gunicorn package by
default, currently 20.0.4 (December 2020), so you can install it in the
usual way::

    $ sudo apt-get update
    $ sudo apt-get install gunicorn
@ -2,66 +2,45 @@
Changelog
=========

20.1.0 - 2021-02-12
===================

- document WEB_CONCURRENCY is set by, at least, Heroku
- capture peername from accept: avoid calls to ``getpeername`` by capturing
  the peer name returned by ``accept``
- log a warning when a worker was terminated due to a signal
- fix tornado usage with latest versions of Django
- add support for ``python -m gunicorn``
- fix systemd socket activation example
- allow setting the WSGI application in a config file using ``wsgi_app``
- document ``--timeout = 0``
- always close a connection when the number of requests exceeds the max requests
- disable keepalive during graceful shutdown
- kill tasks in the gthread workers during upgrade
- fix latency in gevent worker when accepting new requests
- fix file watcher: handle errors when a new worker reboots and ensure the
  list of files is kept
- document the default name and path of the configuration file
- document how environment variables impact configuration
- document the ``$PORT`` environment variable
- added milliseconds option to ``request_time`` in the access log
- added PIP requirements to be used for examples
- remove version from the Server header
- fix sendfile: use ``socket.sendfile`` instead of ``os.sendfile``
- reloader: use absolute paths to prevent ``InotifyError`` when a file is
  added to the working directory
- add ``--print-config`` option to print the resolved settings at startup
- remove the ``--log-dict-config`` CLI flag because it never had a working
  format (the ``logconfig_dict`` setting in configuration files continues to
  work)

** Breaking changes **

- minimum version is Python 3.5
- remove version from the Server header

** Others **

- miscellaneous changes in the code base to be a better citizen with Python 3
- remove dead code
- fix documentation generation

19.9.0 / 2018/07/03
===================

- fix: address a regression that prevented syslog support from working
  (:issue:`1668`, :pr:`1773`)
- fix: correctly set ``REMOTE_ADDR`` on versions of Python 3 affected by
  `Python Issue 30205 <https://bugs.python.org/issue30205>`_
  (:issue:`1755`, :pr:`1796`)
- fix: show zero response length correctly in access log (:pr:`1787`)
- fix: prevent raising :exc:`AttributeError` when ``--reload`` is not passed
  in case of a :exc:`SyntaxError` raised from the WSGI application
  (:issue:`1805`, :pr:`1806`)
- the internal module ``gunicorn.workers.async`` was renamed to
  ``gunicorn.workers.base_async`` since ``async`` is now a reserved word in
  Python 3.7 (:pr:`1527`)

19.8.1 / 2018/04/30
===================

- fix: secure scheme headers when bound to a unix socket
  (:issue:`1766`, :pr:`1767`)

19.8.0 / 2018/04/28
===================

- Eventlet 0.21.0 support (:issue:`1584`)
- Tornado 5 support (:issue:`1728`, :pr:`1752`)
- support watching additional files with ``--reload-extra-file``
(:pr:`1527`)
- support configuring logging with a dictionary with ``--logging-config-dict``
(:issue:`1087`, :pr:`1110`, :pr:`1602`)
- add support for the ``--config`` flag in the ``GUNICORN_CMD_ARGS`` environment
variable (:issue:`1576`, :pr:`1581`)
- disable ``SO_REUSEPORT`` by default and add the ``--reuse-port`` setting
(:issue:`1553`, :issue:`1603`, :pr:`1669`)
- fix: installing `inotify` on MacOS no longer breaks the reloader
(:issue:`1540`, :pr:`1541`)
- fix: do not throw ``TypeError`` when ``SO_REUSEPORT`` is not available
(:issue:`1501`, :pr:`1491`)
- fix: properly decode HTTP paths containing certain non-ASCII characters
(:issue:`1577`, :pr:`1578`)
- fix: remove whitespace when logging header values under gevent (:pr:`1607`)
- fix: close unlinked temporary files (:issue:`1327`, :pr:`1428`)
- fix: parse ``--umask=0`` correctly (:issue:`1622`, :pr:`1632`)
- fix: allow loading applications using relative file paths
(:issue:`1349`, :pr:`1481`)
- fix: force blocking mode on the gevent sockets (:issue:`880`, :pr:`1616`)
- fix: preserve leading `/` in request path (:issue:`1512`, :pr:`1511`)
- fix: forbid contradictory secure scheme headers
- fix: handle malformed basic authentication headers in access log
(:issue:`1683`, :pr:`1684`)
- fix: defer handling of ``USR1`` signal to a new greenlet under gevent
(:issue:`1645`, :pr:`1651`)
- fix: the threaded worker would sometimes close the wrong keep-alive
connection under Python 2 (:issue:`1698`, :pr:`1699`)
- fix: re-open log files on ``USR1`` signal using ``handler._open`` to
support subclasses of ``FileHandler`` (:issue:`1739`, :pr:`1742`)
- deprecation: the ``gaiohttp`` worker is deprecated, see the
:ref:`worker-class` documentation for more information
(:issue:`1338`, :pr:`1418`, :pr:`1569`)
History

@ -70,6 +49,10 @@ History

.. toctree::
    :titlesonly:

    2021-news
    2020-news
    2019-news
    2018-news
    2017-news
    2016-news
    2015-news
@ -78,3 +61,4 @@ History
    2012-news
    2011-news
    2010-news
@ -4,8 +4,9 @@ Running Gunicorn
.. highlight:: bash

You can run Gunicorn by using commands or integrate with popular frameworks
like Django, Pyramid, or TurboGears. For deploying Gunicorn in production see
:doc:`deploy`.
Commands
========
@ -20,12 +21,15 @@ gunicorn
Basic usage::

    $ gunicorn [OPTIONS] [WSGI_APP]

Where ``WSGI_APP`` is of the pattern ``$(MODULE_NAME):$(VARIABLE_NAME)``. The
module name can be a full dotted path. The variable name refers to a WSGI
callable that should be found in the specified module.

.. versionchanged:: 20.1.0
   ``WSGI_APP`` is optional if it is defined in a :ref:`config` file.
Example with the test app:

.. code-block:: python
@ -41,10 +45,31 @@ Example with the test app:
        start_response(status, response_headers)
        return iter([data])

You can now run the app with the following command:

.. code-block:: text

    $ gunicorn --workers=2 test:app
The variable name can also be a function call. In that case the name
will be imported from the module, then called to get the application
object. This is commonly referred to as the "application factory"
pattern.

.. code-block:: python

    def create_app():
        app = FrameworkApp()
        ...
        return app

.. code-block:: text

    $ gunicorn --workers=2 'test:create_app()'

Positional and keyword arguments can also be passed, but it is
recommended to load configuration from environment variables rather than
the command line.
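For instance, a self-contained sketch of the factory form above (``create_app`` and its ``greeting`` keyword are made up for illustration; a plain WSGI callable stands in for a framework application object):

```python
def create_app(greeting="Hello"):
    # Hypothetical factory: builds and returns the WSGI callable.
    # On the command line this would be run as:
    #   $ gunicorn --workers=2 'test:create_app(greeting="Hi")'
    message = ("%s, World!\n" % greeting).encode("utf-8")

    def app(environ, start_response):
        response_headers = [
            ("Content-Type", "text/plain"),
            ("Content-Length", str(len(message))),
        ]
        start_response("200 OK", response_headers)
        return iter([message])

    return app
```

Because the factory runs before any request is handled, the arguments are fixed for the lifetime of the workers.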
Commonly Used Arguments
^^^^^^^^^^^^^^^^^^^^^^^
@ -52,8 +77,8 @@ Commonly Used Arguments
* ``-c CONFIG, --config=CONFIG`` - Specify a config file in the form
  ``$(PATH)``, ``file:$(PATH)``, or ``python:$(MODULE_NAME)``.
* ``-b BIND, --bind=BIND`` - Specify a server socket to bind. Server sockets
  can be any of ``$(HOST)``, ``$(HOST):$(PORT)``, ``fd://$(FD)``, or
  ``unix:$(PATH)``. An IP is a valid ``$(HOST)``.
* ``-w WORKERS, --workers=WORKERS`` - The number of worker processes. This
  number should generally be between 2-4 workers per core in the server.
  Check the :ref:`faq` for ideas on tuning this parameter.
@ -61,7 +86,7 @@ Commonly Used Arguments
  to run. You'll definitely want to read the production page for the
  implications of this parameter. You can set this to ``$(NAME)``
  where ``$(NAME)`` is one of ``sync``, ``eventlet``, ``gevent``,
  ``tornado``, ``gthread``.
  ``sync`` is the default. See the :ref:`worker-class` documentation for more
  information.
* ``-n APP_NAME, --name=APP_NAME`` - If setproctitle_ is installed you can
@ -78,7 +103,7 @@ See :ref:`configuration` and :ref:`settings` for detailed usage.
Integration
===========

Gunicorn also provides integration for Django and Paste Deploy applications.
Django
------
@ -104,13 +129,40 @@ option::
    $ gunicorn --env DJANGO_SETTINGS_MODULE=myproject.settings myproject.wsgi
Paste Deployment
----------------

Frameworks such as Pyramid and Turbogears are typically configured using Paste
Deployment configuration files. If you would like to use these files with
Gunicorn, there are two approaches.
As a server runner, Gunicorn can serve your application using the commands from
your framework, such as ``pserve`` or ``gearbox``. To use Gunicorn with these
commands, specify it as a server in your configuration file:

.. code-block:: ini

    [server:main]
    use = egg:gunicorn#main
    host = 127.0.0.1
    port = 8080
    workers = 3
This approach is the quickest way to get started with Gunicorn, but there are
some limitations. Gunicorn will have no control over how the application is
loaded, so settings such as reload_ will have no effect and Gunicorn will be
unable to hot upgrade a running application. Using the daemon_ option may
confuse your command line tool. Instead, use the built-in support for these
features provided by that tool. For example, run ``pserve --reload`` instead of
specifying ``reload = True`` in the server configuration block. For advanced
configuration of Gunicorn, such as `Server Hooks`_, specifying a Gunicorn
configuration file using the ``config`` key is supported.
To use the full power of Gunicorn's reloading and hot code upgrades, use the
`paste option`_ to run your application instead. When used this way, Gunicorn
will use the application defined by the PasteDeploy configuration file, but
Gunicorn will not use any server configuration defined in the file. Instead,
`configure gunicorn`_.
For example::
@ -120,4 +172,13 @@ Or use a different application::
    $ gunicorn --paste development.ini#admin -b :8080 --chdir /path/to/project
With both approaches, Gunicorn will use any loggers section found in the Paste
Deployment configuration file, unless instructed otherwise by specifying
additional `logging settings`_.
.. _reload: http://docs.gunicorn.org/en/latest/settings.html#reload
.. _daemon: http://docs.gunicorn.org/en/latest/settings.html#daemon
.. _Server Hooks: http://docs.gunicorn.org/en/latest/settings.html#server-hooks
.. _paste option: http://docs.gunicorn.org/en/latest/settings.html#paste
.. _configure gunicorn: http://docs.gunicorn.org/en/latest/configure.html
.. _logging settings: http://docs.gunicorn.org/en/latest/settings.html#logging
File diff suppressed because it is too large

examples/deep/test.py (new file, 27 lines)
@ -0,0 +1,27 @@
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
#
# Example code from Eventlet sources
from wsgiref.validate import validator
from gunicorn import __version__
@validator
def app(environ, start_response):
    """Simplest possible application object"""
    data = b'Hello, World!\n'
    status = '200 OK'
    response_headers = [
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(data))),
        ('X-Gunicorn-Version', __version__),
        ('Foo', 'B\u00e5r'),  # Foo: Bår
    ]
    start_response(status, response_headers)
    return iter([data])
@ -5,12 +5,9 @@
#
# Example code from Eventlet sources

from gunicorn import __version__


def app(environ, start_response):
    """Simplest possible application object"""
@ -24,8 +21,7 @@ def app(environ, start_response):
    response_headers = [
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(data))),
        ('X-Gunicorn-Version', __version__)
    ]
    start_response(status, response_headers)
    return iter([data])
@ -10,7 +10,7 @@ def child_process(queue):
class GunicornSubProcessTestMiddleware(object):
    def __init__(self):
        super().__init__()
        self.queue = Queue()
        self.process = Process(target=child_process, args=(self.queue,))
        self.process.start()
@ -12,7 +12,7 @@ class SimpleTest(TestCase):
        """
        Tests that 1 + 1 always equals 2.
        """
        self.assertEqual(1 + 1, 2)

__test__ = {"doctest": """
Another way to test that 1 + 1 is equal to 2.
@ -0,0 +1,5 @@
-r requirements_flaskapp.txt
-r requirements_cherryapp.txt
-r requirements_pyramidapp.txt
-r requirements_tornadoapp.txt
-r requirements_webpyapp.txt
@ -0,0 +1 @@
cherrypy
@ -0,0 +1 @@
flask
@ -0,0 +1 @@
pyramid
@ -0,0 +1 @@
tornado<6
@ -0,0 +1 @@
web-py
@ -13,4 +13,4 @@ def app(environ, start_response):
    log.info("Hello Info!")
    log.warn("Hello Warn!")
    log.error("Hello Error!")
    return [b"Hello World!\n"]
@ -9,7 +9,7 @@
#
# Launch a server with the app in a terminal
#
# $ gunicorn -w3 readline_app:app
#
# Then in another terminal launch the following command:
#
@ -27,8 +27,7 @@ def app(environ, start_response):
    response_headers = [
        ('Content-type', 'text/plain'),
        ('Transfer-Encoding', "chunked"),
        ('X-Gunicorn-Version', __version__)
    ]
    start_response(status, response_headers)
@ -42,4 +41,4 @@ def app(environ, start_response):
        print(line)
        lines.append(line)
    return iter(lines)
@ -35,7 +35,7 @@ class StandaloneApplication(gunicorn.app.base.BaseApplication):
    def __init__(self, app, options=None):
        self.options = options or {}
        self.application = app
        super().__init__()

    def load_config(self):
        config = {key: value for key, value in self.options.items()
@ -21,7 +21,7 @@ def app(environ, start_response):
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(data))),
        ('X-Gunicorn-Version', __version__),
        ('Foo', 'B\u00e5r'),  # Foo: Bår
    ]
    start_response(status, response_headers)
    return iter([data])
@ -250,7 +250,7 @@ class WebSocket(object):
            data = struct.unpack('<I', buf[f['hlen']:f['hlen']+4])[0]
            of1 = f['hlen']+4
            b = ''
            for i in range(0, int(f['length']/4)):
                mask = struct.unpack('<I', buf[of1+4*i:of1+4*(i+1)])[0]
                b += struct.pack('I', data ^ mask)
@ -292,10 +292,8 @@ class WebSocket(object):
        As per the dataframing section (5.3) for the websocket spec
        """
        if isinstance(message, str):
            message = message.encode('utf-8')
        packed = "\x00%s\xFF" % message
        return packed
@ -353,7 +351,7 @@ class WebSocket(object):
    def send(self, message):
        """Send a message to the browser.

        *message* should be convertible to a string; unicode objects should be
        encodable as utf-8. Raises socket.error with errno of 32
        (broken pipe) if the socket has already been closed by the client."""
        if self.version in ['7', '8', '13']:
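The masking loop above processes the buffer in 4-byte words via ``struct``; an equivalent byte-wise sketch of the same XOR (un)masking, per the dataframing section of the WebSocket spec (the ``unmask`` helper name is illustrative):

```python
def unmask(masking_key, payload):
    # XOR each payload byte with the repeating 4-byte masking key,
    # as described in the dataframing section of the WebSocket spec.
    return bytes(b ^ masking_key[i % 4] for i, b in enumerate(payload))

# Masking is symmetric: applying the same key twice restores the input.
key = b'\x12\x34\x56\x78'
masked = unmask(key, b'Hello')
assert unmask(key, masked) == b'Hello'
```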
@ -251,7 +251,7 @@ class WebSocket(object):
            data = struct.unpack('<I', buf[f['hlen']:f['hlen']+4])[0]
            of1 = f['hlen']+4
            b = ''
            for i in range(0, int(f['length']/4)):
                mask = struct.unpack('<I', buf[of1+4*i:of1+4*(i+1)])[0]
                b += struct.pack('I', data ^ mask)
@ -293,10 +293,8 @@ class WebSocket(object):
        As per the dataframing section (5.3) for the websocket spec
        """
        if isinstance(message, str):
            message = message.encode('utf-8')
        packed = "\x00%s\xFF" % message
        return packed
@ -354,7 +352,7 @@ class WebSocket(object):
    def send(self, message):
        """Send a message to the browser.

        *message* should be convertible to a string; unicode objects should be
        encodable as utf-8. Raises socket.error with errno of 32
        (broken pipe) if the socket has already been closed by the client."""
        if self.version in ['7', '8', '13']:
@ -8,7 +8,7 @@ max_mem = 100000

class MemoryWatch(threading.Thread):
    def __init__(self, server, max_mem):
        super().__init__()
        self.daemon = True
        self.server = server
        self.max_mem = max_mem
@ -3,6 +3,7 @@
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.

version_info = (20, 1, 0)
__version__ = ".".join([str(v) for v in version_info])

SERVER = "gunicorn"
SERVER_SOFTWARE = "%s/%s" % (SERVER, __version__)
gunicorn/__main__.py (new file, 7 lines)
@ -0,0 +1,7 @@
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
from gunicorn.app.wsgiapp import run
run()
@ -1,65 +0,0 @@
def _check_if_pyc(fname):
    """Return True if the extension is .pyc, False if .py
    and None if otherwise"""
    from imp import find_module
    from os.path import realpath, dirname, basename, splitext

    # Normalize the file-path for the find_module()
    filepath = realpath(fname)
    dirpath = dirname(filepath)
    module_name = splitext(basename(filepath))[0]

    # Validate and fetch
    try:
        fileobj, fullpath, (_, _, pytype) = find_module(module_name, [dirpath])
    except ImportError:
        raise IOError("Cannot find config file. "
                      "Path maybe incorrect! : {0}".format(filepath))
    return pytype, fileobj, fullpath


def _get_codeobj(pyfile):
    """ Returns the code object, given a python file """
    from imp import PY_COMPILED, PY_SOURCE

    result, fileobj, fullpath = _check_if_pyc(pyfile)

    # WARNING:
    # fp.read() can blowup if the module is extremely large file.
    # Lookout for overflow errors.
    try:
        data = fileobj.read()
    finally:
        fileobj.close()

    # This is a .pyc file. Treat accordingly.
    if result is PY_COMPILED:
        # .pyc format is as follows:
        # 0 - 4 bytes: Magic number, which changes with each create of .pyc file.
        # First 2 bytes change with each marshal of .pyc file. Last 2 bytes is "\r\n".
        # 4 - 8 bytes: Datetime value, when the .py was last changed.
        # 8 - EOF: Marshalled code object data.
        # So to get code object, just read the 8th byte onwards till EOF, and
        # UN-marshal it.
        import marshal
        code_obj = marshal.loads(data[8:])
    elif result is PY_SOURCE:
        # This is a .py file.
        code_obj = compile(data, fullpath, 'exec')
    else:
        # Unsupported extension
        raise Exception("Input file is unknown format: {0}".format(fullpath))

    # Return code object
    return code_obj


def execfile_(fname, *args):
    if fname.endswith(".pyc"):
        code = _get_codeobj(fname)
    else:
        with open(fname, 'rb') as file:
            code = compile(file.read(), fname, 'exec')
    return exec(code, *args)
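An aside on the ``.pyc`` layout described in the comments above: the header has since grown, and on CPython 3.7+ (PEP 552) it is 16 bytes, so the marshalled code object starts at offset 16 rather than 8. A minimal sketch under that assumption:

```python
import marshal
import os
import py_compile
import tempfile

# Compile a trivial module, then recover its code object from the .pyc.
# Assumption: CPython 3.7+ (PEP 552), where the .pyc header is 16 bytes
# (4-byte magic, 4-byte flags, then mtime+size or a source hash), so the
# marshalled code object starts at offset 16.
src = os.path.join(tempfile.mkdtemp(), "mod.py")
with open(src, "w") as f:
    f.write("ANSWER = 42\n")

pyc = py_compile.compile(src, cfile=src + "c")

with open(pyc, "rb") as f:
    data = f.read()

code = marshal.loads(data[16:])
namespace = {}
exec(code, namespace)
```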
@ -2,16 +2,18 @@
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.

import importlib.util
import importlib.machinery
import os
import sys
import traceback

from gunicorn import util
from gunicorn.arbiter import Arbiter
from gunicorn.config import Config, get_default_config_file
from gunicorn import debug


class BaseApplication(object):
    """
    An application interface for configuring and loading
@ -93,25 +95,30 @@ class Application(BaseApplication):
        if not os.path.exists(filename):
            raise RuntimeError("%r doesn't exist" % filename)

        ext = os.path.splitext(filename)[1]

        try:
            module_name = '__config__'
            if ext in [".py", ".pyc"]:
                spec = importlib.util.spec_from_file_location(module_name, filename)
            else:
                msg = "configuration file should have a valid Python extension.\n"
                util.warn(msg)
                loader_ = importlib.machinery.SourceFileLoader(module_name, filename)
                spec = importlib.util.spec_from_file_location(module_name, filename, loader=loader_)
            mod = importlib.util.module_from_spec(spec)
            sys.modules[module_name] = mod
            spec.loader.exec_module(mod)
        except Exception:
            print("Failed to read config file: %s" % filename, file=sys.stderr)
            traceback.print_exc()
            sys.stderr.flush()
            sys.exit(1)

        return vars(mod)

    def get_config_from_module_name(self, module_name):
        return vars(importlib.import_module(module_name))
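As a standalone sketch of the ``importlib`` pattern used by the new code above (the file name and settings are made up; any extension can be executed as Python by forcing a ``SourceFileLoader``):

```python
import importlib.machinery
import importlib.util
import os
import sys
import tempfile

# Hypothetical config file; the non-.py extension is forced through
# SourceFileLoader, mirroring the fallback branch above.
path = os.path.join(tempfile.mkdtemp(), "gunicorn.conf")
with open(path, "w") as f:
    f.write("workers = 4\nbind = '127.0.0.1:8000'\n")

module_name = '__config__'
loader = importlib.machinery.SourceFileLoader(module_name, path)
spec = importlib.util.spec_from_file_location(module_name, path, loader=loader)
mod = importlib.util.module_from_spec(spec)
sys.modules[module_name] = mod
spec.loader.exec_module(mod)

# vars(mod) now exposes the settings defined in the file.
settings = vars(mod)
```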
    def load_config_from_module_name_or_filename(self, location):
        """
@ -135,7 +142,7 @@ class Application(BaseApplication):
                continue
            try:
                self.cfg.set(k.lower(), v)
            except Exception:
                print("Invalid value for %s: %s\n" % (k, v), file=sys.stderr)
                sys.stderr.flush()
                raise
@ -193,10 +200,13 @@ class Application(BaseApplication):
        self.chdir()

    def run(self):
        if self.cfg.print_config:
            print(self.cfg)

        if self.cfg.print_config or self.cfg.check_config:
            try:
                self.load()
            except Exception:
                msg = "\nError while loading the application:\n"
                print(msg, file=sys.stderr)
                traceback.print_exc()
@ -208,6 +218,11 @@ class Application(BaseApplication):
            debug.spew()

        if self.cfg.daemon:
            if os.environ.get('NOTIFY_SOCKET'):
                msg = "Warning: you shouldn't specify `daemon = True`" \
                      " when launching by systemd with `Type = notify`"
                print(msg, file=sys.stderr, flush=True)

            util.daemonize(self.cfg.enable_stdio_inheritance)

        # set python paths
@ -218,4 +233,4 @@ class Application(BaseApplication):
            if pythonpath not in sys.path:
                sys.path.insert(0, pythonpath)

        super().run()
@ -3,206 +3,73 @@
# This file is part of gunicorn released under the MIT license. # This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information. # See the NOTICE for more information.
# pylint: skip-file import configparser
import os import os
import pkg_resources
import sys
try: from paste.deploy import loadapp
import configparser as ConfigParser
except ImportError:
import ConfigParser
from paste.deploy import loadapp, loadwsgi from gunicorn.app.wsgiapp import WSGIApplication
SERVER = loadwsgi.SERVER from gunicorn.config import get_default_config_file
from gunicorn.app.base import Application
from gunicorn.config import Config, get_default_config_file
from gunicorn import util
def _has_logging_config(paste_file): def get_wsgi_app(config_uri, name=None, defaults=None):
cfg_parser = ConfigParser.ConfigParser() if ':' not in config_uri:
cfg_parser.read([paste_file]) config_uri = "config:%s" % config_uri
return cfg_parser.has_section('loggers')
return loadapp(
config_uri,
name=name,
relative_to=os.getcwd(),
global_conf=defaults,
)
def paste_config(gconfig, config_url, relative_to, global_conf=None): def has_logging_config(config_file):
# add entry to pkg_resources parser = configparser.ConfigParser()
sys.path.insert(0, relative_to) parser.read([config_file])
pkg_resources.working_set.add_entry(relative_to) return parser.has_section('loggers')
config_url = config_url.split('#')[0]
cx = loadwsgi.loadcontext(SERVER, config_url, relative_to=relative_to,
global_conf=global_conf)
gc, lc = cx.global_conf.copy(), cx.local_conf.copy()
cfg = {}
host, port = lc.pop('host', ''), lc.pop('port', '') def serve(app, global_conf, **local_conf):
"""\
A Paste Deployment server runner.
Example configuration:
[server:main]
use = egg:gunicorn#main
host = 127.0.0.1
port = 5000
"""
config_file = global_conf['__file__']
gunicorn_config_file = local_conf.pop('config', None)
host = local_conf.pop('host', '')
port = local_conf.pop('port', '')
if host and port: if host and port:
cfg['bind'] = '%s:%s' % (host, port) local_conf['bind'] = '%s:%s' % (host, port)
elif host: elif host:
cfg['bind'] = host.split(',') local_conf['bind'] = host.split(',')
cfg['default_proc_name'] = gc.get('__file__') class PasterServerApplication(WSGIApplication):
def load_config(self):
self.cfg.set("default_proc_name", config_file)
# init logging configuration if has_logging_config(config_file):
config_file = config_url.split(':')[1] self.cfg.set("logconfig", config_file)
if _has_logging_config(config_file):
cfg.setdefault('logconfig', config_file)
for k, v in gc.items(): if gunicorn_config_file:
if k not in gconfig.settings: self.load_config_from_file(gunicorn_config_file)
continue else:
cfg[k] = v default_gunicorn_config_file = get_default_config_file()
if default_gunicorn_config_file is not None:
self.load_config_from_file(default_gunicorn_config_file)
for k, v in lc.items(): for k, v in local_conf.items():
if k not in gconfig.settings: if v is not None:
continue
cfg[k] = v
return cfg
def load_pasteapp(config_url, relative_to, global_conf=None):
return loadapp(config_url, relative_to=relative_to,
global_conf=global_conf)
class PasterBaseApplication(Application):
gcfg = None
def app_config(self):
return paste_config(self.cfg, self.cfgurl, self.relpath,
global_conf=self.gcfg)
def load_config(self):
super(PasterBaseApplication, self).load_config()
# reload logging conf
if hasattr(self, "cfgfname"):
parser = ConfigParser.ConfigParser()
parser.read([self.cfgfname])
if parser.has_section('loggers'):
from logging.config import fileConfig
config_file = os.path.abspath(self.cfgfname)
fileConfig(config_file, dict(__file__=config_file,
here=os.path.dirname(config_file)))
class PasterApplication(PasterBaseApplication):
def init(self, parser, opts, args):
if len(args) != 1:
parser.error("No application name specified.")
cwd = util.getcwd()
cfgfname = os.path.normpath(os.path.join(cwd, args[0]))
cfgfname = os.path.abspath(cfgfname)
if not os.path.exists(cfgfname):
parser.error("Config file not found: %s" % cfgfname)
self.cfgurl = 'config:%s' % cfgfname
self.relpath = os.path.dirname(cfgfname)
self.cfgfname = cfgfname
sys.path.insert(0, self.relpath)
pkg_resources.working_set.add_entry(self.relpath)
return self.app_config()
def load(self):
# chdir to the configured path before loading,
# default is the current dir
os.chdir(self.cfg.chdir)
return load_pasteapp(self.cfgurl, self.relpath, global_conf=self.gcfg)
class PasterServerApplication(PasterBaseApplication):
def __init__(self, app, gcfg=None, host="127.0.0.1", port=None, **kwargs):
# pylint: disable=super-init-not-called
self.cfg = Config()
self.gcfg = gcfg # need to hold this for app_config
self.app = app
self.callable = None
gcfg = gcfg or {}
cfgfname = gcfg.get("__file__")
if cfgfname is not None:
self.cfgurl = 'config:%s' % cfgfname
self.relpath = os.path.dirname(cfgfname)
self.cfgfname = cfgfname
cfg = kwargs.copy()
if port and not host.startswith("unix:"):
bind = "%s:%s" % (host, port)
else:
bind = host
cfg["bind"] = bind.split(',')
if gcfg:
for k, v in gcfg.items():
cfg[k] = v
cfg["default_proc_name"] = cfg['__file__']
try:
for k, v in cfg.items():
if k.lower() in self.cfg.settings and v is not None:
-                    self.cfg.set(k.lower(), v)
-        except Exception as e:
-            print("\nConfig error: %s" % str(e), file=sys.stderr)
-            sys.stderr.flush()
-            sys.exit(1)
-
-        if cfg.get("config"):
-            self.load_config_from_file(cfg["config"])
-        else:
-            default_config = get_default_config_file()
-            if default_config is not None:
-                self.load_config_from_file(default_config)
-
-    def load(self):
-        return self.app
+                    self.cfg.set(k.lower(), v)
+
+        def load(self):
+            return app
+
+    PasterServerApplication().run()
def run():
"""\
The ``gunicorn_paster`` command for launching Paster compatible
applications like Pylons or Turbogears2
"""
util.warn("""This command is deprecated.
You should now use the `--paste` option. Ex.:
gunicorn --paste development.ini
""")
from gunicorn.app.pasterapp import PasterApplication
PasterApplication("%(prog)s [OPTIONS] pasteconfig.ini").run()
def paste_server(app, gcfg=None, host="127.0.0.1", port=None, **kwargs):
"""\
A paster server.
    The entry point in your paster ini file should look like this:
[server:main]
use = egg:gunicorn#main
host = 127.0.0.1
port = 5000
"""
util.warn("""This command is deprecated.
You should now use the `--paste` option. Ex.:
gunicorn --paste development.ini
""")
from gunicorn.app.pasterapp import PasterServerApplication
PasterServerApplication(app, gcfg=gcfg, host=host, port=port, **kwargs).run()


@ -12,38 +12,44 @@ from gunicorn import util
class WSGIApplication(Application):
    def init(self, parser, opts, args):
+        self.app_uri = None
+
        if opts.paste:
-            app_name = 'main'
-            path = opts.paste
-            if '#' in path:
-                path, app_name = path.split('#')
-            path = os.path.abspath(os.path.normpath(
-                os.path.join(util.getcwd(), path)))
-            if not os.path.exists(path):
-                raise ConfigError("%r not found" % path)
-
-            # paste application, load the config
-            self.cfgurl = 'config:%s#%s' % (path, app_name)
-            self.relpath = os.path.dirname(path)
-
-            from .pasterapp import paste_config
-            return paste_config(self.cfg, self.cfgurl, self.relpath)
-
-        if not args:
-            parser.error("No application module specified.")
-
-        self.cfg.set("default_proc_name", args[0])
-        self.app_uri = args[0]
+            from .pasterapp import has_logging_config
+
+            config_uri = os.path.abspath(opts.paste)
+            config_file = config_uri.split('#')[0]
+
+            if not os.path.exists(config_file):
+                raise ConfigError("%r not found" % config_file)
+
+            self.cfg.set("default_proc_name", config_file)
+            self.app_uri = config_uri
+
+            if has_logging_config(config_file):
+                self.cfg.set("logconfig", config_file)
+
+            return
+
+        if len(args) > 0:
+            self.cfg.set("default_proc_name", args[0])
+            self.app_uri = args[0]
+
+    def load_config(self):
+        super().load_config()
+
+        if self.app_uri is None:
+            if self.cfg.wsgi_app is not None:
+                self.app_uri = self.cfg.wsgi_app
+            else:
+                raise ConfigError("No application module specified.")

    def load_wsgiapp(self):
-        # load the app
        return util.import_app(self.app_uri)

    def load_pasteapp(self):
-        # load the paste app
-        from .pasterapp import load_pasteapp
-        return load_pasteapp(self.cfgurl, self.relpath,
-                             global_conf=self.cfg.paste_global_conf)
+        from .pasterapp import get_wsgi_app
+        return get_wsgi_app(self.app_uri, defaults=self.cfg.paste_global_conf)

    def load(self):
        if self.cfg.paste is not None:


@ -154,10 +154,11 @@ class Arbiter(object):
        self.LISTENERS = sock.create_sockets(self.cfg, self.log, fds)

-        listeners_str = ",".join([str(l) for l in self.LISTENERS])
+        listeners_str = ",".join([str(lnr) for lnr in self.LISTENERS])
        self.log.debug("Arbiter booted")
        self.log.info("Listening at: %s (%s)", listeners_str, self.pid)
        self.log.info("Using worker: %s", self.cfg.worker_class_str)
+        systemd.sd_notify("READY=1\nSTATUS=Gunicorn arbiter booted", self.log)

        # check worker class requirements
        if hasattr(self.worker_class, "check_config"):
@ -222,17 +223,15 @@ class Arbiter(object):
                self.log.info("Handling signal: %s", signame)
                handler()
                self.wakeup()
-            except StopIteration:
-                self.halt()
-            except KeyboardInterrupt:
+            except (StopIteration, KeyboardInterrupt):
                self.halt()
            except HaltServer as inst:
                self.halt(reason=inst.reason, exit_status=inst.exit_status)
            except SystemExit:
                raise
            except Exception:
-                self.log.info("Unhandled exception in main loop",
-                              exc_info=True)
+                self.log.error("Unhandled exception in main loop",
+                               exc_info=True)
                self.stop(False)
                if self.pidfile is not None:
                    self.pidfile.unlink()
@ -296,8 +295,8 @@ class Arbiter(object):
    def handle_usr2(self):
        """\
        SIGUSR2 handling.
-        Creates a new master/worker set as a slave of the current
-        master without affecting old workers. Use this to do live
+        Creates a new arbiter/worker set as a fork of the current
+        arbiter without affecting old workers. Use this to do live
        deployment with the ability to backout a change.
        """
        self.reexec()
@ -422,7 +421,7 @@ class Arbiter(object):
            environ['LISTEN_FDS'] = str(len(self.LISTENERS))
        else:
            environ['GUNICORN_FD'] = ','.join(
-                str(l.fileno()) for l in self.LISTENERS)
+                str(lnr.fileno()) for lnr in self.LISTENERS)

        os.chdir(self.START_CTX['cwd'])
@ -455,11 +454,11 @@ class Arbiter(object):
        # do we need to change listener ?
        if old_address != self.cfg.address:
            # close all listeners
-            for l in self.LISTENERS:
-                l.close()
+            for lnr in self.LISTENERS:
+                lnr.close()
            # init new listeners
            self.LISTENERS = sock.create_sockets(self.cfg, self.log)
-            listeners_str = ",".join([str(l) for l in self.LISTENERS])
+            listeners_str = ",".join([str(lnr) for lnr in self.LISTENERS])
            self.log.info("Listening at: %s", listeners_str)

        # do some actions on reload
@ -591,7 +590,7 @@ class Arbiter(object):
            print("%s" % e, file=sys.stderr)
            sys.stderr.flush()
            sys.exit(self.APP_LOAD_ERROR)
-        except:
+        except Exception:
            self.log.exception("Exception in worker process")
            if not worker.booted:
                sys.exit(self.WORKER_BOOT_ERROR)
@ -601,9 +600,9 @@ class Arbiter(object):
        try:
            worker.tmp.close()
            self.cfg.worker_exit(self, worker)
-        except:
+        except Exception:
            self.log.warning("Exception during worker exit:\n%s",
                             traceback.format_exc())

    def spawn_workers(self):
        """\
"""\ """\


@ -51,6 +51,16 @@ class Config(object):
        self.prog = prog or os.path.basename(sys.argv[0])
        self.env_orig = os.environ.copy()
def __str__(self):
lines = []
kmax = max(len(k) for k in self.settings)
for k in sorted(self.settings):
v = self.settings[k].value
if callable(v):
v = "<{}()>".format(v.__qualname__)
lines.append("{k:{kmax}} = {v}".format(k=k, v=v, kmax=kmax))
return "\n".join(lines)
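The aligned dump produced by ``__str__`` above can be sketched in isolation; ``dump_settings`` is a hypothetical helper over a plain dict, not gunicorn API:

```python
def dump_settings(settings):
    """Render a name -> value mapping with an aligned '=' column,
    the same formatting idea used by Config.__str__ above."""
    kmax = max(len(k) for k in settings)
    lines = []
    for k in sorted(settings):
        v = settings[k]
        if callable(v):
            v = "<{}()>".format(v.__qualname__)
        lines.append("{k:{kmax}} = {v}".format(k=k, v=v, kmax=kmax))
    return "\n".join(lines)
```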
    def __getattr__(self, name):
        if name not in self.settings:
            raise AttributeError("No configuration setting for: %s" % name)
@ -59,7 +69,7 @@ class Config(object):
    def __setattr__(self, name, value):
        if name != "settings" and name in self.settings:
            raise AttributeError("Invalid access!")
-        super(Config, self).__setattr__(name, value)
+        super().__setattr__(name, value)

    def set(self, name, value):
        if name not in self.settings:
@ -78,9 +88,9 @@ class Config(object):
        }
        parser = argparse.ArgumentParser(**kwargs)
        parser.add_argument("-v", "--version",
                            action="version", default=argparse.SUPPRESS,
                            version="%(prog)s (version " + __version__ + ")\n",
                            help="show program's version number and exit")
        parser.add_argument("args", nargs="*", help=argparse.SUPPRESS)

        keys = sorted(self.settings, key=self.settings.__getitem__)
@ -93,17 +103,17 @@ class Config(object):
    def worker_class_str(self):
        uri = self.settings['worker_class'].get()

-        ## are we using a threaded worker?
+        # are we using a threaded worker?
        is_sync = uri.endswith('SyncWorker') or uri == 'sync'
        if is_sync and self.threads > 1:
-            return "threads"
+            return "gthread"
        return uri
    @property
    def worker_class(self):
        uri = self.settings['worker_class'].get()

-        ## are we using a threaded worker?
+        # are we using a threaded worker?
        is_sync = uri.endswith('SyncWorker') or uri == 'sync'
        if is_sync and self.threads > 1:
            uri = "gunicorn.workers.gthread.ThreadWorker"
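The sync-plus-threads promotion above can be tried standalone; ``resolve_worker`` is a hypothetical helper mirroring the property, not gunicorn API:

```python
def resolve_worker(uri, threads):
    """A sync worker combined with threads > 1 is silently promoted
    to the gthread worker, as in Config.worker_class above."""
    is_sync = uri.endswith('SyncWorker') or uri == 'sync'
    if is_sync and threads > 1:
        return "gunicorn.workers.gthread.ThreadWorker"
    return uri
```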
@ -224,7 +234,7 @@ class Config(object):
class SettingMeta(type):
    def __new__(cls, name, bases, attrs):
-        super_new = super(SettingMeta, cls).__new__
+        super_new = super().__new__
        parents = [b for b in bases if isinstance(b, SettingMeta)]
        if not parents:
            return super_new(cls, name, bases, attrs)
@ -308,6 +318,15 @@ class Setting(object):
                self.order < other.order)
    __cmp__ = __lt__
def __repr__(self):
return "<%s.%s object at %x with value %r>" % (
self.__class__.__module__,
self.__class__.__name__,
id(self),
self.value,
)
Setting = SettingMeta('Setting', (Setting,), {})
@ -429,7 +448,7 @@ def validate_callable(arity):
            raise TypeError(str(e))
        except AttributeError:
            raise TypeError("Can not load '%s' from '%s'"
                            "" % (obj_name, mod_name))
        if not callable(val):
            raise TypeError("Value is not callable: %s" % val)
        if arity != -1 and arity != util.get_arity(val):
@ -515,7 +534,7 @@ def validate_reload_engine(val):
def get_default_config_file():
    config_path = os.path.join(os.path.abspath(os.getcwd()),
                               'gunicorn.conf.py')
    if os.path.exists(config_path):
        return config_path
    return None
@ -527,20 +546,37 @@ class ConfigFile(Setting):
    cli = ["-c", "--config"]
    meta = "CONFIG"
    validator = validate_string
-    default = None
+    default = "./gunicorn.conf.py"
    desc = """\
-        The Gunicorn config file.
+        :ref:`The Gunicorn config file<configuration_file>`.

        A string of the form ``PATH``, ``file:PATH``, or ``python:MODULE_NAME``.

        Only has an effect when specified on the command line or as part of an
        application specific configuration.

+        By default, a file named ``gunicorn.conf.py`` will be read from the same
+        directory where gunicorn is being run.
+
        .. versionchanged:: 19.4
           Loading the config from a Python module requires the ``python:``
           prefix.
        """
class WSGIApp(Setting):
name = "wsgi_app"
section = "Config File"
meta = "STRING"
validator = validate_string
default = None
desc = """\
A WSGI application path in pattern ``$(MODULE_NAME):$(VARIABLE_NAME)``.
.. versionadded:: 20.1.0
"""
class Bind(Setting):
    name = "bind"
    action = "append"
@ -557,8 +593,11 @@ class Bind(Setting):
    desc = """\
        The socket to bind.

-        A string of the form: ``HOST``, ``HOST:PORT``, ``unix:PATH``. An IP is
-        a valid ``HOST``.
+        A string of the form: ``HOST``, ``HOST:PORT``, ``unix:PATH``,
+        ``fd://FD``. An IP is a valid ``HOST``.
+
+        .. versionchanged:: 20.0
+           Support for ``fd://FD`` got added.

        Multiple addresses can be bound. ex.::
@ -566,6 +605,10 @@ class Bind(Setting):
        will bind the `test:app` application on localhost both on ipv6
        and ipv4 interfaces.
+
+        If the ``PORT`` environment variable is defined, the default
+        is ``['0.0.0.0:$PORT']``. If it is not defined, the default
+        is ``['127.0.0.1:8000']``.
        """
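A simplified sketch of the accepted ``bind`` forms; ``classify_bind`` and its fallback port are assumptions for illustration, gunicorn's real parsing lives in ``gunicorn.util.parse_address``:

```python
def classify_bind(addr):
    """Classify one --bind value into the documented forms:
    HOST, HOST:PORT, unix:PATH or fd://FD. Simplified sketch only."""
    if addr.startswith("unix:"):
        return ("unix", addr[len("unix:"):])
    if addr.startswith("fd://"):
        return ("fd", int(addr[len("fd://"):]))
    host, sep, port = addr.rpartition(":")
    if sep and port.isdigit():
        return ("tcp", (host, int(port)))
    # bare HOST: fall back to port 8000, an assumption for this sketch
    return ("tcp", (addr, 8000))
```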
@ -604,8 +647,9 @@ class Workers(Setting):
        You'll want to vary this a bit to find the best for your particular
        application's work load.

-        By default, the value of the ``WEB_CONCURRENCY`` environment variable.
-        If it is not defined, the default is ``1``.
+        By default, the value of the ``WEB_CONCURRENCY`` environment variable,
+        which is set by some Platform-as-a-Service providers such as Heroku. If
+        it is not defined, the default is ``1``.
        """
@ -622,32 +666,27 @@ class WorkerClass(Setting):
        The default class (``sync``) should handle most "normal" types of
        workloads. You'll want to read :doc:`design` for information on when
        you might want to choose one of the other worker classes. Required
-        libraries may be installed using setuptools' ``extra_require`` feature.
+        libraries may be installed using setuptools' ``extras_require`` feature.

        A string referring to one of the following bundled classes:

        * ``sync``
-        * ``eventlet`` - Requires eventlet >= 0.9.7 (or install it via
+        * ``eventlet`` - Requires eventlet >= 0.24.1 (or install it via
          ``pip install gunicorn[eventlet]``)
-        * ``gevent``   - Requires gevent >= 0.13 (or install it via
+        * ``gevent``   - Requires gevent >= 1.4 (or install it via
          ``pip install gunicorn[gevent]``)
        * ``tornado``  - Requires tornado >= 0.2 (or install it via
          ``pip install gunicorn[tornado]``)
        * ``gthread``  - Python 2 requires the futures package to be installed
          (or install it via ``pip install gunicorn[gthread]``)
-        * ``gaiohttp`` - Deprecated.

        Optionally, you can provide your own worker by giving Gunicorn a
        Python path to a subclass of ``gunicorn.workers.base.Worker``.
        This alternative syntax will load the gevent class:
        ``gunicorn.workers.ggevent.GeventWorker``.
-
-        .. deprecated:: 19.8
-           The ``gaiohttp`` worker is deprecated. Please use
-           ``aiohttp.worker.GunicornWebWorker`` instead. See
-           :ref:`asyncio-workers` for more information on how to use it.
        """


class WorkerThreads(Setting):
    name = "threads"
    section = "Worker Processes"
@ -668,7 +707,7 @@ class WorkerThreads(Setting):
        If it is not defined, the default is ``1``.

        This setting only affects the Gthread worker type.

        .. note::
           If you try to use the ``sync`` worker type and set the ``threads``
           setting to more than 1, the ``gthread`` worker type will be used
@ -741,10 +780,14 @@ class Timeout(Setting):
    desc = """\
        Workers silent for more than this many seconds are killed and restarted.

-        Generally set to thirty seconds. Only set this noticeably higher if
-        you're sure of the repercussions for sync workers. For the non sync
-        workers it just means that the worker process is still communicating and
-        is not tied to the length of time required to handle a single request.
+        Value is a positive number or 0. Setting it to 0 has the effect of
+        infinite timeouts by disabling timeouts for all workers entirely.
+
+        Generally, the default of thirty seconds should suffice. Only set this
+        noticeably higher if you're sure of the repercussions for sync workers.
+        For the non sync workers it just means that the worker process is still
+        communicating and is not tied to the length of time required to handle a
+        single request.
        """
@ -889,9 +932,9 @@ class ReloadEngine(Setting):
        Valid engines are:

-        * 'auto'
-        * 'poll'
-        * 'inotify' (requires inotify)
+        * ``'auto'``
+        * ``'poll'``
+        * ``'inotify'`` (requires inotify)

        .. versionadded:: 19.7
        """
@ -935,7 +978,20 @@ class ConfigCheck(Setting):
    action = "store_true"
    default = False
    desc = """\
-        Check the configuration.
+        Check the configuration and exit. The exit status is 0 if the
+        configuration is correct, and 1 if the configuration is incorrect.
        """
+
+
+class PrintConfig(Setting):
+    name = "print_config"
+    section = "Debugging"
+    cli = ["--print-config"]
+    validator = validate_bool
+    action = "store_true"
+    default = False
+    desc = """\
+        Print the configuration settings as fully resolved. Implies :ref:`check-config`.
+        """
@ -1001,7 +1057,7 @@ class Chdir(Setting):
    validator = validate_chdir
    default = util.getcwd()
    desc = """\
-        Chdir to specified directory before apps loading.
+        Change directory to specified directory before loading apps.
        """
@ -1019,6 +1075,7 @@ class Daemon(Setting):
        background.
        """
class Env(Setting):
    name = "raw_env"
    action = "append"
@ -1029,13 +1086,21 @@ class Env(Setting):
    default = []
    desc = """\
-        Set environment variable (key=value).
+        Set environment variables in the execution environment.
+
+        Should be a list of strings in the ``key=value`` format.

-        Pass variables to the execution environment. Ex.::
+        For example on the command line:
+
+        .. code-block:: console

            $ gunicorn -b 127.0.0.1:8000 --env FOO=1 test:app

-        and test for the foo variable environment in your application.
+        Or in the configuration file:
+
+        .. code-block:: python
+
+            raw_env = ["FOO=1"]
        """
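Splitting the ``key=value`` entries can be sketched with a hypothetical helper; gunicorn's own handling lives in its settings machinery, this only illustrates the format:

```python
def parse_raw_env(entries):
    """Turn ["FOO=1", "BAR=a=b"] into {"FOO": "1", "BAR": "a=b"};
    only the first '=' separates key from value."""
    env = {}
    for entry in entries:
        key, sep, value = entry.partition('=')
        if not sep:
            raise ValueError("Invalid raw_env entry: %r" % entry)
        env[key] = value
    return env
```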
@ -1052,6 +1117,7 @@ class Pidfile(Setting):
        If not set, no PID file will be written.
        """


class WorkerTmpDir(Setting):
    name = "worker_tmp_dir"
    section = "Server Mechanics"
@ -1105,6 +1171,7 @@ class Group(Setting):
        change the worker processes group.
        """


class Umask(Setting):
    name = "umask"
    section = "Server Mechanics"
@ -1171,10 +1238,16 @@ class SecureSchemeHeader(Setting):
    desc = """\
        A dictionary containing headers and values that the front-end proxy
-        uses to indicate HTTPS requests. These tell Gunicorn to set
+        uses to indicate HTTPS requests. If the source IP is permitted by
+        ``forwarded-allow-ips`` (below), *and* at least one request header matches
+        a key-value pair listed in this dictionary, then Gunicorn will set
        ``wsgi.url_scheme`` to ``https``, so your application can tell that the
        request is secure.

+        If the other headers listed in this dictionary are not present in the
+        request, they will be ignored, but if they are present and do not match
+        the provided values, then the request will fail to parse. See the note
+        below for more detailed examples of this behaviour.
+
        The dictionary should map upper-case header names to exact string
        values. The value comparisons are case-sensitive, unlike the header
        names, so make sure they're exactly what your front-end proxy sends
@ -1202,6 +1275,71 @@ class ForwardedAllowIPS(Setting):
        By default, the value of the ``FORWARDED_ALLOW_IPS`` environment
        variable. If it is not defined, the default is ``"127.0.0.1"``.
.. note::
The interplay between the request headers, the value of ``forwarded_allow_ips``, and the value of
``secure_scheme_headers`` is complex. Various scenarios are documented below to further elaborate.
In each case, we have a request from the remote address 134.213.44.18, and the default value of
``secure_scheme_headers``:
.. code::
secure_scheme_headers = {
'X-FORWARDED-PROTOCOL': 'ssl',
'X-FORWARDED-PROTO': 'https',
'X-FORWARDED-SSL': 'on'
}
.. list-table::
:header-rows: 1
:align: center
:widths: auto
* - ``forwarded-allow-ips``
- Secure Request Headers
- Result
- Explanation
* - .. code::
["127.0.0.1"]
- .. code::
X-Forwarded-Proto: https
- .. code::
wsgi.url_scheme = "http"
- IP address was not allowed
* - .. code::
"*"
- <none>
- .. code::
wsgi.url_scheme = "http"
- IP address allowed, but no secure headers provided
* - .. code::
"*"
- .. code::
X-Forwarded-Proto: https
- .. code::
wsgi.url_scheme = "https"
- IP address allowed, one request header matched
* - .. code::
["134.213.44.18"]
- .. code::
X-Forwarded-Ssl: on
X-Forwarded-Proto: http
- ``InvalidSchemeHeaders()`` raised
- IP address allowed, but the two secure headers disagreed on if HTTPS was used
""" """
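The table rows above can be condensed into a sketch of the decision; ``url_scheme`` is a hypothetical helper, and where gunicorn's real parser raises ``InvalidSchemeHeaders()`` this sketch raises ``ValueError``:

```python
DEFAULT_SECURE_HEADERS = {
    'X-FORWARDED-PROTOCOL': 'ssl',
    'X-FORWARDED-PROTO': 'https',
    'X-FORWARDED-SSL': 'on',
}

def url_scheme(remote_ip, headers, allow_ips,
               secure_headers=DEFAULT_SECURE_HEADERS):
    """Decide wsgi.url_scheme from proxy headers, per the table above.

    headers: dict of header names to values (names matched
    case-insensitively); allow_ips: "*" or an iterable of permitted IPs.
    """
    if allow_ips != "*" and remote_ip not in allow_ips:
        return "http"  # untrusted source: its headers are ignored
    votes = set()
    for name, value in headers.items():
        expected = secure_headers.get(name.upper())
        if expected is not None:
            votes.add(value == expected)
    if not votes:
        return "http"  # trusted source, but no secure headers present
    if len(votes) > 1:
        # two secure headers disagreed on whether HTTPS was used
        raise ValueError("secure scheme headers disagree")
    return "https" if votes == {True} else "http"
```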
@ -1218,6 +1356,7 @@ class AccessLog(Setting):
        ``'-'`` means log to stdout.
        """


class DisableRedirectAccessToSyslog(Setting):
    name = "disable_redirect_access_to_syslog"
    section = "Logging"
@ -1260,6 +1399,7 @@ class AccessLogFormat(Setting):
        f  referer
        a  user agent
        T  request time in seconds
+        M  request time in milliseconds
        D  request time in microseconds
        L  request time in decimal seconds
        p  process ID
@ -1305,11 +1445,11 @@ class Loglevel(Setting):
        Valid level names are:

-        * debug
-        * info
-        * warning
-        * error
-        * critical
+        * ``'debug'``
+        * ``'info'``
+        * ``'warning'``
+        * ``'error'``
+        * ``'critical'``
        """
@ -1337,11 +1477,11 @@ class LoggerClass(Setting):
    desc = """\
        The logger you want to use to log events in Gunicorn.

-        The default class (``gunicorn.glogging.Logger``) handle most of
-        normal usages in logging. It provides error and access logging.
+        The default class (``gunicorn.glogging.Logger``) handles most
+        normal usages in logging. It provides error and access logging.

-        You can provide your own logger by giving Gunicorn a
-        Python path to a subclass like ``gunicorn.glogging.Logger``.
+        You can provide your own logger by giving Gunicorn a Python path to a
+        class that quacks like ``gunicorn.glogging.Logger``.
        """
@ -1362,7 +1502,6 @@ class LogConfig(Setting):
class LogConfigDict(Setting):
    name = "logconfig_dict"
    section = "Logging"
-    cli = ["--log-config-dict"]
    validator = validate_dict
    default = {}
    desc = """\
@ -1498,6 +1637,23 @@ class StatsdHost(Setting):
        .. versionadded:: 19.1
        """
# Datadog Statsd (dogstatsd) tags. https://docs.datadoghq.com/developers/dogstatsd/
class DogstatsdTags(Setting):
name = "dogstatsd_tags"
section = "Logging"
cli = ["--dogstatsd-tags"]
meta = "DOGSTATSD_TAGS"
default = ""
validator = validate_string
desc = """\
A comma-delimited list of datadog statsd (dogstatsd) tags to append to
statsd metrics.
.. versionadded:: 20
"""
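How such tags reach the wire can be sketched; ``format_dogstatsd`` is a hypothetical helper showing the dogstatsd line format, not gunicorn's statsd client:

```python
def format_dogstatsd(metric, value, mtype, tags=""):
    """Render one statsd line, appending the comma-delimited
    dogstatsd tags as a '|#tag,...' suffix when present."""
    line = "%s:%s|%s" % (metric, value, mtype)
    if tags:
        line += "|#" + tags
    return line
```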
class StatsdPrefix(Setting):
    name = "statsd_prefix"
    section = "Logging"
@ -1673,6 +1829,7 @@ class PostWorkerInit(Setting):
        Worker.
        """


class WorkerInt(Setting):
    name = "worker_int"
    section = "Server Hooks"
@ -1816,6 +1973,7 @@ class NumWorkersChanged(Setting):
        be ``None``.
        """


class OnExit(Setting):
    name = "on_exit"
    section = "Server Hooks"
@ -1896,11 +2054,26 @@ class CertFile(Setting):
        SSL certificate file
        """


class SSLVersion(Setting):
    name = "ssl_version"
    section = "SSL"
    cli = ["--ssl-version"]
    validator = validate_ssl_version
-    default = ssl.PROTOCOL_SSLv23
+
+    if hasattr(ssl, "PROTOCOL_TLS"):
+        default = ssl.PROTOCOL_TLS
+    else:
+        default = ssl.PROTOCOL_SSLv23
+
    desc = """\
-        SSL version to use.
+        SSL version to use (see stdlib ssl module's)
+
+        .. versionchanged:: 20.0.1
+           The default value has been changed from ``ssl.PROTOCOL_SSLv23`` to
+           ``ssl.PROTOCOL_TLS`` when Python >= 3.6 .
@ -1914,10 +2087,11 @@ class SSLVersion(Setting):
                     Can yield SSL. (Python 3.6+)
        TLSv1        TLS 1.0
        TLSv1_1      TLS 1.1 (Python 3.4+)
-        TLSv2        TLS 1.2 (Python 3.4+)
+        TLSv1_2      TLS 1.2 (Python 3.4+)
        TLS_SERVER   Auto-negotiate the highest protocol version like TLS,
                     but only support server-side SSLSocket connections.
                     (Python 3.6+)
        ============= ============

        .. versionchanged:: 19.7
           The default value has been changed from ``ssl.PROTOCOL_TLSv1`` to
@ -1927,6 +2101,7 @@ class SSLVersion(Setting):
           constants.
        """


class CertReqs(Setting):
    name = "cert_reqs"
    section = "SSL"
@ -1937,6 +2112,7 @@ class CertReqs(Setting):
        Whether client certificate is required (see stdlib ssl module's)
        """


class CACerts(Setting):
    name = "ca_certs"
    section = "SSL"
@ -1948,6 +2124,7 @@ class CACerts(Setting):
        CA certificates file
        """


class SuppressRaggedEOFs(Setting):
    name = "suppress_ragged_eofs"
    section = "SSL"
@ -1959,6 +2136,7 @@ class SuppressRaggedEOFs(Setting):
        Suppress ragged EOFs (see stdlib ssl module's)
        """


class DoHandshakeOnConnect(Setting):
    name = "do_handshake_on_connect"
    section = "SSL"
@ -1976,9 +2154,22 @@ class Ciphers(Setting):
    section = "SSL"
    cli = ["--ciphers"]
    validator = validate_string
-    default = 'TLSv1'
+    default = None
    desc = """\
-        Ciphers to use (see stdlib ssl module's)
+        SSL Cipher suite to use, in the format of an OpenSSL cipher list.
+
+        By default we use the default cipher list from Python's ``ssl`` module,
+        which contains ciphers considered strong at the time of each Python
+        release.
+
+        As a recommended alternative, the Open Web App Security Project (OWASP)
+        offers `a vetted set of strong cipher strings rated A+ to C-
+        <https://www.owasp.org/index.php/TLS_Cipher_String_Cheat_Sheet>`_.
+        OWASP provides details on user-agent compatibility at each security level.
+
+        See the `OpenSSL Cipher List Format Documentation
+        <https://www.openssl.org/docs/manmaster/man1/ciphers.html#CIPHER-LIST-FORMAT>`_
+        for details on the format of an OpenSSL cipher list.
        """
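What the setting ultimately feeds into is the stdlib's ``SSLContext.set_ciphers``; a minimal sketch using an assumed cipher string, not gunicorn code:

```python
import ssl

# Build a server context and apply an OpenSSL cipher list string,
# which is what a --ciphers value eventually reaches.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.set_ciphers("ECDHE+AESGCM")  # example string, an assumption

# get_ciphers() shows what the string expanded to on this build.
names = {c["name"] for c in context.get_ciphers()}
```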
@ -2002,3 +2193,20 @@ class PasteGlobalConf(Setting):
        .. versionadded:: 19.7
        """
class StripHeaderSpaces(Setting):
name = "strip_header_spaces"
section = "Server Mechanics"
cli = ["--strip-header-spaces"]
validator = validate_bool
action = "store_true"
default = False
desc = """\
        Strip spaces present between the header name and the ``:``.
This is known to induce vulnerabilities and is not compliant with the HTTP/1.1 standard.
See https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn.
Use with care and only if necessary.
"""

View File

@ -28,7 +28,7 @@ class Spew(object):
if '__file__' in frame.f_globals: if '__file__' in frame.f_globals:
filename = frame.f_globals['__file__'] filename = frame.f_globals['__file__']
if (filename.endswith('.pyc') or if (filename.endswith('.pyc') or
filename.endswith('.pyo')): filename.endswith('.pyo')):
filename = filename[:-1] filename = filename[:-1]
name = frame.f_globals['__name__'] name = frame.f_globals['__name__']
line = linecache.getline(filename, lineno) line = linecache.getline(filename, lineno)

View File

@ -8,7 +8,7 @@ import binascii
import json import json
import time import time
import logging import logging
logging.Logger.manager.emittedNoHandlerWarning = 1 logging.Logger.manager.emittedNoHandlerWarning = 1 # noqa
from logging.config import dictConfig from logging.config import dictConfig
from logging.config import fileConfig from logging.config import fileConfig
import os import os
@ -22,76 +22,75 @@ from gunicorn import util
# syslog facility codes # syslog facility codes
SYSLOG_FACILITIES = { SYSLOG_FACILITIES = {
"auth": 4, "auth": 4,
"authpriv": 10, "authpriv": 10,
"cron": 9, "cron": 9,
"daemon": 3, "daemon": 3,
"ftp": 11, "ftp": 11,
"kern": 0, "kern": 0,
"lpr": 6, "lpr": 6,
"mail": 2, "mail": 2,
"news": 7, "news": 7,
"security": 4, # DEPRECATED "security": 4, # DEPRECATED
"syslog": 5, "syslog": 5,
"user": 1, "user": 1,
"uucp": 8, "uucp": 8,
"local0": 16, "local0": 16,
"local1": 17, "local1": 17,
"local2": 18, "local2": 18,
"local3": 19, "local3": 19,
"local4": 20, "local4": 20,
"local5": 21, "local5": 21,
"local6": 22, "local6": 22,
"local7": 23 "local7": 23
} }
CONFIG_DEFAULTS = dict( CONFIG_DEFAULTS = dict(
version=1, version=1,
disable_existing_loggers=False, disable_existing_loggers=False,
loggers={ root={"level": "INFO", "handlers": ["console"]},
"root": {"level": "INFO", "handlers": ["console"]}, loggers={
"gunicorn.error": { "gunicorn.error": {
"level": "INFO", "level": "INFO",
"handlers": ["error_console"], "handlers": ["error_console"],
"propagate": True, "propagate": True,
"qualname": "gunicorn.error" "qualname": "gunicorn.error"
}, },
"gunicorn.access": { "gunicorn.access": {
"level": "INFO", "level": "INFO",
"handlers": ["console"], "handlers": ["console"],
"propagate": True, "propagate": True,
"qualname": "gunicorn.access" "qualname": "gunicorn.access"
}
},
handlers={
"console": {
"class": "logging.StreamHandler",
"formatter": "generic",
"stream": "ext://sys.stdout"
},
"error_console": {
"class": "logging.StreamHandler",
"formatter": "generic",
"stream": "ext://sys.stderr"
},
},
formatters={
"generic": {
"format": "%(asctime)s [%(process)d] [%(levelname)s] %(message)s",
"datefmt": "[%Y-%m-%d %H:%M:%S %z]",
"class": "logging.Formatter"
}
} }
},
handlers={
"console": {
"class": "logging.StreamHandler",
"formatter": "generic",
"stream": "ext://sys.stdout"
},
"error_console": {
"class": "logging.StreamHandler",
"formatter": "generic",
"stream": "ext://sys.stderr"
},
},
formatters={
"generic": {
"format": "%(asctime)s [%(process)d] [%(levelname)s] %(message)s",
"datefmt": "[%Y-%m-%d %H:%M:%S %z]",
"class": "logging.Formatter"
}
}
) )
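The hunk above moves the root logger out of the ``loggers`` mapping and into the top-level ``root`` key, which is where the ``dictConfig`` schema actually reads it. A minimal sketch of the corrected shape:

```python
import logging
from logging.config import dictConfig

# The dictConfig schema takes the root logger as a top-level "root" key;
# a "root" entry nested under "loggers" configures a logger literally
# named "root" instead, leaving the real root logger untouched.
dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "root": {"level": "INFO", "handlers": []},
})
```

After this call the actual root logger carries the configured level.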
def loggers(): def loggers():
""" get list of all loggers """ """ get list of all loggers """
root = logging.root root = logging.root
existing = root.manager.loggerDict.keys() existing = list(root.manager.loggerDict.keys())
return [logging.getLogger(name) for name in existing] return [logging.getLogger(name) for name in existing]
@ -109,11 +108,11 @@ class SafeAtoms(dict):
if k.startswith("{"): if k.startswith("{"):
kl = k.lower() kl = k.lower()
if kl in self: if kl in self:
return super(SafeAtoms, self).__getitem__(kl) return super().__getitem__(kl)
else: else:
return "-" return "-"
if k in self: if k in self:
return super(SafeAtoms, self).__getitem__(k) return super().__getitem__(k)
else: else:
return '-' return '-'
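The ``__getitem__`` rules shown above can be exercised on their own. A sketch mirroring the diff (not the full gunicorn class): unknown atoms resolve to ``'-'``, and ``{header}``-style keys match case-insensitively.

```python
class SafeAtoms(dict):
    # Simplified version of gunicorn's SafeAtoms lookup logic.
    def __getitem__(self, k):
        if k.startswith("{"):
            kl = k.lower()
            return super().__getitem__(kl) if kl in self else "-"
        return super().__getitem__(k) if k in self else "-"

atoms = SafeAtoms({"{x-forwarded-for}i": "10.0.0.1", "s": "200"})
```

This is what lets an access-log format reference arbitrary headers without raising ``KeyError`` when one is absent.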
@ -214,8 +213,10 @@ class Logger(object):
# set gunicorn.access handler # set gunicorn.access handler
if cfg.accesslog is not None: if cfg.accesslog is not None:
self._set_handler(self.access_log, cfg.accesslog, self._set_handler(
fmt=logging.Formatter(self.access_fmt), stream=sys.stdout) self.access_log, cfg.accesslog,
fmt=logging.Formatter(self.access_fmt), stream=sys.stdout
)
# set syslog handler # set syslog handler
if cfg.syslog: if cfg.syslog:
@ -289,7 +290,7 @@ class Logger(object):
self.error_log.log(lvl, msg, *args, **kwargs) self.error_log.log(lvl, msg, *args, **kwargs)
def atoms(self, resp, req, environ, request_time): def atoms(self, resp, req, environ, request_time):
""" Gets atoms for log formating. """ Gets atoms for log formatting.
""" """
status = resp.status status = resp.status
if isinstance(status, str): if isinstance(status, str):
@ -300,7 +301,8 @@ class Logger(object):
'u': self._get_user(environ) or '-', 'u': self._get_user(environ) or '-',
't': self.now(), 't': self.now(),
'r': "%s %s %s" % (environ['REQUEST_METHOD'], 'r': "%s %s %s" % (environ['REQUEST_METHOD'],
environ['RAW_URI'], environ["SERVER_PROTOCOL"]), environ['RAW_URI'],
environ["SERVER_PROTOCOL"]),
's': status, 's': status,
'm': environ.get('REQUEST_METHOD'), 'm': environ.get('REQUEST_METHOD'),
'U': environ.get('PATH_INFO'), 'U': environ.get('PATH_INFO'),
@ -311,7 +313,8 @@ class Logger(object):
'f': environ.get('HTTP_REFERER', '-'), 'f': environ.get('HTTP_REFERER', '-'),
'a': environ.get('HTTP_USER_AGENT', '-'), 'a': environ.get('HTTP_USER_AGENT', '-'),
'T': request_time.seconds, 'T': request_time.seconds,
'D': (request_time.seconds*1000000) + request_time.microseconds, 'D': (request_time.seconds * 1000000) + request_time.microseconds,
'M': (request_time.seconds * 1000) + int(request_time.microseconds / 1000),
'L': "%d.%06d" % (request_time.seconds, request_time.microseconds), 'L': "%d.%06d" % (request_time.seconds, request_time.microseconds),
'p': "<%s>" % os.getpid() 'p': "<%s>" % os.getpid()
} }
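The timing atoms added and reformatted above all derive from the same ``request_time`` timedelta; a worked sketch of the arithmetic:

```python
import datetime

# 'T' = whole seconds, 'D' = total microseconds, 'M' = total milliseconds
# (new in this change), 'L' = decimal seconds as a string.
request_time = datetime.timedelta(seconds=1, microseconds=250000)

T = request_time.seconds
D = (request_time.seconds * 1000000) + request_time.microseconds
M = (request_time.seconds * 1000) + int(request_time.microseconds / 1000)
L = "%d.%06d" % (request_time.seconds, request_time.microseconds)
```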
@ -353,12 +356,13 @@ class Logger(object):
# wrap atoms: # wrap atoms:
# - make sure atoms will be tested case insensitively # - make sure atoms will be tested case insensitively
# - if atom doesn't exist replace it by '-' # - if atom doesn't exist replace it by '-'
safe_atoms = self.atoms_wrapper_class(self.atoms(resp, req, environ, safe_atoms = self.atoms_wrapper_class(
request_time)) self.atoms(resp, req, environ, request_time)
)
try: try:
self.access_log.info(self.cfg.access_log_format, safe_atoms) self.access_log.info(self.cfg.access_log_format, safe_atoms)
except: except Exception:
self.error(traceback.format_exc()) self.error(traceback.format_exc())
def now(self): def now(self):
@ -377,7 +381,6 @@ class Logger(object):
os.dup2(self.logfile.fileno(), sys.stdout.fileno()) os.dup2(self.logfile.fileno(), sys.stdout.fileno())
os.dup2(self.logfile.fileno(), sys.stderr.fileno()) os.dup2(self.logfile.fileno(), sys.stderr.fileno())
for log in loggers(): for log in loggers():
for handler in log.handlers: for handler in log.handlers:
if isinstance(handler, logging.FileHandler): if isinstance(handler, logging.FileHandler):
@ -431,10 +434,7 @@ class Logger(object):
def _set_syslog_handler(self, log, cfg, fmt, name): def _set_syslog_handler(self, log, cfg, fmt, name):
# setup format # setup format
if not cfg.syslog_prefix: prefix = cfg.syslog_prefix or cfg.proc_name.replace(":", ".")
prefix = cfg.proc_name.replace(":", ".")
else:
prefix = cfg.syslog_prefix
prefix = "gunicorn.%s.%s" % (prefix, name) prefix = "gunicorn.%s.%s" % (prefix, name)
@ -452,7 +452,7 @@ class Logger(object):
# finally setup the syslog handler # finally setup the syslog handler
h = logging.handlers.SysLogHandler(address=addr, h = logging.handlers.SysLogHandler(address=addr,
facility=facility, socktype=socktype) facility=facility, socktype=socktype)
h.setFormatter(fmt) h.setFormatter(fmt)
h._gunicorn = True h._gunicorn = True
@ -461,7 +461,7 @@ class Logger(object):
def _get_user(self, environ): def _get_user(self, environ):
user = None user = None
http_auth = environ.get("HTTP_AUTHORIZATION") http_auth = environ.get("HTTP_AUTHORIZATION")
if http_auth and http_auth.startswith('Basic'): if http_auth and http_auth.lower().startswith('basic'):
auth = http_auth.split(" ", 1) auth = http_auth.split(" ", 1)
if len(auth) == 2: if len(auth) == 2:
try: try:

View File

@ -7,7 +7,7 @@ import io
import sys import sys
from gunicorn.http.errors import (NoMoreData, ChunkMissingTerminator, from gunicorn.http.errors import (NoMoreData, ChunkMissingTerminator,
InvalidChunkSize) InvalidChunkSize)
class ChunkedReader(object): class ChunkedReader(object):
@ -18,7 +18,7 @@ class ChunkedReader(object):
def read(self, size): def read(self, size):
if not isinstance(size, int): if not isinstance(size, int):
raise TypeError("size must be an integral type") raise TypeError("size must be an integer type")
if size < 0: if size < 0:
raise ValueError("Size must be positive.") raise ValueError("Size must be positive.")
if size == 0: if size == 0:
@ -187,6 +187,7 @@ class Body(object):
if not ret: if not ret:
raise StopIteration() raise StopIteration()
return ret return ret
next = __next__ next = __next__
def getsize(self, size): def getsize(self, size):

View File

@ -6,13 +6,13 @@
import io import io
import re import re
import socket import socket
from errno import ENOTCONN
from gunicorn.http.unreader import SocketUnreader
from gunicorn.http.body import ChunkedReader, LengthReader, EOFReader, Body from gunicorn.http.body import ChunkedReader, LengthReader, EOFReader, Body
from gunicorn.http.errors import (InvalidHeader, InvalidHeaderName, NoMoreData, from gunicorn.http.errors import (
InvalidHeader, InvalidHeaderName, NoMoreData,
InvalidRequestLine, InvalidRequestMethod, InvalidHTTPVersion, InvalidRequestLine, InvalidRequestMethod, InvalidHTTPVersion,
LimitRequestLine, LimitRequestHeaders) LimitRequestLine, LimitRequestHeaders,
)
from gunicorn.http.errors import InvalidProxyLine, ForbiddenProxyRequest from gunicorn.http.errors import InvalidProxyLine, ForbiddenProxyRequest
from gunicorn.http.errors import InvalidSchemeHeaders from gunicorn.http.errors import InvalidSchemeHeaders
from gunicorn.util import bytes_to_str, split_request_uri from gunicorn.util import bytes_to_str, split_request_uri
@ -27,9 +27,11 @@ VERSION_RE = re.compile(r"HTTP/(\d+)\.(\d+)")
class Message(object): class Message(object):
def __init__(self, cfg, unreader): def __init__(self, cfg, unreader, peer_addr):
self.cfg = cfg self.cfg = cfg
self.unreader = unreader self.unreader = unreader
self.peer_addr = peer_addr
self.remote_addr = peer_addr
self.version = None self.version = None
self.headers = [] self.headers = []
self.trailers = [] self.trailers = []
@ -39,7 +41,7 @@ class Message(object):
# set headers limits # set headers limits
self.limit_request_fields = cfg.limit_request_fields self.limit_request_fields = cfg.limit_request_fields
if (self.limit_request_fields <= 0 if (self.limit_request_fields <= 0
or self.limit_request_fields > MAX_HEADERS): or self.limit_request_fields > MAX_HEADERS):
self.limit_request_fields = MAX_HEADERS self.limit_request_fields = MAX_HEADERS
self.limit_request_field_size = cfg.limit_request_field_size self.limit_request_field_size = cfg.limit_request_field_size
if self.limit_request_field_size < 0: if self.limit_request_field_size < 0:
@ -67,16 +69,10 @@ class Message(object):
# handle scheme headers # handle scheme headers
scheme_header = False scheme_header = False
secure_scheme_headers = {} secure_scheme_headers = {}
if '*' in cfg.forwarded_allow_ips: if ('*' in cfg.forwarded_allow_ips or
not isinstance(self.peer_addr, tuple)
or self.peer_addr[0] in cfg.forwarded_allow_ips):
secure_scheme_headers = cfg.secure_scheme_headers secure_scheme_headers = cfg.secure_scheme_headers
elif isinstance(self.unreader, SocketUnreader):
remote_addr = self.unreader.sock.getpeername()
if self.unreader.sock.family in (socket.AF_INET, socket.AF_INET6):
remote_host = remote_addr[0]
if remote_host in cfg.forwarded_allow_ips:
secure_scheme_headers = cfg.secure_scheme_headers
elif self.unreader.sock.family == socket.AF_UNIX:
secure_scheme_headers = cfg.secure_scheme_headers
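The rewritten trust check above replaces the ``SocketUnreader``/``getpeername()`` probing with a single condition on the ``peer_addr`` now threaded through the parser. A standalone sketch of that condition:

```python
# peer_addr is trusted when forwarded_allow_ips contains '*', when the peer
# is a unix socket (peer_addr is a path string, not a (host, port) tuple),
# or when its IP appears in the allow list.
def trusted(peer_addr, forwarded_allow_ips):
    return ("*" in forwarded_allow_ips
            or not isinstance(peer_addr, tuple)
            or peer_addr[0] in forwarded_allow_ips)
```

Only trusted peers get their ``secure_scheme_headers`` honored.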
# Parse headers into key/value pairs paying attention # Parse headers into key/value pairs paying attention
# to continuation lines. # to continuation lines.
@ -90,7 +86,10 @@ class Message(object):
if curr.find(":") < 0: if curr.find(":") < 0:
raise InvalidHeader(curr.strip()) raise InvalidHeader(curr.strip())
name, value = curr.split(":", 1) name, value = curr.split(":", 1)
name = name.rstrip(" \t").upper() if self.cfg.strip_header_spaces:
name = name.rstrip(" \t").upper()
else:
name = name.upper()
if HEADER_RE.search(name): if HEADER_RE.search(name):
raise InvalidHeaderName(name) raise InvalidHeaderName(name)
@ -102,7 +101,7 @@ class Message(object):
header_length += len(curr) header_length += len(curr)
if header_length > self.limit_request_field_size > 0: if header_length > self.limit_request_field_size > 0:
raise LimitRequestHeaders("limit request headers " raise LimitRequestHeaders("limit request headers "
+ "fields size") "fields size")
value.append(curr) value.append(curr)
value = ''.join(value).rstrip() value = ''.join(value).rstrip()
@ -126,13 +125,15 @@ class Message(object):
def set_body_reader(self): def set_body_reader(self):
chunked = False chunked = False
content_length = None content_length = None
for (name, value) in self.headers: for (name, value) in self.headers:
if name == "CONTENT-LENGTH": if name == "CONTENT-LENGTH":
if content_length is not None:
raise InvalidHeader("CONTENT-LENGTH", req=self)
content_length = value content_length = value
elif name == "TRANSFER-ENCODING": elif name == "TRANSFER-ENCODING":
chunked = value.lower() == "chunked" if value.lower() == "chunked":
elif name == "SEC-WEBSOCKET-KEY1": chunked = True
content_length = 8
if chunked: if chunked:
self.body = Body(ChunkedReader(self, self.unreader)) self.body = Body(ChunkedReader(self, self.unreader))
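The duplicate Content-Length rejection added above closes a request-smuggling vector (two conflicting ``Content-Length`` values let front-end and back-end disagree on where a request ends). A sketch of the detection loop, raising a flag where the real code raises ``InvalidHeader``:

```python
# Hypothetical parsed header list with a duplicated Content-Length.
headers = [("CONTENT-LENGTH", "5"), ("CONTENT-LENGTH", "10")]

content_length = None
duplicate = False
for name, value in headers:
    if name == "CONTENT-LENGTH":
        if content_length is not None:
            duplicate = True  # gunicorn raises InvalidHeader here
        content_length = value
```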
@ -162,7 +163,7 @@ class Message(object):
class Request(Message): class Request(Message):
def __init__(self, cfg, unreader, req_number=1): def __init__(self, cfg, unreader, peer_addr, req_number=1):
self.method = None self.method = None
self.uri = None self.uri = None
self.path = None self.path = None
@ -172,12 +173,12 @@ class Request(Message):
# get max request line size # get max request line size
self.limit_request_line = cfg.limit_request_line self.limit_request_line = cfg.limit_request_line
if (self.limit_request_line < 0 if (self.limit_request_line < 0
or self.limit_request_line >= MAX_REQUEST_LINE): or self.limit_request_line >= MAX_REQUEST_LINE):
self.limit_request_line = MAX_REQUEST_LINE self.limit_request_line = MAX_REQUEST_LINE
self.req_number = req_number self.req_number = req_number
self.proxy_protocol_info = None self.proxy_protocol_info = None
super(Request, self).__init__(cfg, unreader) super().__init__(cfg, unreader, peer_addr)
def get_data(self, unreader, buf, stop=False): def get_data(self, unreader, buf, stop=False):
data = unreader.read() data = unreader.read()
@ -242,7 +243,7 @@ class Request(Message):
if idx > limit > 0: if idx > limit > 0:
raise LimitRequestLine(idx, limit) raise LimitRequestLine(idx, limit)
break break
elif len(data) - 2 > limit > 0: if len(data) - 2 > limit > 0:
raise LimitRequestLine(len(data), limit) raise LimitRequestLine(len(data), limit)
self.get_data(unreader, buf) self.get_data(unreader, buf)
data = buf.getvalue() data = buf.getvalue()
@ -273,16 +274,10 @@ class Request(Message):
def proxy_protocol_access_check(self): def proxy_protocol_access_check(self):
# check in allow list # check in allow list
if isinstance(self.unreader, SocketUnreader): if ("*" not in self.cfg.proxy_allow_ips and
try: isinstance(self.peer_addr, tuple) and
remote_host = self.unreader.sock.getpeername()[0] self.peer_addr[0] not in self.cfg.proxy_allow_ips):
except socket.error as e: raise ForbiddenProxyRequest(self.peer_addr[0])
if e.args[0] == ENOTCONN:
raise ForbiddenProxyRequest("UNKNOW")
raise
if ("*" not in self.cfg.proxy_allow_ips and
remote_host not in self.cfg.proxy_allow_ips):
raise ForbiddenProxyRequest(remote_host)
def parse_proxy_protocol(self, line): def parse_proxy_protocol(self, line):
bits = line.split() bits = line.split()
@ -357,6 +352,6 @@ class Request(Message):
self.version = (int(match.group(1)), int(match.group(2))) self.version = (int(match.group(1)), int(match.group(2)))
def set_body_reader(self): def set_body_reader(self):
super(Request, self).set_body_reader() super().set_body_reader()
if isinstance(self.body.reader, EOFReader): if isinstance(self.body.reader, EOFReader):
self.body = Body(LengthReader(self.unreader, 0)) self.body = Body(LengthReader(self.unreader, 0))

View File

@ -11,13 +11,14 @@ class Parser(object):
mesg_class = None mesg_class = None
def __init__(self, cfg, source): def __init__(self, cfg, source, source_addr):
self.cfg = cfg self.cfg = cfg
if hasattr(source, "recv"): if hasattr(source, "recv"):
self.unreader = SocketUnreader(source) self.unreader = SocketUnreader(source)
else: else:
self.unreader = IterUnreader(source) self.unreader = IterUnreader(source)
self.mesg = None self.mesg = None
self.source_addr = source_addr
# request counter (for keepalive connections) # request counter (for keepalive connections)
self.req_count = 0 self.req_count = 0
@ -38,7 +39,7 @@ class Parser(object):
# Parse the next request # Parse the next request
self.req_count += 1 self.req_count += 1
self.mesg = self.mesg_class(self.cfg, self.unreader, self.req_count) self.mesg = self.mesg_class(self.cfg, self.unreader, self.source_addr, self.req_count)
if not self.mesg: if not self.mesg:
raise StopIteration() raise StopIteration()
return self.mesg return self.mesg

View File

@ -56,7 +56,7 @@ class Unreader(object):
class SocketUnreader(Unreader): class SocketUnreader(Unreader):
def __init__(self, sock, max_chunk=8192): def __init__(self, sock, max_chunk=8192):
super(SocketUnreader, self).__init__() super().__init__()
self.sock = sock self.sock = sock
self.mxchunk = max_chunk self.mxchunk = max_chunk
@ -66,7 +66,7 @@ class SocketUnreader(Unreader):
class IterUnreader(Unreader): class IterUnreader(Unreader):
def __init__(self, iterable): def __init__(self, iterable):
super(IterUnreader, self).__init__() super().__init__()
self.iter = iter(iterable) self.iter = iter(iterable)
def chunk(self): def chunk(self):

View File

@ -11,7 +11,7 @@ import sys
from gunicorn.http.message import HEADER_RE from gunicorn.http.message import HEADER_RE
from gunicorn.http.errors import InvalidHeader, InvalidHeaderName from gunicorn.http.errors import InvalidHeader, InvalidHeaderName
from gunicorn import SERVER_SOFTWARE from gunicorn import SERVER_SOFTWARE, SERVER
import gunicorn.util as util import gunicorn.util as util
# Send files in at most 1GB blocks as some operating systems can have problems # Send files in at most 1GB blocks as some operating systems can have problems
@ -73,6 +73,7 @@ def base_environ(cfg):
"wsgi.multiprocess": (cfg.workers > 1), "wsgi.multiprocess": (cfg.workers > 1),
"wsgi.run_once": False, "wsgi.run_once": False,
"wsgi.file_wrapper": FileWrapper, "wsgi.file_wrapper": FileWrapper,
"wsgi.input_terminated": True,
"SERVER_SOFTWARE": SERVER_SOFTWARE, "SERVER_SOFTWARE": SERVER_SOFTWARE,
} }
@ -194,7 +195,7 @@ class Response(object):
def __init__(self, req, sock, cfg): def __init__(self, req, sock, cfg):
self.req = req self.req = req
self.sock = sock self.sock = sock
self.version = SERVER_SOFTWARE self.version = SERVER
self.status = None self.status = None
self.chunked = False self.chunked = False
self.must_close = False self.must_close = False
@ -251,10 +252,13 @@ class Response(object):
if HEADER_RE.search(name): if HEADER_RE.search(name):
raise InvalidHeaderName('%r' % name) raise InvalidHeaderName('%r' % name)
if not isinstance(value, str):
raise TypeError('%r is not a string' % value)
if HEADER_VALUE_RE.search(value): if HEADER_VALUE_RE.search(value):
raise InvalidHeader('%r' % value) raise InvalidHeader('%r' % value)
value = str(value).strip() value = value.strip()
lname = name.lower().strip() lname = name.lower().strip()
if lname == "content-length": if lname == "content-length":
self.response_length = int(value) self.response_length = int(value)
@ -299,7 +303,7 @@ class Response(object):
headers = [ headers = [
"HTTP/%s.%s %s\r\n" % (self.req.version[0], "HTTP/%s.%s %s\r\n" % (self.req.version[0],
self.req.version[1], self.status), self.req.version[1], self.status),
"Server: %s\r\n" % self.version, "Server: %s\r\n" % self.version,
"Date: %s\r\n" % util.http_date(), "Date: %s\r\n" % util.http_date(),
"Connection: %s\r\n" % connection "Connection: %s\r\n" % connection
@ -315,7 +319,7 @@ class Response(object):
tosend.extend(["%s: %s\r\n" % (k, v) for k, v in self.headers]) tosend.extend(["%s: %s\r\n" % (k, v) for k, v in self.headers])
header_str = "%s\r\n" % "".join(tosend) header_str = "%s\r\n" % "".join(tosend)
util.write(self.sock, util.to_bytestring(header_str, "ascii")) util.write(self.sock, util.to_bytestring(header_str, "latin-1"))
self.headers_sent = True self.headers_sent = True
def write(self, arg): def write(self, arg):
@ -356,12 +360,6 @@ class Response(object):
offset = os.lseek(fileno, 0, os.SEEK_CUR) offset = os.lseek(fileno, 0, os.SEEK_CUR)
if self.response_length is None: if self.response_length is None:
filesize = os.fstat(fileno).st_size filesize = os.fstat(fileno).st_size
# The file may be special and sendfile will fail.
# It may also be zero-length, but that is okay.
if filesize == 0:
return False
nbytes = filesize - offset nbytes = filesize - offset
else: else:
nbytes = self.response_length nbytes = self.response_length
@ -373,13 +371,8 @@ class Response(object):
if self.is_chunked(): if self.is_chunked():
chunk_size = "%X\r\n" % nbytes chunk_size = "%X\r\n" % nbytes
self.sock.sendall(chunk_size.encode('utf-8')) self.sock.sendall(chunk_size.encode('utf-8'))
if nbytes > 0:
sockno = self.sock.fileno() self.sock.sendfile(respiter.filelike, offset=offset, count=nbytes)
sent = 0
while sent != nbytes:
count = min(nbytes - sent, BLKSIZE)
sent += os.sendfile(sockno, fileno, offset + sent, count)
if self.is_chunked(): if self.is_chunked():
self.sock.sendall(b"\r\n") self.sock.sendall(b"\r\n")

View File

@ -19,6 +19,7 @@ GAUGE_TYPE = "gauge"
COUNTER_TYPE = "counter" COUNTER_TYPE = "counter"
HISTOGRAM_TYPE = "histogram" HISTOGRAM_TYPE = "histogram"
class Statsd(Logger): class Statsd(Logger):
"""statsD-based instrumentation, that passes as a logger """statsD-based instrumentation, that passes as a logger
""" """
@ -34,6 +35,8 @@ class Statsd(Logger):
except Exception: except Exception:
self.sock = None self.sock = None
self.dogstatsd_tags = cfg.dogstatsd_tags
# Log errors and warnings # Log errors and warnings
def critical(self, msg, *args, **kwargs): def critical(self, msg, *args, **kwargs):
Logger.critical(self, msg, *args, **kwargs) Logger.critical(self, msg, *args, **kwargs)
@ -51,7 +54,7 @@ class Statsd(Logger):
Logger.exception(self, msg, *args, **kwargs) Logger.exception(self, msg, *args, **kwargs)
self.increment("gunicorn.log.exception", 1) self.increment("gunicorn.log.exception", 1)
# Special treatement for info, the most common log level # Special treatment for info, the most common log level
def info(self, msg, *args, **kwargs): def info(self, msg, *args, **kwargs):
self.log(logging.INFO, msg, *args, **kwargs) self.log(logging.INFO, msg, *args, **kwargs)
@ -116,6 +119,11 @@ class Statsd(Logger):
try: try:
if isinstance(msg, str): if isinstance(msg, str):
msg = msg.encode("ascii") msg = msg.encode("ascii")
# http://docs.datadoghq.com/guides/dogstatsd/#datagram-format
if self.dogstatsd_tags:
msg = msg + b"|#" + self.dogstatsd_tags.encode('ascii')
if self.sock: if self.sock:
self.sock.send(msg) self.sock.send(msg)
except Exception: except Exception:
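The DogStatsD tagging added above simply appends ``|#tags`` to the plain statsd datagram, per the format linked in the comment. A sketch with an illustrative tag string:

```python
# A plain statsd counter datagram, extended with DogStatsD tags the way
# the diff does: "|#" plus the configured dogstatsd_tags value.
msg = b"gunicorn.requests:1|c"
dogstatsd_tags = "env:prod,region:eu"  # illustrative value

msg = msg + b"|#" + dogstatsd_tags.encode("ascii")
```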

View File

@ -57,7 +57,7 @@ class Pidfile(object):
if pid1 == self.pid: if pid1 == self.pid:
os.unlink(self.fname) os.unlink(self.fname)
except: except Exception:
pass pass
def validate(self): def validate(self):

View File

@ -2,6 +2,7 @@
# #
# This file is part of gunicorn released under the MIT license. # This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information. # See the NOTICE for more information.
# pylint: disable=no-else-continue
import os import os
import os.path import os.path
@ -15,16 +16,14 @@ COMPILED_EXT_RE = re.compile(r'py[co]$')
class Reloader(threading.Thread): class Reloader(threading.Thread):
def __init__(self, extra_files=None, interval=1, callback=None): def __init__(self, extra_files=None, interval=1, callback=None):
super(Reloader, self).__init__() super().__init__()
self.setDaemon(True) self.daemon = True
self._extra_files = set(extra_files or ()) self._extra_files = set(extra_files or ())
self._extra_files_lock = threading.RLock()
self._interval = interval self._interval = interval
self._callback = callback self._callback = callback
def add_extra_file(self, filename): def add_extra_file(self, filename):
with self._extra_files_lock: self._extra_files.add(filename)
self._extra_files.add(filename)
def get_files(self): def get_files(self):
fnames = [ fnames = [
@ -33,8 +32,7 @@ class Reloader(threading.Thread):
if getattr(module, '__file__', None) if getattr(module, '__file__', None)
] ]
with self._extra_files_lock: fnames.extend(self._extra_files)
fnames.extend(self._extra_files)
return fnames return fnames
@ -55,6 +53,7 @@ class Reloader(threading.Thread):
self._callback(filename) self._callback(filename)
time.sleep(self._interval) time.sleep(self._interval)
has_inotify = False has_inotify = False
if sys.platform.startswith('linux'): if sys.platform.startswith('linux'):
try: try:
@ -74,8 +73,8 @@ if has_inotify:
| inotify.constants.IN_MOVED_TO) | inotify.constants.IN_MOVED_TO)
def __init__(self, extra_files=None, callback=None): def __init__(self, extra_files=None, callback=None):
super(InotifyReloader, self).__init__() super().__init__()
self.setDaemon(True) self.daemon = True
self._callback = callback self._callback = callback
self._dirs = set() self._dirs = set()
self._watcher = Inotify() self._watcher = Inotify()
@ -94,7 +93,7 @@ if has_inotify:
def get_dirs(self): def get_dirs(self):
fnames = [ fnames = [
os.path.dirname(COMPILED_EXT_RE.sub('py', module.__file__)) os.path.dirname(os.path.abspath(COMPILED_EXT_RE.sub('py', module.__file__)))
for module in tuple(sys.modules.values()) for module in tuple(sys.modules.values())
if getattr(module, '__file__', None) if getattr(module, '__file__', None)
] ]
@ -105,7 +104,8 @@ if has_inotify:
self._dirs = self.get_dirs() self._dirs = self.get_dirs()
for dirname in self._dirs: for dirname in self._dirs:
self._watcher.add_watch(dirname, mask=self.event_mask) if os.path.isdir(dirname):
self._watcher.add_watch(dirname, mask=self.event_mask)
for event in self._watcher.event_gen(): for event in self._watcher.event_gen():
if event is None: if event is None:

View File

@ -39,7 +39,7 @@ class BaseSocket(object):
def set_options(self, sock, bound=False): def set_options(self, sock, bound=False):
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
if (self.conf.reuse_port if (self.conf.reuse_port
and hasattr(socket, 'SO_REUSEPORT')): # pragma: no cover and hasattr(socket, 'SO_REUSEPORT')): # pragma: no cover
try: try:
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
except socket.error as err: except socket.error as err:
@ -86,7 +86,7 @@ class TCPSocket(BaseSocket):
def set_options(self, sock, bound=False): def set_options(self, sock, bound=False):
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
return super(TCPSocket, self).set_options(sock, bound=bound) return super().set_options(sock, bound=bound)
class TCP6Socket(TCPSocket): class TCP6Socket(TCPSocket):
@ -114,7 +114,7 @@ class UnixSocket(BaseSocket):
os.remove(addr) os.remove(addr)
else: else:
raise ValueError("%r is not a socket" % addr) raise ValueError("%r is not a socket" % addr)
super(UnixSocket, self).__init__(addr, conf, log, fd=fd) super().__init__(addr, conf, log, fd=fd)
def __str__(self): def __str__(self):
return "unix:%s" % self.cfg_addr return "unix:%s" % self.cfg_addr
@ -150,7 +150,11 @@ def create_sockets(conf, log, fds=None):
listeners = [] listeners = []
# get it only once # get it only once
laddr = conf.address addr = conf.address
fdaddr = [bind for bind in addr if isinstance(bind, int)]
if fds:
fdaddr += list(fds)
laddr = [bind for bind in addr if not isinstance(bind, int)]
# check ssl config early to raise the error on startup # check ssl config early to raise the error on startup
# only the certfile is needed since it can contain the keyfile # only the certfile is needed since it can contain the keyfile
@ -161,8 +165,8 @@ def create_sockets(conf, log, fds=None):
raise ValueError('keyfile "%s" does not exist' % conf.keyfile) raise ValueError('keyfile "%s" does not exist' % conf.keyfile)
# sockets are already bound # sockets are already bound
if fds is not None: if fdaddr:
for fd in fds: for fd in fdaddr:
sock = socket.fromfd(fd, socket.AF_UNIX, socket.SOCK_STREAM) sock = socket.fromfd(fd, socket.AF_UNIX, socket.SOCK_STREAM)
sock_name = sock.getsockname() sock_name = sock.getsockname()
sock_type = _sock_type(sock_name) sock_type = _sock_type(sock_name)
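The bind partitioning introduced in this hunk treats integer binds as already-open file descriptors and everything else as addresses still to bind; a sketch with hypothetical values:

```python
# Mixed bind list: a TCP address, an inherited fd (5), a unix socket path,
# plus fds passed in separately (e.g. from systemd socket activation).
addr = [("127.0.0.1", 8000), 5, "unix:/tmp/gunicorn.sock"]
fds = [3, 4]

fdaddr = [bind for bind in addr if isinstance(bind, int)]
fdaddr += list(fds)
laddr = [bind for bind in addr if not isinstance(bind, int)]
```

Only ``laddr`` entries go through the normal bind path; ``fdaddr`` entries are wrapped with ``socket.fromfd``.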

View File

@ -4,6 +4,7 @@
# See the NOTICE for more information. # See the NOTICE for more information.
import os import os
import socket
SD_LISTEN_FDS_START = 3 SD_LISTEN_FDS_START = 3
@ -43,3 +44,33 @@ def listen_fds(unset_environment=True):
os.environ.pop('LISTEN_FDS', None) os.environ.pop('LISTEN_FDS', None)
return fds return fds
def sd_notify(state, logger, unset_environment=False):
"""Send a notification to systemd. state is a string; see
the man page of sd_notify (http://www.freedesktop.org/software/systemd/man/sd_notify.html)
for a description of the allowable values.
If the unset_environment parameter is True, sd_notify() will unset
the $NOTIFY_SOCKET environment variable before returning (regardless of
whether the function call itself succeeded or not). Further calls to
sd_notify() will then fail, but the variable is no longer inherited by
child processes.
"""
addr = os.environ.get('NOTIFY_SOCKET')
if addr is None:
# not run in a service, just a noop
return
try:
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM | socket.SOCK_CLOEXEC)
if addr[0] == '@':
addr = '\0' + addr[1:]
sock.connect(addr)
sock.sendall(state.encode('utf-8'))
except Exception:
logger.debug("Exception while invoking sd_notify()", exc_info=True)
finally:
if unset_environment:
os.environ.pop('NOTIFY_SOCKET')
sock.close()
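One detail of the new ``sd_notify`` worth calling out is the address translation: a ``NOTIFY_SOCKET`` value starting with ``@`` denotes a Linux abstract-namespace socket, and the ``@`` must be replaced with a NUL byte before ``connect()``. A standalone sketch with a hypothetical address:

```python
# Illustrative abstract-namespace NOTIFY_SOCKET value.
addr = "@/org/freedesktop/systemd1/notify"
if addr[0] == "@":
    addr = "\0" + addr[1:]
```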

View File

@ -2,11 +2,12 @@
# #
# This file is part of gunicorn released under the MIT license. # This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information. # See the NOTICE for more information.
import ast
import email.utils import email.utils
import errno import errno
import fcntl import fcntl
import html import html
import importlib
import inspect import inspect
import io import io
import logging import logging
@ -53,45 +54,8 @@ except ImportError:
pass pass
try:
from importlib import import_module
except ImportError:
def _resolve_name(name, package, level):
"""Return the absolute name of the module to be imported."""
if not hasattr(package, 'rindex'):
raise ValueError("'package' not set to a string")
dot = len(package)
for _ in range(level, 1, -1):
try:
dot = package.rindex('.', 0, dot)
except ValueError:
msg = "attempted relative import beyond top-level package"
raise ValueError(msg)
return "%s.%s" % (package[:dot], name)
def import_module(name, package=None):
"""Import a module.
The 'package' argument is required when performing a relative import. It
specifies the package to use as the anchor point from which to resolve the
relative import to an absolute import.
"""
if name.startswith('.'):
if not package:
raise TypeError("relative imports require the 'package' argument")
level = 0
for character in name:
if character != '.':
break
level += 1
name = _resolve_name(name[level:], package, level)
__import__(name)
return sys.modules[name]
def load_class(uri, default="gunicorn.workers.sync.SyncWorker", def load_class(uri, default="gunicorn.workers.sync.SyncWorker",
section="gunicorn.workers"): section="gunicorn.workers"):
if inspect.isclass(uri): if inspect.isclass(uri):
return uri return uri
if uri.startswith("egg:"): if uri.startswith("egg:"):
@ -105,7 +69,7 @@ def load_class(uri, default="gunicorn.workers.sync.SyncWorker",
try: try:
return pkg_resources.load_entry_point(dist, section, name) return pkg_resources.load_entry_point(dist, section, name)
except: except Exception:
exc = traceback.format_exc() exc = traceback.format_exc()
msg = "class uri %r invalid or not found: \n\n[%s]" msg = "class uri %r invalid or not found: \n\n[%s]"
raise RuntimeError(msg % (uri, exc)) raise RuntimeError(msg % (uri, exc))
@ -121,9 +85,10 @@ def load_class(uri, default="gunicorn.workers.sync.SyncWorker",
break break
try: try:
return pkg_resources.load_entry_point("gunicorn", return pkg_resources.load_entry_point(
section, uri) "gunicorn", section, uri
except: )
except Exception:
exc = traceback.format_exc() exc = traceback.format_exc()
msg = "class uri %r invalid or not found: \n\n[%s]" msg = "class uri %r invalid or not found: \n\n[%s]"
raise RuntimeError(msg % (uri, exc)) raise RuntimeError(msg % (uri, exc))
@ -131,8 +96,8 @@ def load_class(uri, default="gunicorn.workers.sync.SyncWorker",
klass = components.pop(-1) klass = components.pop(-1)
try: try:
mod = import_module('.'.join(components)) mod = importlib.import_module('.'.join(components))
except: except Exception:
exc = traceback.format_exc() exc = traceback.format_exc()
msg = "class uri %r invalid or not found: \n\n[%s]" msg = "class uri %r invalid or not found: \n\n[%s]"
raise RuntimeError(msg % (uri, exc)) raise RuntimeError(msg % (uri, exc))
@ -180,7 +145,7 @@ def set_owner_process(uid, gid, initgroups=False):
elif gid != os.getgid(): elif gid != os.getgid():
os.setgid(gid) os.setgid(gid)
if uid: if uid and uid != os.getuid():
os.setuid(uid) os.setuid(uid)
@ -190,7 +155,7 @@ def chown(path, uid, gid):
if sys.platform.startswith("win"): if sys.platform.startswith("win"):
def _waitfor(func, pathname, waitall=False): def _waitfor(func, pathname, waitall=False):
# Peform the operation # Perform the operation
func(pathname) func(pathname)
# Now setup the wait loop # Now setup the wait loop
if waitall: if waitall:
@ -247,33 +212,35 @@ def is_ipv6(addr):
return True return True
def parse_address(netloc, default_port=8000): def parse_address(netloc, default_port='8000'):
if re.match(r'unix:(//)?', netloc): if re.match(r'unix:(//)?', netloc):
return re.split(r'unix:(//)?', netloc)[-1] return re.split(r'unix:(//)?', netloc)[-1]
if netloc.startswith("fd://"):
fd = netloc[5:]
try:
return int(fd)
except ValueError:
raise RuntimeError("%r is not a valid file descriptor." % fd) from None
if netloc.startswith("tcp://"): if netloc.startswith("tcp://"):
netloc = netloc.split("tcp://")[1] netloc = netloc.split("tcp://")[1]
host, port = netloc, default_port
# get host
if '[' in netloc and ']' in netloc: if '[' in netloc and ']' in netloc:
host = netloc.split(']')[0][1:].lower() host = netloc.split(']')[0][1:]
port = (netloc.split(']:') + [default_port])[1]
elif ':' in netloc: elif ':' in netloc:
host = netloc.split(':')[0].lower() host, port = (netloc.split(':') + [default_port])[:2]
elif netloc == "": elif netloc == "":
host = "0.0.0.0" host, port = "0.0.0.0", default_port
else:
host = netloc.lower()
#get port try:
netloc = netloc.split(']')[-1]
if ":" in netloc:
port = netloc.split(':', 1)[1]
if not port.isdigit():
raise RuntimeError("%r is not a valid port number." % port)
port = int(port) port = int(port)
else: except ValueError:
port = default_port raise RuntimeError("%r is not a valid port number." % port)
return (host, port)
return host.lower(), port
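The rewritten `parse_address` accepts `unix://`, `fd://`, and `tcp://` forms, splits the port only after the closing `]` so IPv6 literals parse correctly, and lowercases the host just before returning. A self-contained sketch of the tcp host/port branch (the `unix://` and `fd://` handling is omitted; `parse_tcp_address` is a hypothetical name):

```python
def parse_tcp_address(netloc, default_port='8000'):
    # Simplified sketch of the host/port logic in the new parse_address.
    if netloc.startswith("tcp://"):
        netloc = netloc.split("tcp://")[1]

    host, port = netloc, default_port
    if '[' in netloc and ']' in netloc:
        host = netloc.split(']')[0][1:]                 # strip the brackets
        port = (netloc.split(']:') + [default_port])[1]  # port after ']:'
    elif ':' in netloc:
        host, port = (netloc.split(':') + [default_port])[:2]
    elif netloc == "":
        host, port = "0.0.0.0", default_port

    try:
        port = int(port)
    except ValueError:
        raise RuntimeError("%r is not a valid port number." % port)

    return host.lower(), port
```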
def close_on_exec(fd): def close_on_exec(fd):
@ -293,6 +260,7 @@ def close(sock):
except socket.error: except socket.error:
pass pass
try: try:
from os import closerange from os import closerange
except ImportError: except ImportError:
@ -354,31 +322,106 @@ def write_error(sock, status_int, reason, mesg):
write_nonblock(sock, http.encode('latin1')) write_nonblock(sock, http.encode('latin1'))
def _called_with_wrong_args(f):
"""Check whether calling a function raised a ``TypeError`` because
the call failed or because something in the function raised the
error.
:param f: The function that was called.
:return: ``True`` if the call failed.
"""
tb = sys.exc_info()[2]
try:
while tb is not None:
if tb.tb_frame.f_code is f.__code__:
# In the function, it was called successfully.
return False
tb = tb.tb_next
# Didn't reach the function.
return True
finally:
# Delete tb to break a circular reference in Python 2.
# https://docs.python.org/2/library/sys.html#sys.exc_info
del tb
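The traceback walk distinguishes a ``TypeError`` raised by calling the function with bad arguments (no traceback frame belongs to its code object, because its body was never entered) from one raised inside the function. A runnable illustration of the same frame check (helper names are illustrative):

```python
import sys

def called_with_wrong_args(f):
    # Same idea as _called_with_wrong_args above: if no traceback frame
    # belongs to f's code object, f's body was never entered, so the
    # call itself failed.
    tb = sys.exc_info()[2]
    while tb is not None:
        if tb.tb_frame.f_code is f.__code__:
            return False  # error was raised inside f
        tb = tb.tb_next
    return True  # the call failed before entering f

def factory(a, b):
    return len(a) + b  # TypeError here when b is not a number

def probe(*args, **kwargs):
    try:
        factory(*args, **kwargs)
    except TypeError:
        return called_with_wrong_args(factory)
    return None
```

`probe()` fails at the call site (missing arguments), while `probe("x", None)` raises inside `factory`, so the two cases come back differently.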
def import_app(module): def import_app(module):
parts = module.split(":", 1) parts = module.split(":", 1)
if len(parts) == 1: if len(parts) == 1:
module, obj = module, "application" obj = "application"
else: else:
module, obj = parts[0], parts[1] module, obj = parts[0], parts[1]
try: try:
__import__(module) mod = importlib.import_module(module)
except ImportError: except ImportError:
if module.endswith(".py") and os.path.exists(module): if module.endswith(".py") and os.path.exists(module):
msg = "Failed to find application, did you mean '%s:%s'?" msg = "Failed to find application, did you mean '%s:%s'?"
raise ImportError(msg % (module.rsplit(".", 1)[0], obj)) raise ImportError(msg % (module.rsplit(".", 1)[0], obj))
else: raise
raise
mod = sys.modules[module] # Parse obj as a single expression to determine if it's a valid
# attribute name or function call.
try:
expression = ast.parse(obj, mode="eval").body
except SyntaxError:
raise AppImportError(
"Failed to parse %r as an attribute name or function call." % obj
)
if isinstance(expression, ast.Name):
name = expression.id
args = kwargs = None
elif isinstance(expression, ast.Call):
# Ensure the function name is an attribute name only.
if not isinstance(expression.func, ast.Name):
raise AppImportError("Function reference must be a simple name: %r" % obj)
name = expression.func.id
# Parse the positional and keyword arguments as literals.
try:
args = [ast.literal_eval(arg) for arg in expression.args]
kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in expression.keywords}
except ValueError:
# literal_eval gives cryptic error messages, show a generic
# message with the full expression instead.
raise AppImportError(
"Failed to parse arguments as literal values: %r" % obj
)
else:
raise AppImportError(
"Failed to parse %r as an attribute name or function call." % obj
)
is_debug = logging.root.level == logging.DEBUG is_debug = logging.root.level == logging.DEBUG
try: try:
app = eval(obj, vars(mod)) app = getattr(mod, name)
except NameError: except AttributeError:
if is_debug: if is_debug:
traceback.print_exception(*sys.exc_info()) traceback.print_exception(*sys.exc_info())
raise AppImportError("Failed to find application object %r in %r" % (obj, module)) raise AppImportError("Failed to find attribute %r in %r." % (name, module))
# If the expression was a function call, call the retrieved object
# to get the real application.
if args is not None:
try:
app = app(*args, **kwargs)
except TypeError as e:
# If the TypeError was due to bad arguments to the factory
# function, show Python's nice error message without a
# traceback.
if _called_with_wrong_args(app):
raise AppImportError(
"".join(traceback.format_exception_only(TypeError, e)).strip()
)
# Otherwise it was raised from within the function, show the
# full traceback.
raise
if app is None: if app is None:
raise AppImportError("Failed to find application object: %r" % obj) raise AppImportError("Failed to find application object: %r" % obj)
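With this change an app spec such as `myproject.wsgi:create_app('prod', debug=False)` is parsed with `ast` instead of being handed to `eval`, so only a bare attribute name or a call with literal arguments is accepted. A sketch of just the parsing step (`parse_app_spec` is a hypothetical helper name):

```python
import ast

def parse_app_spec(obj):
    """Return (name, args, kwargs) for the object part of an app spec.

    args and kwargs are None for a bare attribute name. Mirrors the
    ast-based parsing added to import_app; hypothetical helper.
    """
    expression = ast.parse(obj, mode="eval").body
    if isinstance(expression, ast.Name):
        return expression.id, None, None
    if isinstance(expression, ast.Call) and isinstance(expression.func, ast.Name):
        # Arguments must be literals; anything else raises ValueError.
        args = [ast.literal_eval(arg) for arg in expression.args]
        kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in expression.keywords}
        return expression.func.id, args, kwargs
    raise ValueError("not an attribute name or literal-argument call: %r" % obj)
```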
@ -397,7 +440,7 @@ def getcwd():
cwd = os.environ['PWD'] cwd = os.environ['PWD']
else: else:
cwd = os.getcwd() cwd = os.getcwd()
except: except Exception:
cwd = os.getcwd() cwd = os.getcwd()
return cwd return cwd
@ -443,7 +486,10 @@ def daemonize(enable_stdio_inheritance=False):
closerange(0, 3) closerange(0, 3)
fd_null = os.open(REDIRECT_TO, os.O_RDWR) fd_null = os.open(REDIRECT_TO, os.O_RDWR)
# PEP 446, make fd for /dev/null inheritable
os.set_inheritable(fd_null, True)
# expect fd_null to always be 0 here, but in case it is not ...
if fd_null != 0: if fd_null != 0:
os.dup2(fd_null, 0) os.dup2(fd_null, 0)
@ -521,6 +567,7 @@ def to_bytestring(value, encoding="utf8"):
return value.encode(encoding) return value.encode(encoding)
def has_fileno(obj): def has_fileno(obj):
if not hasattr(obj, "fileno"): if not hasattr(obj, "fileno"):
return False return False


@ -7,7 +7,6 @@
SUPPORTED_WORKERS = { SUPPORTED_WORKERS = {
"sync": "gunicorn.workers.sync.SyncWorker", "sync": "gunicorn.workers.sync.SyncWorker",
"eventlet": "gunicorn.workers.geventlet.EventletWorker", "eventlet": "gunicorn.workers.geventlet.EventletWorker",
"gaiohttp": "gunicorn.workers.gaiohttp.AiohttpWorker",
"gevent": "gunicorn.workers.ggevent.GeventWorker", "gevent": "gunicorn.workers.ggevent.GeventWorker",
"gevent_wsgi": "gunicorn.workers.ggevent.GeventPyWSGIWorker", "gevent_wsgi": "gunicorn.workers.ggevent.GeventPyWSGIWorker",
"gevent_pywsgi": "gunicorn.workers.ggevent.GeventPyWSGIWorker", "gevent_pywsgi": "gunicorn.workers.ggevent.GeventPyWSGIWorker",


@ -1,168 +0,0 @@
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
import asyncio
import datetime
import functools
import logging
import os
try:
import ssl
except ImportError:
ssl = None
import gunicorn.workers.base as base
from aiohttp.wsgi import WSGIServerHttpProtocol as OldWSGIServerHttpProtocol
class WSGIServerHttpProtocol(OldWSGIServerHttpProtocol):
def log_access(self, request, environ, response, time):
self.logger.access(response, request, environ, datetime.timedelta(0, 0, time))
class AiohttpWorker(base.Worker):
def __init__(self, *args, **kw): # pragma: no cover
super().__init__(*args, **kw)
cfg = self.cfg
if cfg.is_ssl:
self.ssl_context = self._create_ssl_context(cfg)
else:
self.ssl_context = None
self.servers = []
self.connections = {}
def init_process(self):
# create new event_loop after fork
asyncio.get_event_loop().close()
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
super().init_process()
def run(self):
self._runner = asyncio.ensure_future(self._run(), loop=self.loop)
try:
self.loop.run_until_complete(self._runner)
finally:
self.loop.close()
def wrap_protocol(self, proto):
proto.connection_made = _wrp(
proto, proto.connection_made, self.connections)
proto.connection_lost = _wrp(
proto, proto.connection_lost, self.connections, False)
return proto
def factory(self, wsgi, addr):
# are we in debug level
is_debug = self.log.loglevel == logging.DEBUG
proto = WSGIServerHttpProtocol(
wsgi, readpayload=True,
loop=self.loop,
log=self.log,
debug=is_debug,
keep_alive=self.cfg.keepalive,
access_log=self.log.access_log,
access_log_format=self.cfg.access_log_format)
return self.wrap_protocol(proto)
def get_factory(self, sock, addr):
return functools.partial(self.factory, self.wsgi, addr)
@asyncio.coroutine
def close(self):
try:
if hasattr(self.wsgi, 'close'):
yield from self.wsgi.close()
except:
self.log.exception('Process shutdown exception')
@asyncio.coroutine
def _run(self):
for sock in self.sockets:
factory = self.get_factory(sock.sock, sock.cfg_addr)
self.servers.append(
(yield from self._create_server(factory, sock)))
# If our parent changed then we shut down.
pid = os.getpid()
try:
while self.alive or self.connections:
self.notify()
if (self.alive and
pid == os.getpid() and self.ppid != os.getppid()):
self.log.info("Parent changed, shutting down: %s", self)
self.alive = False
# stop accepting requests
if not self.alive:
if self.servers:
self.log.info(
"Stopping server: %s, connections: %s",
pid, len(self.connections))
for server in self.servers:
server.close()
self.servers.clear()
# prepare connections for closing
for conn in self.connections.values():
if hasattr(conn, 'closing'):
conn.closing()
yield from asyncio.sleep(1.0, loop=self.loop)
except KeyboardInterrupt:
pass
if self.servers:
for server in self.servers:
server.close()
yield from self.close()
@asyncio.coroutine
def _create_server(self, factory, sock):
return self.loop.create_server(factory, sock=sock.sock,
ssl=self.ssl_context)
@staticmethod
def _create_ssl_context(cfg):
""" Creates SSLContext instance for usage in asyncio.create_server.
See ssl.SSLSocket.__init__ for more details.
"""
ctx = ssl.SSLContext(cfg.ssl_version)
ctx.load_cert_chain(cfg.certfile, cfg.keyfile)
ctx.verify_mode = cfg.cert_reqs
if cfg.ca_certs:
ctx.load_verify_locations(cfg.ca_certs)
if cfg.ciphers:
ctx.set_ciphers(cfg.ciphers)
return ctx
class _wrp:
def __init__(self, proto, meth, tracking, add=True):
self._proto = proto
self._id = id(proto)
self._meth = meth
self._tracking = tracking
self._add = add
def __call__(self, *args):
if self._add:
self._tracking[self._id] = self._proto
elif self._id in self._tracking:
del self._tracking[self._id]
conn = self._meth(*args)
return conn


@ -28,8 +28,9 @@ from gunicorn.workers.workertmp import WorkerTmp
class Worker(object): class Worker(object):
SIGNALS = [getattr(signal, "SIG%s" % x) SIGNALS = [getattr(signal, "SIG%s" % x) for x in (
for x in "ABRT HUP QUIT INT TERM USR1 USR2 WINCH CHLD".split()] "ABRT HUP QUIT INT TERM USR1 USR2 WINCH CHLD".split()
)]
PIPE = [] PIPE = []
@ -51,8 +52,13 @@ class Worker(object):
self.reloader = None self.reloader = None
self.nr = 0 self.nr = 0
jitter = randint(0, cfg.max_requests_jitter)
self.max_requests = cfg.max_requests + jitter or sys.maxsize if cfg.max_requests > 0:
jitter = randint(0, cfg.max_requests_jitter)
self.max_requests = cfg.max_requests + jitter
else:
self.max_requests = sys.maxsize
self.alive = True self.alive = True
self.log = log self.log = log
self.tmp = WorkerTmp(cfg) self.tmp = WorkerTmp(cfg)
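The old expression `cfg.max_requests + jitter or sys.maxsize` misbehaved when `max_requests` was 0 but the jitter drew a nonzero value: the worker would restart after only `jitter` requests instead of never. The new branch adds jitter only when request-based restarts are enabled. A sketch of the fixed logic (hypothetical helper; the `rng` parameter is added here for determinism):

```python
import sys
from random import Random

def effective_max_requests(max_requests, max_requests_jitter, rng=None):
    # Mirrors the fixed Worker.__init__ logic: jitter is only added when
    # request-based restarts are enabled; max_requests == 0 means
    # "never restart".
    if max_requests > 0:
        rng = rng or Random()
        return max_requests + rng.randint(0, max_requests_jitter)
    return sys.maxsize
```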
@ -80,8 +86,7 @@ class Worker(object):
"""\ """\
If you override this method in a subclass, the last statement If you override this method in a subclass, the last statement
in the function should be to call this method with in the function should be to call this method with
super(MyWorkerClass, self).init_process() so that the ``run()`` super().init_process() so that the ``run()`` loop is initiated.
loop is initiated.
""" """
# set environment' variables # set environment' variables
@ -117,6 +122,7 @@ class Worker(object):
def changed(fname): def changed(fname):
self.log.info("Worker reloading: %s modified", fname) self.log.info("Worker reloading: %s modified", fname)
self.alive = False self.alive = False
os.write(self.PIPE[1], b"1")
self.cfg.worker_int(self) self.cfg.worker_int(self)
time.sleep(0.1) time.sleep(0.1)
sys.exit(0) sys.exit(0)
@ -124,9 +130,11 @@ class Worker(object):
reloader_cls = reloader_engines[self.cfg.reload_engine] reloader_cls = reloader_engines[self.cfg.reload_engine]
self.reloader = reloader_cls(extra_files=self.cfg.reload_extra_files, self.reloader = reloader_cls(extra_files=self.cfg.reload_extra_files,
callback=changed) callback=changed)
self.reloader.start()
self.load_wsgi() self.load_wsgi()
if self.reloader:
self.reloader.start()
self.cfg.post_worker_init(self) self.cfg.post_worker_init(self)
# Enter main run loop # Enter main run loop
@ -197,12 +205,14 @@ class Worker(object):
def handle_error(self, req, client, addr, exc): def handle_error(self, req, client, addr, exc):
request_start = datetime.now() request_start = datetime.now()
addr = addr or ('', -1) # unix socket case addr = addr or ('', -1) # unix socket case
if isinstance(exc, (InvalidRequestLine, InvalidRequestMethod, if isinstance(exc, (
InvalidHTTPVersion, InvalidHeader, InvalidHeaderName, InvalidRequestLine, InvalidRequestMethod,
LimitRequestLine, LimitRequestHeaders, InvalidHTTPVersion, InvalidHeader, InvalidHeaderName,
InvalidProxyLine, ForbiddenProxyRequest, LimitRequestLine, LimitRequestHeaders,
InvalidSchemeHeaders, InvalidProxyLine, ForbiddenProxyRequest,
SSLError)): InvalidSchemeHeaders,
SSLError,
)):
status_int = 400 status_int = 400
reason = "Bad Request" reason = "Bad Request"
@ -220,7 +230,9 @@ class Worker(object):
elif isinstance(exc, LimitRequestLine): elif isinstance(exc, LimitRequestLine):
mesg = "%s" % str(exc) mesg = "%s" % str(exc)
elif isinstance(exc, LimitRequestHeaders): elif isinstance(exc, LimitRequestHeaders):
reason = "Request Header Fields Too Large"
mesg = "Error parsing headers: '%s'" % str(exc) mesg = "Error parsing headers: '%s'" % str(exc)
status_int = 431
elif isinstance(exc, InvalidProxyLine): elif isinstance(exc, InvalidProxyLine):
mesg = "'%s'" % str(exc) mesg = "'%s'" % str(exc)
elif isinstance(exc, ForbiddenProxyRequest): elif isinstance(exc, ForbiddenProxyRequest):
@ -235,7 +247,7 @@ class Worker(object):
status_int = 403 status_int = 403
msg = "Invalid request from ip={ip}: {error}" msg = "Invalid request from ip={ip}: {error}"
self.log.debug(msg.format(ip=addr[0], error=str(exc))) self.log.warning(msg.format(ip=addr[0], error=str(exc)))
else: else:
if hasattr(req, "uri"): if hasattr(req, "uri"):
self.log.exception("Error handling request %s", req.uri) self.log.exception("Error handling request %s", req.uri)
@ -255,7 +267,7 @@ class Worker(object):
try: try:
util.write_error(client, status_int, reason, mesg) util.write_error(client, status_int, reason, mesg)
except: except Exception:
self.log.debug("Failed to send error message.") self.log.debug("Failed to send error message.")
def handle_winch(self, sig, fname): def handle_winch(self, sig, fname):


@ -20,7 +20,7 @@ ALREADY_HANDLED = object()
class AsyncWorker(base.Worker): class AsyncWorker(base.Worker):
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super(AsyncWorker, self).__init__(*args, **kwargs) super().__init__(*args, **kwargs)
self.worker_connections = self.cfg.worker_connections self.worker_connections = self.cfg.worker_connections
def timeout_ctx(self): def timeout_ctx(self):
@ -33,7 +33,7 @@ class AsyncWorker(base.Worker):
def handle(self, listener, client, addr): def handle(self, listener, client, addr):
req = None req = None
try: try:
parser = http.RequestParser(self.cfg, client) parser = http.RequestParser(self.cfg, client, addr)
try: try:
listener_name = listener.getsockname() listener_name = listener.getsockname()
if not self.cfg.keepalive: if not self.cfg.keepalive:
@ -73,11 +73,13 @@ class AsyncWorker(base.Worker):
self.log.debug("Error processing SSL request.") self.log.debug("Error processing SSL request.")
self.handle_error(req, client, addr, e) self.handle_error(req, client, addr, e)
except EnvironmentError as e: except EnvironmentError as e:
if e.errno not in (errno.EPIPE, errno.ECONNRESET): if e.errno not in (errno.EPIPE, errno.ECONNRESET, errno.ENOTCONN):
self.log.exception("Socket error processing request.") self.log.exception("Socket error processing request.")
else: else:
if e.errno == errno.ECONNRESET: if e.errno == errno.ECONNRESET:
self.log.debug("Ignoring connection reset") self.log.debug("Ignoring connection reset")
elif e.errno == errno.ENOTCONN:
self.log.debug("Ignoring socket not connected")
else: else:
self.log.debug("Ignoring EPIPE") self.log.debug("Ignoring EPIPE")
except Exception as e: except Exception as e:
@ -92,15 +94,15 @@ class AsyncWorker(base.Worker):
try: try:
self.cfg.pre_request(self, req) self.cfg.pre_request(self, req)
resp, environ = wsgi.create(req, sock, addr, resp, environ = wsgi.create(req, sock, addr,
listener_name, self.cfg) listener_name, self.cfg)
environ["wsgi.multithread"] = True environ["wsgi.multithread"] = True
self.nr += 1 self.nr += 1
if self.alive and self.nr >= self.max_requests: if self.nr >= self.max_requests:
self.log.info("Autorestarting worker after current request.") if self.alive:
resp.force_close() self.log.info("Autorestarting worker after current request.")
self.alive = False self.alive = False
if not self.cfg.keepalive: if not self.alive or not self.cfg.keepalive:
resp.force_close() resp.force_close()
respiter = self.wsgi(environ, resp.start_response) respiter = self.wsgi(environ, resp.start_response)


@ -1,22 +0,0 @@
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
from gunicorn import util
try:
import aiohttp # pylint: disable=unused-import
except ImportError:
raise RuntimeError("You need aiohttp installed to use this worker.")
else:
try:
from aiohttp.worker import GunicornWebWorker as AiohttpWorker
except ImportError:
from gunicorn.workers._gaiohttp import AiohttpWorker
util.warn(
"The 'gaiohttp' worker is deprecated. See --worker-class "
"documentation for more information."
)
__all__ = ['AiohttpWorker']


@ -4,37 +4,66 @@
# See the NOTICE for more information. # See the NOTICE for more information.
from functools import partial from functools import partial
import errno
import os
import sys import sys
try: try:
import eventlet import eventlet
except ImportError: except ImportError:
raise RuntimeError("You need eventlet installed to use this worker.") raise RuntimeError("eventlet worker requires eventlet 0.24.1 or higher")
else:
# validate the eventlet version from pkg_resources import parse_version
if eventlet.version_info < (0, 9, 7): if parse_version(eventlet.__version__) < parse_version('0.24.1'):
raise RuntimeError("You need eventlet >= 0.9.7") raise RuntimeError("eventlet worker requires eventlet 0.24.1 or higher")
from eventlet import hubs, greenthread from eventlet import hubs, greenthread
from eventlet.greenio import GreenSocket from eventlet.greenio import GreenSocket
from eventlet.hubs import trampoline import eventlet.wsgi
from eventlet.wsgi import ALREADY_HANDLED as EVENTLET_ALREADY_HANDLED
import greenlet import greenlet
from gunicorn.workers.base_async import AsyncWorker from gunicorn.workers.base_async import AsyncWorker
def _eventlet_sendfile(fdout, fdin, offset, nbytes): # ALREADY_HANDLED is removed in 0.30.3+ now it's `WSGI_LOCAL.already_handled: bool`
while True: # https://github.com/eventlet/eventlet/pull/544
try: EVENTLET_WSGI_LOCAL = getattr(eventlet.wsgi, "WSGI_LOCAL", None)
return os.sendfile(fdout, fdin, offset, nbytes) EVENTLET_ALREADY_HANDLED = getattr(eventlet.wsgi, "ALREADY_HANDLED", None)
except OSError as e:
if e.args[0] == errno.EAGAIN:
trampoline(fdout, write=True) def _eventlet_socket_sendfile(self, file, offset=0, count=None):
else: # Based on the implementation in gevent which in turn is slightly
raise # modified from the standard library implementation.
if self.gettimeout() == 0:
raise ValueError("non-blocking sockets are not supported")
if offset:
file.seek(offset)
blocksize = min(count, 8192) if count else 8192
total_sent = 0
# localize variable access to minimize overhead
file_read = file.read
sock_send = self.send
try:
while True:
if count:
blocksize = min(count - total_sent, blocksize)
if blocksize <= 0:
break
data = memoryview(file_read(blocksize))
if not data:
break # EOF
while True:
try:
sent = sock_send(data)
except BlockingIOError:
continue
else:
total_sent += sent
if sent < len(data):
data = data[sent:]
else:
break
return total_sent
finally:
if total_sent > 0 and hasattr(file, 'seek'):
file.seek(offset + total_sent)
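The replacement sendfile works on any file-like object: read a block, retry partial sends until the block is flushed, stop at EOF or once `count` bytes are out. The same block/retry structure demonstrated against in-memory objects (the `BlockingIOError` retry is dropped here since the stand-in sink never blocks):

```python
import io

def copy_file_to_sink(file, send, count=None, blocksize=8192):
    # Same loop shape as _eventlet_socket_sendfile above, with `send`
    # standing in for a socket send that may accept only part of the data.
    total_sent = 0
    while True:
        if count:
            blocksize = min(count - total_sent, blocksize)
            if blocksize <= 0:
                break
        data = memoryview(file.read(blocksize))
        if not data:
            break  # EOF
        while True:
            sent = send(data)
            total_sent += sent
            if sent < len(data):
                data = data[sent:]  # resend the remainder
            else:
                break
    return total_sent

# Demo sink whose send() accepts at most 5 bytes per call, imitating
# a socket that does partial writes.
sink = io.BytesIO()

def partial_send(data):
    chunk = bytes(data[:5])
    sink.write(chunk)
    return len(chunk)
```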
def _eventlet_serve(sock, handle, concurrency): def _eventlet_serve(sock, handle, concurrency):
@ -79,31 +108,44 @@ def _eventlet_stop(client, server, conn):
def patch_sendfile(): def patch_sendfile():
setattr(os, "sendfile", _eventlet_sendfile) # As of eventlet 0.25.1, GreenSocket.sendfile doesn't exist,
# meaning the native implementations of socket.sendfile will be used.
# If os.sendfile exists, it will attempt to use that, failing explicitly
# if the socket is in non-blocking mode, which the underlying
# socket object /is/. Even the regular _sendfile_use_send will
# fail in that way; plus, it would use the underlying socket.send which isn't
# properly cooperative. So we have to monkey-patch a working socket.sendfile()
# into GreenSocket; in this method, `self.send` will be the GreenSocket's
# send method which is properly cooperative.
if not hasattr(GreenSocket, 'sendfile'):
GreenSocket.sendfile = _eventlet_socket_sendfile
class EventletWorker(AsyncWorker): class EventletWorker(AsyncWorker):
def patch(self): def patch(self):
hubs.use_hub() hubs.use_hub()
eventlet.monkey_patch(os=False) eventlet.monkey_patch()
patch_sendfile() patch_sendfile()
def is_already_handled(self, respiter): def is_already_handled(self, respiter):
# eventlet >= 0.30.3
if getattr(EVENTLET_WSGI_LOCAL, "already_handled", None):
raise StopIteration()
# eventlet < 0.30.3
if respiter == EVENTLET_ALREADY_HANDLED: if respiter == EVENTLET_ALREADY_HANDLED:
raise StopIteration() raise StopIteration()
else: return super().is_already_handled(respiter)
return super(EventletWorker, self).is_already_handled(respiter)
def init_process(self): def init_process(self):
super(EventletWorker, self).init_process()
self.patch() self.patch()
super().init_process()
def handle_quit(self, sig, frame): def handle_quit(self, sig, frame):
eventlet.spawn(super(EventletWorker, self).handle_quit, sig, frame) eventlet.spawn(super().handle_quit, sig, frame)
def handle_usr1(self, sig, frame): def handle_usr1(self, sig, frame):
eventlet.spawn(super(EventletWorker, self).handle_usr1, sig, frame) eventlet.spawn(super().handle_usr1, sig, frame)
def timeout_ctx(self): def timeout_ctx(self):
return eventlet.Timeout(self.cfg.keepalive or None, False) return eventlet.Timeout(self.cfg.keepalive or None, False)
@ -113,7 +155,7 @@ class EventletWorker(AsyncWorker):
client = eventlet.wrap_ssl(client, server_side=True, client = eventlet.wrap_ssl(client, server_side=True,
**self.cfg.ssl_options) **self.cfg.ssl_options)
super(EventletWorker, self).handle(listener, client, addr) super().handle(listener, client, addr)
def run(self): def run(self):
acceptors = [] acceptors = []


@ -3,27 +3,24 @@
# This file is part of gunicorn released under the MIT license. # This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information. # See the NOTICE for more information.
import errno
import os import os
import sys import sys
from datetime import datetime from datetime import datetime
from functools import partial from functools import partial
import time import time
_socket = __import__("socket")
# workaround on osx, disable kqueue
if sys.platform == "darwin":
os.environ['EVENT_NOKQUEUE'] = "1"
try: try:
import gevent import gevent
except ImportError: except ImportError:
raise RuntimeError("You need gevent installed to use this worker.") raise RuntimeError("gevent worker requires gevent 1.4 or higher")
else:
from pkg_resources import parse_version
if parse_version(gevent.__version__) < parse_version('1.4'):
raise RuntimeError("gevent worker requires gevent 1.4 or higher")
from gevent.pool import Pool from gevent.pool import Pool
from gevent.server import StreamServer from gevent.server import StreamServer
from gevent.socket import wait_write, socket from gevent import hub, monkey, socket, pywsgi
from gevent import pywsgi
import gunicorn import gunicorn
from gunicorn.http.wsgi import base_environ from gunicorn.http.wsgi import base_environ
@ -31,19 +28,6 @@ from gunicorn.workers.base_async import AsyncWorker
VERSION = "gevent/%s gunicorn/%s" % (gevent.__version__, gunicorn.__version__) VERSION = "gevent/%s gunicorn/%s" % (gevent.__version__, gunicorn.__version__)
def _gevent_sendfile(fdout, fdin, offset, nbytes):
while True:
try:
return os.sendfile(fdout, fdin, offset, nbytes)
except OSError as e:
if e.args[0] == errno.EAGAIN:
wait_write(fdout)
else:
raise
def patch_sendfile():
setattr(os, "sendfile", _gevent_sendfile)
class GeventWorker(AsyncWorker): class GeventWorker(AsyncWorker):
@ -51,27 +35,17 @@ class GeventWorker(AsyncWorker):
wsgi_handler = None wsgi_handler = None
def patch(self): def patch(self):
from gevent import monkey monkey.patch_all()
monkey.noisy = False
# if the new version is used make sure to patch subprocess
if gevent.version_info[0] == 0:
monkey.patch_all()
else:
monkey.patch_all(subprocess=True)
# monkey patch sendfile to make it none blocking
patch_sendfile()
# patch sockets # patch sockets
sockets = [] sockets = []
for s in self.sockets: for s in self.sockets:
sockets.append(socket(s.FAMILY, _socket.SOCK_STREAM, sockets.append(socket.socket(s.FAMILY, socket.SOCK_STREAM,
fileno=s.sock.fileno())) fileno=s.sock.fileno()))
self.sockets = sockets self.sockets = sockets
def notify(self): def notify(self):
super(GeventWorker, self).notify() super().notify()
if self.ppid != os.getppid(): if self.ppid != os.getppid():
self.log.info("Parent changed, shutting down: %s", self) self.log.info("Parent changed, shutting down: %s", self)
sys.exit(0) sys.exit(0)
@ -102,6 +76,8 @@ class GeventWorker(AsyncWorker):
else: else:
hfun = partial(self.handle, s) hfun = partial(self.handle, s)
server = StreamServer(s, handle=hfun, spawn=pool, **ssl_args) server = StreamServer(s, handle=hfun, spawn=pool, **ssl_args)
if self.cfg.workers > 1:
server.max_accept = 1
server.start() server.start()
servers.append(server) servers.append(server)
@ -137,19 +113,18 @@ class GeventWorker(AsyncWorker):
self.log.warning("Worker graceful timeout (pid:%s)" % self.pid) self.log.warning("Worker graceful timeout (pid:%s)" % self.pid)
for server in servers: for server in servers:
server.stop(timeout=1) server.stop(timeout=1)
except: except Exception:
pass pass
def handle(self, listener, client, addr): def handle(self, listener, client, addr):
# Connected socket timeout defaults to socket.getdefaulttimeout(). # Connected socket timeout defaults to socket.getdefaulttimeout().
# This forces to blocking mode. # This forces to blocking mode.
client.setblocking(1) client.setblocking(1)
super(GeventWorker, self).handle(listener, client, addr) super().handle(listener, client, addr)
def handle_request(self, listener_name, req, sock, addr): def handle_request(self, listener_name, req, sock, addr):
try: try:
super(GeventWorker, self).handle_request(listener_name, req, sock, super().handle_request(listener_name, req, sock, addr)
addr)
except gevent.GreenletExit: except gevent.GreenletExit:
pass pass
except SystemExit: except SystemExit:
@ -158,41 +133,17 @@ class GeventWorker(AsyncWorker):
def handle_quit(self, sig, frame): def handle_quit(self, sig, frame):
# Move this out of the signal handler so we can use # Move this out of the signal handler so we can use
# blocking calls. See #1126 # blocking calls. See #1126
gevent.spawn(super(GeventWorker, self).handle_quit, sig, frame) gevent.spawn(super().handle_quit, sig, frame)
def handle_usr1(self, sig, frame): def handle_usr1(self, sig, frame):
# Make the gevent workers handle the usr1 signal # Make the gevent workers handle the usr1 signal
# by deferring to a new greenlet. See #1645 # by deferring to a new greenlet. See #1645
gevent.spawn(super(GeventWorker, self).handle_usr1, sig, frame) gevent.spawn(super().handle_usr1, sig, frame)
if gevent.version_info[0] == 0:

    def init_process(self):
        # monkey patch here
        self.patch()

        # reinit the hub
        import gevent.core
        gevent.core.reinit()

        #gevent 0.13 and older doesn't reinitialize dns for us after forking
        #here's the workaround
        gevent.core.dns_shutdown(fail_requests=1)
        gevent.core.dns_init()
        super(GeventWorker, self).init_process()

else:

    def init_process(self):
        # monkey patch here
        self.patch()

        # reinit the hub
        from gevent import hub
        hub.reinit()

        # then initialize the process
        super(GeventWorker, self).init_process()

def init_process(self):
    self.patch()
    hub.reinit()
    super().init_process()
class GeventResponse(object): class GeventResponse(object):
@ -222,7 +173,7 @@ class PyWSGIHandler(pywsgi.WSGIHandler):
self.server.log.access(resp, req_headers, self.environ, response_time) self.server.log.access(resp, req_headers, self.environ, response_time)
def get_environ(self): def get_environ(self):
env = super(PyWSGIHandler, self).get_environ() env = super().get_environ()
env['gunicorn.sock'] = self.socket env['gunicorn.sock'] = self.socket
env['RAW_URI'] = self.path env['RAW_URI'] = self.path
return env return env
@ -4,12 +4,14 @@
# See the NOTICE for more information. # See the NOTICE for more information.
# design: # design:
# a threaded worker accepts connections in the main loop, accepted # A threaded worker accepts connections in the main loop, accepted
# connections are are added to the thread pool as a connection job. On # connections are added to the thread pool as a connection job.
# keepalive connections are put back in the loop waiting for an event. # Keepalive connections are put back in the loop waiting for an event.
# If no event happen after the keep alive timeout, the connectoin is # If no event happen after the keep alive timeout, the connection is
# closed. # closed.
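The design comment above can be sketched as a small accept-and-dispatch loop. This is illustrative only, not gunicorn's implementation: the main loop accepts connections, hands each one to a thread pool as a job, and would re-register keepalive sockets with the selector until they become readable again (names here are invented for the sketch).

```python
import selectors
import socket
from concurrent.futures import ThreadPoolExecutor

def serve_once(sock):
    # read one request and answer it; the caller decides whether to
    # park the socket for keepalive afterwards
    sock.recv(1024)
    sock.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    return sock

def main_loop(server, threads=4, timeout=1.0):
    # accept in the main loop and enqueue connection jobs on the pool;
    # keepalive sockets would be re-registered with the selector here
    pool = ThreadPoolExecutor(max_workers=threads)
    sel = selectors.DefaultSelector()
    sel.register(server, selectors.EVENT_READ)
    for key, _ in sel.select(timeout=timeout):
        conn, _addr = key.fileobj.accept()
        pool.submit(serve_once, conn)
    pool.shutdown(wait=True)
```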
# pylint: disable=no-else-break
import concurrent.futures as futures
import errno import errno
import os import os
import selectors import selectors
@ -27,13 +29,6 @@ from .. import http
from .. import util from .. import util
from ..http import wsgi from ..http import wsgi
try:
import concurrent.futures as futures
except ImportError:
raise RuntimeError("""
You need to install the 'futures' package to use this worker with this
Python version.
""")
class TConn(object): class TConn(object):
@ -55,10 +50,10 @@ class TConn(object):
# wrap the socket if needed # wrap the socket if needed
if self.cfg.is_ssl: if self.cfg.is_ssl:
self.sock = ssl.wrap_socket(self.sock, server_side=True, self.sock = ssl.wrap_socket(self.sock, server_side=True,
**self.cfg.ssl_options) **self.cfg.ssl_options)
# initialize the parser # initialize the parser
self.parser = http.RequestParser(self.cfg, self.sock) self.parser = http.RequestParser(self.cfg, self.sock, self.client)
def set_timeout(self): def set_timeout(self):
# set the timeout # set the timeout
@ -71,7 +66,7 @@ class TConn(object):
class ThreadWorker(base.Worker): class ThreadWorker(base.Worker):
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super(ThreadWorker, self).__init__(*args, **kwargs) super().__init__(*args, **kwargs)
self.worker_connections = self.cfg.worker_connections self.worker_connections = self.cfg.worker_connections
self.max_keepalived = self.cfg.worker_connections - self.cfg.threads self.max_keepalived = self.cfg.worker_connections - self.cfg.threads
# initialise the pool # initialise the pool
@ -88,13 +83,17 @@ class ThreadWorker(base.Worker):
if max_keepalived <= 0 and cfg.keepalive: if max_keepalived <= 0 and cfg.keepalive:
log.warning("No keepalived connections can be handled. " + log.warning("No keepalived connections can be handled. " +
"Check the number of worker connections and threads.") "Check the number of worker connections and threads.")
def init_process(self): def init_process(self):
self.tpool = futures.ThreadPoolExecutor(max_workers=self.cfg.threads) self.tpool = self.get_thread_pool()
self.poller = selectors.DefaultSelector() self.poller = selectors.DefaultSelector()
self._lock = RLock() self._lock = RLock()
super(ThreadWorker, self).init_process() super().init_process()
def get_thread_pool(self):
"""Override this method to customize how the thread pool is created"""
return futures.ThreadPoolExecutor(max_workers=self.cfg.threads)
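The new `get_thread_pool()` hook lets a subclass supply its own executor. A hedged sketch of the pattern, using a stand-in for the worker base class (`BaseWorker` and `NamedPoolWorker` are illustrative names, not gunicorn classes):

```python
import concurrent.futures as futures

class BaseWorker:
    # stand-in for the worker above: init_process builds the pool
    # through the overridable hook
    threads = 4

    def init_process(self):
        self.tpool = self.get_thread_pool()

    def get_thread_pool(self):
        return futures.ThreadPoolExecutor(max_workers=self.threads)

class NamedPoolWorker(BaseWorker):
    # override the hook to customize the executor, e.g. name its threads
    def get_thread_pool(self):
        return futures.ThreadPoolExecutor(
            max_workers=self.threads, thread_name_prefix="gthread")
```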
def handle_quit(self, sig, frame): def handle_quit(self, sig, frame):
self.alive = False self.alive = False
@ -124,8 +123,8 @@ class ThreadWorker(base.Worker):
# enqueue the job # enqueue the job
self.enqueue_req(conn) self.enqueue_req(conn)
except EnvironmentError as e: except EnvironmentError as e:
if e.errno not in (errno.EAGAIN, if e.errno not in (errno.EAGAIN, errno.ECONNABORTED,
errno.ECONNABORTED, errno.EWOULDBLOCK): errno.EWOULDBLOCK):
raise raise
def reuse_connection(self, conn, client): def reuse_connection(self, conn, client):
@ -205,11 +204,11 @@ class ThreadWorker(base.Worker):
# check (but do not wait) for finished requests # check (but do not wait) for finished requests
result = futures.wait(self.futures, timeout=0, result = futures.wait(self.futures, timeout=0,
return_when=futures.FIRST_COMPLETED) return_when=futures.FIRST_COMPLETED)
else: else:
# wait for a request to finish # wait for a request to finish
result = futures.wait(self.futures, timeout=1.0, result = futures.wait(self.futures, timeout=1.0,
return_when=futures.FIRST_COMPLETED) return_when=futures.FIRST_COMPLETED)
# clean up finished requests # clean up finished requests
for fut in result.done: for fut in result.done:
@ -218,7 +217,7 @@ class ThreadWorker(base.Worker):
if not self.is_parent_alive(): if not self.is_parent_alive():
break break
# hanle keepalive timeouts # handle keepalive timeouts
self.murder_keepalived() self.murder_keepalived()
self.tpool.shutdown(False) self.tpool.shutdown(False)
@ -239,7 +238,7 @@ class ThreadWorker(base.Worker):
(keepalive, conn) = fs.result() (keepalive, conn) = fs.result()
# if the connection should be kept alived add it # if the connection should be kept alived add it
# to the eventloop and record it # to the eventloop and record it
if keepalive: if keepalive and self.alive:
# flag the socket as non blocked # flag the socket as non blocked
conn.sock.setblocking(False) conn.sock.setblocking(False)
@ -250,11 +249,11 @@ class ThreadWorker(base.Worker):
# add the socket to the event loop # add the socket to the event loop
self.poller.register(conn.sock, selectors.EVENT_READ, self.poller.register(conn.sock, selectors.EVENT_READ,
partial(self.reuse_connection, conn)) partial(self.reuse_connection, conn))
else: else:
self.nr_conns -= 1 self.nr_conns -= 1
conn.close() conn.close()
except: except Exception:
# an exception happened, make sure to close the # an exception happened, make sure to close the
# socket. # socket.
self.nr_conns -= 1 self.nr_conns -= 1
@ -286,11 +285,13 @@ class ThreadWorker(base.Worker):
self.handle_error(req, conn.sock, conn.client, e) self.handle_error(req, conn.sock, conn.client, e)
except EnvironmentError as e: except EnvironmentError as e:
if e.errno not in (errno.EPIPE, errno.ECONNRESET): if e.errno not in (errno.EPIPE, errno.ECONNRESET, errno.ENOTCONN):
self.log.exception("Socket error processing request.") self.log.exception("Socket error processing request.")
else: else:
if e.errno == errno.ECONNRESET: if e.errno == errno.ECONNRESET:
self.log.debug("Ignoring connection reset") self.log.debug("Ignoring connection reset")
elif e.errno == errno.ENOTCONN:
self.log.debug("Ignoring socket not connected")
else: else:
self.log.debug("Ignoring connection epipe") self.log.debug("Ignoring connection epipe")
except Exception as e: except Exception as e:
@ -305,15 +306,16 @@ class ThreadWorker(base.Worker):
self.cfg.pre_request(self, req) self.cfg.pre_request(self, req)
request_start = datetime.now() request_start = datetime.now()
resp, environ = wsgi.create(req, conn.sock, conn.client, resp, environ = wsgi.create(req, conn.sock, conn.client,
conn.server, self.cfg) conn.server, self.cfg)
environ["wsgi.multithread"] = True environ["wsgi.multithread"] = True
self.nr += 1 self.nr += 1
if self.alive and self.nr >= self.max_requests:
    self.log.info("Autorestarting worker after current request.")
    resp.force_close()
    self.alive = False

if self.nr >= self.max_requests:
    if self.alive:
        self.log.info("Autorestarting worker after current request.")
        self.alive = False
    resp.force_close()
if not self.cfg.keepalive: if not self.alive or not self.cfg.keepalive:
resp.force_close() resp.force_close()
elif len(self._keep) >= self.max_keepalived: elif len(self._keep) >= self.max_keepalived:
resp.force_close() resp.force_close()
@ -19,9 +19,13 @@ from gunicorn.workers.base import Worker
from gunicorn import __version__ as gversion from gunicorn import __version__ as gversion
# `io_loop` arguments to many Tornado functions have been removed in Tornado 5.0
# <http://www.tornadoweb.org/en/stable/releases/v5.0.0.html#backwards-compatibility-notes>
IOLOOP_PARAMETER_REMOVED = tornado.version_info >= (5, 0, 0)

# Tornado 5.0 updated its IOLoop, and the `io_loop` arguments to many
# Tornado functions have been removed in Tornado 5.0. Also, it no
# longer stores PeriodicCallbacks in ioloop._callbacks. Instead we store
# them on our side, and use stop() on them when stopping the worker.
# See https://www.tornadoweb.org/en/stable/releases/v5.0.0.html#backwards-compatibility-notes
# for more details.
TORNADO5 = tornado.version_info >= (5, 0, 0)
class TornadoWorker(Worker): class TornadoWorker(Worker):
@ -40,7 +44,7 @@ class TornadoWorker(Worker):
def handle_exit(self, sig, frame): def handle_exit(self, sig, frame):
if self.alive: if self.alive:
super(TornadoWorker, self).handle_exit(sig, frame) super().handle_exit(sig, frame)
def handle_request(self): def handle_request(self):
self.nr += 1 self.nr += 1
@ -66,8 +70,13 @@ class TornadoWorker(Worker):
pass pass
self.server_alive = False self.server_alive = False
else: else:
if not self.ioloop._callbacks:
    self.ioloop.stop()

if TORNADO5:
    for callback in self.callbacks:
        callback.stop()
    self.ioloop.stop()
else:
    if not self.ioloop._callbacks:
        self.ioloop.stop()
def init_process(self): def init_process(self):
# IOLoop cannot survive a fork or be shared across processes # IOLoop cannot survive a fork or be shared across processes
@ -75,15 +84,19 @@ class TornadoWorker(Worker):
# should create its own IOLoop. We should clear current IOLoop # should create its own IOLoop. We should clear current IOLoop
# if exists before os.fork. # if exists before os.fork.
IOLoop.clear_current() IOLoop.clear_current()
super(TornadoWorker, self).init_process() super().init_process()
def run(self): def run(self):
self.ioloop = IOLoop.instance() self.ioloop = IOLoop.instance()
self.alive = True self.alive = True
self.server_alive = False self.server_alive = False
if IOLOOP_PARAMETER_REMOVED:
    PeriodicCallback(self.watchdog, 1000).start()
    PeriodicCallback(self.heartbeat, 1000).start()

if TORNADO5:
    self.callbacks = []
    self.callbacks.append(PeriodicCallback(self.watchdog, 1000))
    self.callbacks.append(PeriodicCallback(self.heartbeat, 1000))
    for callback in self.callbacks:
        callback.start()
else: else:
PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start() PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()
PeriodicCallback(self.heartbeat, 1000, io_loop=self.ioloop).start() PeriodicCallback(self.heartbeat, 1000, io_loop=self.ioloop).start()
@ -92,8 +105,12 @@ class TornadoWorker(Worker):
# instance of tornado.web.Application or is an # instance of tornado.web.Application or is an
# instance of tornado.wsgi.WSGIApplication # instance of tornado.wsgi.WSGIApplication
app = self.wsgi app = self.wsgi
if not isinstance(app, tornado.web.Application) or \
        isinstance(app, tornado.wsgi.WSGIApplication):
    app = WSGIContainer(app)

if tornado.version_info[0] < 6:
    if not isinstance(app, tornado.web.Application) or \
            isinstance(app, tornado.wsgi.WSGIApplication):
        app = WSGIContainer(app)
elif not isinstance(app, WSGIContainer):
    app = WSGIContainer(app)
# Monkey-patching HTTPConnection.finish to count the # Monkey-patching HTTPConnection.finish to count the
@ -127,13 +144,13 @@ class TornadoWorker(Worker):
# options # options
del _ssl_opt["do_handshake_on_connect"] del _ssl_opt["do_handshake_on_connect"]
del _ssl_opt["suppress_ragged_eofs"] del _ssl_opt["suppress_ragged_eofs"]
if IOLOOP_PARAMETER_REMOVED: if TORNADO5:
server = server_class(app, ssl_options=_ssl_opt) server = server_class(app, ssl_options=_ssl_opt)
else: else:
server = server_class(app, io_loop=self.ioloop, server = server_class(app, io_loop=self.ioloop,
ssl_options=_ssl_opt) ssl_options=_ssl_opt)
else: else:
if IOLOOP_PARAMETER_REMOVED: if TORNADO5:
server = server_class(app) server = server_class(app)
else: else:
server = server_class(app, io_loop=self.ioloop) server = server_class(app, io_loop=self.ioloop)
@ -17,8 +17,10 @@ import gunicorn.http.wsgi as wsgi
import gunicorn.util as util import gunicorn.util as util
import gunicorn.workers.base as base import gunicorn.workers.base as base
class StopWaiting(Exception): class StopWaiting(Exception):
""" exception raised to stop waiting for a connnection """ """ exception raised to stop waiting for a connection """
class SyncWorker(base.Worker): class SyncWorker(base.Worker):
@ -72,7 +74,7 @@ class SyncWorker(base.Worker):
except EnvironmentError as e: except EnvironmentError as e:
if e.errno not in (errno.EAGAIN, errno.ECONNABORTED, if e.errno not in (errno.EAGAIN, errno.ECONNABORTED,
errno.EWOULDBLOCK): errno.EWOULDBLOCK):
raise raise
if not self.is_parent_alive(): if not self.is_parent_alive():
@ -101,7 +103,7 @@ class SyncWorker(base.Worker):
self.accept(listener) self.accept(listener)
except EnvironmentError as e: except EnvironmentError as e:
if e.errno not in (errno.EAGAIN, errno.ECONNABORTED, if e.errno not in (errno.EAGAIN, errno.ECONNABORTED,
errno.EWOULDBLOCK): errno.EWOULDBLOCK):
raise raise
if not self.is_parent_alive(): if not self.is_parent_alive():
@ -127,9 +129,9 @@ class SyncWorker(base.Worker):
try: try:
if self.cfg.is_ssl: if self.cfg.is_ssl:
client = ssl.wrap_socket(client, server_side=True, client = ssl.wrap_socket(client, server_side=True,
**self.cfg.ssl_options) **self.cfg.ssl_options)
parser = http.RequestParser(self.cfg, client) parser = http.RequestParser(self.cfg, client, addr)
req = next(parser) req = next(parser)
self.handle_request(listener, req, client, addr) self.handle_request(listener, req, client, addr)
except http.errors.NoMoreData as e: except http.errors.NoMoreData as e:
@ -144,11 +146,13 @@ class SyncWorker(base.Worker):
self.log.debug("Error processing SSL request.") self.log.debug("Error processing SSL request.")
self.handle_error(req, client, addr, e) self.handle_error(req, client, addr, e)
except EnvironmentError as e: except EnvironmentError as e:
if e.errno not in (errno.EPIPE, errno.ECONNRESET): if e.errno not in (errno.EPIPE, errno.ECONNRESET, errno.ENOTCONN):
self.log.exception("Socket error processing request.") self.log.exception("Socket error processing request.")
else: else:
if e.errno == errno.ECONNRESET: if e.errno == errno.ECONNRESET:
self.log.debug("Ignoring connection reset") self.log.debug("Ignoring connection reset")
elif e.errno == errno.ENOTCONN:
self.log.debug("Ignoring socket not connected")
else: else:
self.log.debug("Ignoring EPIPE") self.log.debug("Ignoring EPIPE")
except Exception as e: except Exception as e:
@ -163,7 +167,7 @@ class SyncWorker(base.Worker):
self.cfg.pre_request(self, req) self.cfg.pre_request(self, req)
request_start = datetime.now() request_start = datetime.now()
resp, environ = wsgi.create(req, client, addr, resp, environ = wsgi.create(req, client, addr,
listener.getsockname(), self.cfg) listener.getsockname(), self.cfg)
# Force the connection closed until someone shows # Force the connection closed until someone shows
# a buffering proxy that supports Keep-Alive to # a buffering proxy that supports Keep-Alive to
# the backend. # the backend.
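The errno triage in the handler above (new in this diff: `ENOTCONN` is ignored alongside `EPIPE` and `ECONNRESET`) can be sketched as a small classifier. The function name and return strings are illustrative; gunicorn logs these at debug level rather than returning them:

```python
import errno

def classify_socket_error(err):
    # expected client disconnects are logged quietly at debug level;
    # anything else is a real error worth a full traceback
    if err.errno == errno.ECONNRESET:
        return "Ignoring connection reset"
    if err.errno == errno.ENOTCONN:
        return "Ignoring socket not connected"
    if err.errno == errno.EPIPE:
        return "Ignoring EPIPE"
    return "Socket error processing request."
```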
@ -21,17 +21,21 @@ class WorkerTmp(object):
if fdir and not os.path.isdir(fdir): if fdir and not os.path.isdir(fdir):
raise RuntimeError("%s doesn't exist. Can't create workertmp." % fdir) raise RuntimeError("%s doesn't exist. Can't create workertmp." % fdir)
fd, name = tempfile.mkstemp(prefix="wgunicorn-", dir=fdir) fd, name = tempfile.mkstemp(prefix="wgunicorn-", dir=fdir)
# allows the process to write to the file
util.chown(name, cfg.uid, cfg.gid)
os.umask(old_umask) os.umask(old_umask)
# unlink the file so we don't leak tempory files # change the owner and group of the file if the worker will run as
# a different user or group, so that the worker can modify the file
if cfg.uid != os.geteuid() or cfg.gid != os.getegid():
util.chown(name, cfg.uid, cfg.gid)
# unlink the file so we don't leak temporary files
try: try:
if not IS_CYGWIN: if not IS_CYGWIN:
util.unlink(name) util.unlink(name)
self._tmp = os.fdopen(fd, 'w+b', 1)
# In Python 3.8, open() emits RuntimeWarning if buffering=1 for binary mode.
# Because we never write to this file, pass 0 to switch buffering off.
self._tmp = os.fdopen(fd, 'w+b', 0)
except: except Exception:
os.close(fd) os.close(fd)
raise raise
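The buffering change above can be demonstrated directly: on Python 3.8+, `open()` (and hence `os.fdopen()`) emits a `RuntimeWarning` when asked for line buffering on a binary-mode file, while `buffering=0` (unbuffered) is silent. A small sketch mimicking the mkstemp-then-unlink pattern (helper name is illustrative):

```python
import os
import tempfile
import warnings

def open_worker_tmp(buffering):
    # mimic WorkerTmp: mkstemp, unlink immediately, keep the fd open
    fd, name = tempfile.mkstemp(prefix="wgunicorn-demo-")
    os.unlink(name)
    return os.fdopen(fd, 'w+b', buffering)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    open_worker_tmp(1).close()  # buffering=1 in binary mode: RuntimeWarning on 3.8+
line_buffered_warns = any(issubclass(w.category, RuntimeWarning) for w in caught)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    open_worker_tmp(0).close()  # buffering=0: unbuffered, no warning
unbuffered_warns = any(issubclass(w.category, RuntimeWarning) for w in caught)
```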
@ -1,4 +1,6 @@
aiohttp aiohttp
coverage>=4.0,<4.4 # TODO: https://github.com/benoitc/gunicorn/issues/1548 gevent
eventlet
coverage
pytest pytest
pytest-cov==2.5.1 pytest-cov
@ -1,16 +0,0 @@
%{__python} setup.py install --skip-build --root=$RPM_BUILD_ROOT
# Build the HTML documentation using the default theme.
%{__python} setup.py build_sphinx
%if ! (0%{?fedora} > 12 || 0%{?rhel} > 5)
%{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}
%{!?python_sitearch: %global python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib(1))")}
%endif
INSTALLED_FILES="\
%{python_sitelib}/*
%{_bindir}/*
%doc LICENSE NOTICE README.rst THANKS build/sphinx/html examples/example_config.py
"
echo "$INSTALLED_FILES" > INSTALLED_FILES
@ -1,16 +1,7 @@
[bdist_rpm]
build-requires = python2-devel python-setuptools python-sphinx
requires = python-setuptools >= 0.6c6 python-ctypes
install_script = rpm/install
group = System Environment/Daemons
[tool:pytest] [tool:pytest]
norecursedirs = examples lib local src norecursedirs = examples lib local src
testpaths = tests/ testpaths = tests/
addopts = --assert=plain --cov=gunicorn --cov-report=xml addopts = --assert=plain --cov=gunicorn --cov-report=xml
[wheel]
universal = 1
[metadata] [metadata]
license_file = LICENSE license_file = LICENSE
@ -13,7 +13,7 @@ from gunicorn import __version__
CLASSIFIERS = [ CLASSIFIERS = [
'Development Status :: 4 - Beta', 'Development Status :: 5 - Production/Stable',
'Environment :: Other Environment', 'Environment :: Other Environment',
'Intended Audience :: Developers', 'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License', 'License :: OSI Approved :: MIT License',
@ -21,11 +21,16 @@ CLASSIFIERS = [
'Operating System :: POSIX', 'Operating System :: POSIX',
'Programming Language :: Python', 'Programming Language :: Python',
'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: 3.11',
'Programming Language :: Python :: 3 :: Only', 'Programming Language :: Python :: 3 :: Only',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy',
'Topic :: Internet', 'Topic :: Internet',
'Topic :: Utilities', 'Topic :: Utilities',
'Topic :: Software Development :: Libraries :: Python Modules', 'Topic :: Software Development :: Libraries :: Python Modules',
@ -65,11 +70,20 @@ class PyTestCommand(TestCommand):
sys.exit(errno) sys.exit(errno)
extra_require = {
    'gevent': ['gevent>=0.13'],
    'eventlet': ['eventlet>=0.9.7'],
    'tornado': ['tornado>=0.2'],
    'gthread': [],
}

install_requires = [
    # We depend on functioning pkg_resources.working_set.add_entry() and
    # pkg_resources.load_entry_point(). These both work as of 3.0 which
    # is the first version to support Python 3.4 which we require as a
    # floor.
    'setuptools>=3.0',
]

extras_require = {
    'gevent': ['gevent>=1.4.0'],
    'eventlet': ['eventlet>=0.24.1'],
    'tornado': ['tornado>=0.2'],
    'gthread': [],
    'setproctitle': ['setproctitle'],
}
setup( setup(
@ -79,11 +93,18 @@ setup(
description='WSGI HTTP Server for UNIX', description='WSGI HTTP Server for UNIX',
long_description=long_description, long_description=long_description,
author='Benoit Chesneau', author='Benoit Chesneau',
author_email='benoitc@e-engura.com', author_email='benoitc@gunicorn.org',
license='MIT', license='MIT',
url='http://gunicorn.org', url='https://gunicorn.org',
project_urls={
'Documentation': 'https://docs.gunicorn.org',
'Homepage': 'https://gunicorn.org',
'Issue tracker': 'https://github.com/benoitc/gunicorn/issues',
'Source code': 'https://github.com/benoitc/gunicorn',
},
python_requires='>=3.4', python_requires='>=3.5',
install_requires=install_requires,
classifiers=CLASSIFIERS, classifiers=CLASSIFIERS,
zip_safe=False, zip_safe=False,
packages=find_packages(exclude=['examples', 'tests']), packages=find_packages(exclude=['examples', 'tests']),
@ -95,10 +116,9 @@ setup(
entry_points=""" entry_points="""
[console_scripts] [console_scripts]
gunicorn=gunicorn.app.wsgiapp:run gunicorn=gunicorn.app.wsgiapp:run
gunicorn_paster=gunicorn.app.pasterapp:run
[paste.server_runner] [paste.server_runner]
main=gunicorn.app.pasterapp:paste_server main=gunicorn.app.pasterapp:serve
""", """,
extras_require=extra_require, extras_require=extras_require,
) )
@ -0,0 +1 @@
wsgi_app = "app1:app1"
@ -0,0 +1,4 @@
GET /stuff/here?foo=bar HTTP/1.1\r\n
Content-Length : 3\r\n
\r\n
xyz
@ -0,0 +1,5 @@
from gunicorn.config import Config
from gunicorn.http.errors import InvalidHeaderName
cfg = Config()
request = InvalidHeaderName
@ -0,0 +1,5 @@
GET /stuff/here?foo=bar HTTP/1.1\r\n
Content-Length: 3\r\n
Content-Length: 2\r\n
\r\n
xyz
@ -0,0 +1,5 @@
from gunicorn.config import Config
from gunicorn.http.errors import InvalidHeader
cfg = Config()
request = InvalidHeader
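The fixture above expects `InvalidHeader` for a request carrying two differing `Content-Length` values. A hedged sketch of the rule being exercised (function name is illustrative, not gunicorn's parser): conflicting values make the body length ambiguous, which is a request-smuggling vector, so the request must be rejected.

```python
def check_content_length(headers):
    # collect every Content-Length value; reject the request when two
    # of them disagree, since the body length is then ambiguous
    values = [v.strip() for name, v in headers if name.upper() == "CONTENT-LENGTH"]
    if len(set(values)) > 1:
        raise ValueError("conflicting Content-Length values: %r" % values)
    return int(values[0]) if values else None
```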
@ -0,0 +1,4 @@
GET /stuff/here?foo=bar HTTP/1.1\r\n
Content-Length : 3\r\n
\r\n
xyz
@ -0,0 +1,14 @@
from gunicorn.config import Config
cfg = Config()
cfg.set("strip_header_spaces", True)
request = {
"method": "GET",
"uri": uri("/stuff/here?foo=bar"),
"version": (1, 1),
"headers": [
("CONTENT-LENGTH", "3"),
],
"body": b"xyz"
}
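This fixture turns on `strip_header_spaces` so that `Content-Length : 3` (with an invalid space before the colon) still parses. A minimal sketch of the behavior, assuming a simplified header grammar (function name is illustrative; gunicorn's parser does more):

```python
def parse_header_line(line, strip_header_spaces=False):
    # a space before the colon makes the header name invalid; with
    # strip_header_spaces enabled the trailing spaces are dropped instead
    name, _, value = line.partition(":")
    if name != name.rstrip():
        if not strip_header_spaces:
            raise ValueError("invalid header name: %r" % name)
        name = name.rstrip()
    return name.upper(), value.strip()
```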
@ -0,0 +1,7 @@
GET /stuff/here?foo=bar HTTP/1.1\r\n
Transfer-Encoding: chunked\r\n
Transfer-Encoding: identity\r\n
\r\n
5\r\n
hello\r\n
000\r\n
@ -0,0 +1,14 @@
from gunicorn.config import Config
cfg = Config()
request = {
"method": "GET",
"uri": uri("/stuff/here?foo=bar"),
"version": (1, 1),
"headers": [
('TRANSFER-ENCODING', 'chunked'),
('TRANSFER-ENCODING', 'identity')
],
"body": b"hello"
}
@ -0,0 +1,7 @@
GET /stuff/here?foo=bar HTTP/1.1\r\n
Transfer-Encoding: identity\r\n
Transfer-Encoding: chunked\r\n
\r\n
5\r\n
hello\r\n
000\r\n
@ -0,0 +1,14 @@
from gunicorn.config import Config
cfg = Config()
request = {
"method": "GET",
"uri": uri("/stuff/here?foo=bar"),
"version": (1, 1),
"headers": [
('TRANSFER-ENCODING', 'identity'),
('TRANSFER-ENCODING', 'chunked')
],
"body": b"hello"
}
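The expected body `b"hello"` comes from decoding the chunked payload in the raw fixture: a hex size line, CRLF, the chunk bytes, CRLF, terminated by a zero-size chunk. A minimal decoder sketch (not gunicorn's parser; no trailer handling):

```python
def decode_chunked(data):
    # each chunk: hex size line, CRLF, chunk bytes, CRLF; size 0 ends the body
    body = b""
    while True:
        line, _, rest = data.partition(b"\r\n")
        size = int(line, 16)
        if size == 0:
            return body
        body += rest[:size]
        data = rest[size + 2:]  # skip chunk data and its trailing CRLF
```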
@ -7,19 +7,32 @@ from wsgiref.validate import validator
HOST = "127.0.0.1" HOST = "127.0.0.1"
@validator
def app(environ, start_response):
    """Simplest possible application object"""
    data = b'Hello, World!\n'
    status = '200 OK'

    response_headers = [
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(data))),
    ]
    start_response(status, response_headers)
    return iter([data])

def create_app(name="World", count=1):
    message = (('Hello, %s!\n' % name) * count).encode("utf8")
    length = str(len(message))

    @validator
    def app(environ, start_response):
        """Simplest possible application object"""
        status = '200 OK'

        response_headers = [
            ('Content-type', 'text/plain'),
            ('Content-Length', length),
        ]
        start_response(status, response_headers)
        return iter([message])

    return app


app = application = create_app()
none_app = None


def error_factory():
    raise TypeError("inner")
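The parameterized `create_app` helper lets tests request a repeated, personalized greeting instead of the fixed `Hello, World!` body. A standalone restatement of the idea, without wsgiref's `@validator` (this copy is illustrative, not the test module itself):

```python
def create_app(name="World", count=1):
    # build the response body once; the WSGI app closes over it
    message = (('Hello, %s!\n' % name) * count).encode("utf8")

    def app(environ, start_response):
        start_response('200 OK', [('Content-type', 'text/plain'),
                                  ('Content-Length', str(len(message)))])
        return iter([message])

    return app

# e.g. a doubled, personalized greeting:
body = b"".join(create_app("Gunicorn", 2)({}, lambda status, headers: None))
```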
def requires_mac_ver(*min_version): def requires_mac_ver(*min_version):
@ -48,18 +61,3 @@ def requires_mac_ver(*min_version):
wrapper.min_version = min_version wrapper.min_version = min_version
return wrapper return wrapper
return decorator return decorator
try:
from types import SimpleNamespace # pylint: disable=unused-import
except ImportError:
class SimpleNamespace(object):
def __init__(self, **kwargs):
vars(self).update(kwargs)
def __repr__(self):
keys = sorted(vars(self))
items = ("{}={!r}".format(k, vars(self)[k]) for k in keys)
return "{}({})".format(type(self).__name__, ", ".join(items))
def __eq__(self, other):
return vars(self) == vars(other)
@ -29,7 +29,7 @@ class request(object):
def __call__(self, func): def __call__(self, func):
def run(): def run():
src = data_source(self.fname) src = data_source(self.fname)
func(src, RequestParser(src, None)) func(src, RequestParser(src, None, None))
run.func_name = func.func_name run.func_name = func.func_name
return run return run