Merge branch 'master' into master

This commit is contained in:
Randall Leeds 2023-12-27 16:16:21 -08:00 committed by GitHub
commit fd809184c3
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
201 changed files with 3937 additions and 4618 deletions

.github/dependabot.yml (new file, 6 lines)

@@ -0,0 +1,6 @@
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "monthly"

.github/workflows/lint.yml (new file, 24 lines)

@@ -0,0 +1,24 @@
name: lint
on: [push, pull_request]
permissions:
  contents: read  # to fetch code (actions/checkout)
jobs:
  lint:
    name: tox-${{ matrix.toxenv }}
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        toxenv: [lint, docs-lint, pycodestyle]
        python-version: [ "3.10" ]
    steps:
      - uses: actions/checkout@v4
      - name: Using Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install tox
      - run: tox -e ${{ matrix.toxenv }}

.github/workflows/tox.yml (new file, 24 lines)

@@ -0,0 +1,24 @@
name: tox
on: [push, pull_request]
permissions:
  contents: read  # to fetch code (actions/checkout)
jobs:
  tox:
    name: ${{ matrix.os }} / ${{ matrix.python-version }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-latest]  # All OSes pass except Windows because tests need Unix-only fcntl, grp, pwd, etc.
        python-version: [ "3.7", "3.8", "3.9", "3.10", "3.11", "pypy-3.8" ]
    steps:
      - uses: actions/checkout@v4
      - name: Using Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install tox
      - run: tox -e py
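The matrix comment above is the whole reason Windows is absent: parts of the test suite import stdlib modules that CPython only ships on Unix. A minimal sketch of that platform check (module list taken from the workflow comment; this script is not part of the repository):

```python
import importlib.util

# Unix-only stdlib modules named in the workflow comment above.
UNIX_ONLY_MODULES = ("fcntl", "grp", "pwd")

def missing_unix_modules():
    """Return the Unix-only modules that are unavailable on this platform."""
    return [name for name in UNIX_ONLY_MODULES
            if importlib.util.find_spec(name) is None]

if __name__ == "__main__":
    missing = missing_unix_modules()
    if missing:
        print("Full test suite cannot run here; missing:", missing)
    else:
        print("All Unix-only modules are available.")
```

On Linux and macOS the list comes back empty; on Windows all three modules are reported missing, which is why the matrix stops at macos-latest.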


@@ -12,7 +12,6 @@ ignore=
 disable=
     attribute-defined-outside-init,
-    bad-continuation,
     bad-mcs-classmethod-argument,
     bare-except,
     broad-except,
@@ -21,18 +20,18 @@ disable=
     eval-used,
     fixme,
     import-error,
+    import-outside-toplevel,
     import-self,
     inconsistent-return-statements,
     invalid-name,
-    misplaced-comparison-constant,
     missing-docstring,
     no-else-return,
     no-member,
     no-self-argument,
-    no-self-use,
     no-staticmethod-decorator,
     not-callable,
     protected-access,
+    raise-missing-from,
     redefined-outer-name,
     too-few-public-methods,
     too-many-arguments,
@@ -51,3 +50,6 @@ disable=
     useless-import-alias,
     comparison-with-callable,
     try-except-raise,
+    consider-using-with,
+    consider-using-f-string,
+    unspecified-encoding

.readthedocs.yaml (new file, 22 lines)

@@ -0,0 +1,22 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the version of Python and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/source/conf.py

# We recommend specifying your dependencies to enable reproducible builds:
# https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
# python:
#   install:
#     - requirements: docs/requirements.txt


@@ -1,35 +0,0 @@
-sudo: false
-language: python
-matrix:
-  include:
-    - python: 3.7
-      env: TOXENV=lint
-      dist: xenial
-      sudo: true
-    - python: 3.4
-      env: TOXENV=py34
-    - python: 3.5
-      env: TOXENV=py35
-    - python: 3.6
-      env: TOXENV=py36
-    - python: 3.7
-      env: TOXENV=py37
-      dist: xenial
-      sudo: true
-    - python: 3.8-dev
-      env: TOXENV=py38-dev
-      dist: xenial
-      sudo: true
-    - python: 3.7
-      env: TOXENV=docs-lint
-      dist: xenial
-      sudo: true
-  allow_failures:
-    - env: TOXENV=py38-dev
-install: pip install tox
-# TODO: https://github.com/tox-dev/tox/issues/149
-script: tox --recreate
-cache:
-  directories:
-    - .tox
-    - $HOME/.cache/pip


@@ -141,7 +141,7 @@ The relevant maintainer for a pull request is assigned in 3 steps:
 * Step 2: Find the MAINTAINERS file which affects this directory. If the directory itself does not have a MAINTAINERS file, work your way up the the repo hierarchy until you find one.
-* Step 3: The first maintainer listed is the primary maintainer. The pull request is assigned to him. He may assign it to other listed maintainers, at his discretion.
+* Step 3: The first maintainer listed is the primary maintainer who is assigned the Pull Request. The primary maintainer can reassign a Pull Request to other listed maintainers.
 ### I'm a maintainer, should I make pull requests too?


@@ -1,4 +1,4 @@
-2009-2018 (c) Benoît Chesneau <benoitc@e-engura.org>
+2009-2023 (c) Benoît Chesneau <benoitc@gunicorn.org>
 2009-2015 (c) Paul J. Davis <paul.joseph.davis@gmail.com>
 Permission is hereby granted, free of charge, to any person


@@ -1,9 +1,23 @@
+Core maintainers
+================
+
 Benoit Chesneau <benoitc@gunicorn.org>
-Paul J. Davis <paul.joseph.davis@gmail.com>
-Randall Leeds <randall.leeds@gmail.com>
 Konstantin Kapustin <sirkonst@gmail.com>
+Randall Leeds <randall.leeds@gmail.com>
+Berker Peksağ <berker.peksag@gmail.com>
+Jason Madden <jason@nextthought.com>
+Brett Randall <javabrett@gmail.com>
+
+Alumni
+======
+
+This list contains maintainers that are no longer active on the project.
+It is thanks to these people that the project has become what it is today.
+Thank you!
+
+Paul J. Davis <paul.joseph.davis@gmail.com>
 Kenneth Reitz <me@kennethreitz.com>
 Nikolay Kim <fafhrd91@gmail.com>
 Andrew Svetlov <andrew.svetlov@gmail.com>
 Stéphane Wirtel <stephane@wirtel.be>
-Berker Peksağ <berker.peksag@gmail.com>

NOTICE (41 lines changed)

@@ -1,6 +1,6 @@
 Gunicorn
-2009-2018 (c) Benoît Chesneau <benoitc@e-engura.org>
+2009-2023 (c) Benoît Chesneau <benoitc@gunicorn.org>
 2009-2015 (c) Paul J. Davis <paul.joseph.davis@gmail.com>
 Gunicorn is released under the MIT license. See the LICENSE
@@ -19,7 +19,7 @@ not be used in advertising or publicity pertaining to distribution
 of the software without specific, written prior permission.
 VINAY SAJIP DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
-INCLUDINGALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
 VINAY SAJIP BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR
 ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER
 IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
@@ -82,43 +82,8 @@ WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 OTHER DEALINGS IN THE SOFTWARE.
-doc/sitemap_gen.py
-------------------
-Under BSD License :
-Copyright (c) 2004, 2005, Google Inc.
-All rights reserved.
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions
-are met:
-* Redistributions of source code must retain the above copyright
-  notice, this list of conditions and the following disclaimer.
-* Redistributions in binary form must reproduce the above
-  copyright notice, this list of conditions and the following
-  disclaimer in the documentation and/or other materials provided
-  with the distribution.
-* Neither the name of Google Inc. nor the names of its contributors
-  may be used to endorse or promote products derived from this
-  software without specific prior written permission.
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 util/unlink.py
 --------------
-backport frop python3 Lib/test/support.py
+backport from python3 Lib/test/support.py


@@ -9,26 +9,30 @@ Gunicorn
   :alt: Supported Python versions
   :target: https://pypi.python.org/pypi/gunicorn
-.. image:: https://travis-ci.org/benoitc/gunicorn.svg?branch=master
-  :alt: Build Status
-  :target: https://travis-ci.org/benoitc/gunicorn
+.. image:: https://github.com/benoitc/gunicorn/actions/workflows/tox.yml/badge.svg
+  :alt: Build Status
+  :target: https://github.com/benoitc/gunicorn/actions/workflows/tox.yml
+.. image:: https://github.com/benoitc/gunicorn/actions/workflows/lint.yml/badge.svg
+  :alt: Lint Status
+  :target: https://github.com/benoitc/gunicorn/actions/workflows/lint.yml
 Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. It's a pre-fork
 worker model ported from Ruby's Unicorn_ project. The Gunicorn server is broadly
 compatible with various web frameworks, simply implemented, light on server
 resource usage, and fairly speedy.
-Feel free to join us in `#gunicorn`_ on Freenode_.
+Feel free to join us in `#gunicorn`_ on `Libera.chat`_.
 Documentation
 -------------
-The documentation is hosted at http://docs.gunicorn.org.
+The documentation is hosted at https://docs.gunicorn.org.
 Installation
 ------------
-Gunicorn requires **Python 3.x >= 3.4**.
+Gunicorn requires **Python 3.x >= 3.5**.
 Install from PyPI::
@@ -52,6 +56,12 @@ Example with test app::
     $ gunicorn --workers=2 test:app
+Contributing
+------------
+See `our complete contributor's guide <CONTRIBUTING.md>`_ for more details.
 License
 -------
@@ -59,6 +69,6 @@ Gunicorn is released under the MIT License. See the LICENSE_ file for more
 details.
 .. _Unicorn: https://bogomips.org/unicorn/
-.. _`#gunicorn`: https://webchat.freenode.net/?channels=gunicorn
-.. _Freenode: https://freenode.net/
+.. _`#gunicorn`: https://web.libera.chat/?channels=#gunicorn
+.. _`Libera.chat`: https://libera.chat/
 .. _LICENSE: https://github.com/benoitc/gunicorn/blob/master/LICENSE

SECURITY.md (new file, 22 lines)

@@ -0,0 +1,22 @@
# Security Policy

## Reporting a Vulnerability

**Please note that public Github issues are open for everyone to see!**

If you believe you have found a problem in Gunicorn software, examples or documentation, we encourage you to send your report privately via [email](mailto:security@gunicorn.org?subject=Security%20issue%20in%20Gunicorn), or via Github using the *Report a vulnerability* button in the [Security](https://github.com/benoitc/gunicorn/security) section.

## Supported Releases

At this time, **only the latest release** receives any security attention whatsoever.

| Version | Status |
| ------- | ------------------ |
| latest release | :white_check_mark: |
| 21.2.0 | :x: |
| 20.0.0 | :x: |
| < 20.0 | :x: |

## Python Versions

Gunicorn runs on Python 3.7+. We *highly recommend* the latest release of a [supported series](https://devguide.python.org/versions/) and will not prioritize issues exclusively affecting EoL environments.

THANKS (11 lines changed)

@@ -22,10 +22,12 @@ Andrew Svetlov <andrew.svetlov@gmail.com>
 Anil V <avaitla16@gmail.com>
 Antoine Girard <antoine.girard.dev@gmail.com>
 Anton Vlasenko <antares.spica@gmail.com>
+Artur Kruchinin <arturkruchinin@gmail.com>
 Bartosz Oler <bartosz@bzimage.us>
 Ben Cochran <bcochran@gmail.com>
 Ben Oswald <ben.oswald@root-space.de>
 Benjamin Gilbert <bgilbert@backtick.net>
+Benny Mei <meibenny@gmail.com>
 Benoit Chesneau <bchesneau@gmail.com>
 Berker Peksag <berker.peksag@gmail.com>
 bninja <andrew@poundpay.com>
@@ -39,6 +41,7 @@ Chris Adams <chris@improbable.org>
 Chris Forbes <chrisf@ijw.co.nz>
 Chris Lamb <lamby@debian.org>
 Chris Streeter <chris@chrisstreeter.com>
+Christian Clauss <cclauss@me.com>
 Christoph Heer <Christoph.Heer@gmail.com>
 Christos Stavrakakis <cstavr@grnet.gr>
 CMGS <ilskdw@mspil.edu.cn>
@@ -47,6 +50,7 @@ Dan Callaghan <dcallagh@redhat.com>
 Dan Sully <daniel-github@electricrain.com>
 Daniel Quinn <code@danielquinn.org>
 Dariusz Suchojad <dsuch-github@m.zato.io>
+David Black <github@dhb.is>
 David Vincelli <david@freshbooks.com>
 David Wolever <david@wolever.net>
 Denis Bilenko <denis.bilenko@gmail.com>
@@ -102,12 +106,14 @@ Konstantin Kapustin <sirkonst@gmail.com>
 kracekumar <kracethekingmaker@gmail.com>
 Kristian Glass <git@doismellburning.co.uk>
 Kristian Øllegaard <kristian.ollegaard@divio.ch>
+Krystian <chrisjozwik@outlook.com>
 Krzysztof Urbaniak <urban@fail.pl>
 Kyle Kelley <rgbkrk@gmail.com>
 Kyle Mulka <repalviglator@yahoo.com>
 Lars Hansson <romabysen@gmail.com>
 Leonardo Santagada <santagada@gmail.com>
 Levi Gross <levi@levigross.com>
+licunlong <shenxiaogll@163.com>
 Łukasz Kucharski <lkucharski@leon.pl>
 Mahmoud Hashemi <mahmoudrhashemi@gmail.com>
 Malthe Borch <mborch@gmail.com>
@@ -152,6 +158,7 @@ Rik <rvachterberg@gmail.com>
 Ronan Amicel <ronan.amicel@gmail.com>
 Ryan Peck <ryan@rypeck.com>
 Saeed Gharedaghi <saeed.ghx68@gmail.com>
+Samuel Matos <samypr100@users.noreply.github.com>
 Sergey Rublev <narma.nsk@gmail.com>
 Shane Reustle <me@shanereustle.com>
 shouse-cars <shouse@cars.com>
@@ -162,7 +169,10 @@ Stephen DiCato <Locker537@gmail.com>
 Stephen Holsapple <sholsapp@gmail.com>
 Steven Cummings <estebistec@gmail.com>
 Sébastien Fievet <zyegfryed@gmail.com>
+Tal Einat <532281+taleinat@users.noreply.github.com>
+Talha Malik <talham7391@hotmail.com>
 TedWantsMore <TedWantsMore@gmx.com>
+Teko012 <112829523+Teko012@users.noreply.github.com>
 Thomas Grainger <tagrain@gmail.com>
 Thomas Steinacher <tom@eggdrop.ch>
 Travis Cline <travis.cline@gmail.com>
@@ -177,3 +187,4 @@ WooParadog <guohaochuan@gmail.com>
 Xie Shi <xieshi@douban.com>
 Yue Du <ifduyue@gmail.com>
 zakdances <zakdances@gmail.com>
+Emile Fugulin <emilefugulin@hotmail.com>


@@ -2,23 +2,37 @@ version: '{branch}.{build}'
 environment:
   matrix:
     - TOXENV: lint
-      PYTHON: "C:\\Python37-x64"
-    - TOXENV: py35
-      PYTHON: "C:\\Python35-x64"
-    - TOXENV: py36
-      PYTHON: "C:\\Python36-x64"
-    - TOXENV: py37
-      PYTHON: "C:\\Python37-x64"
+      PYTHON: "C:\\Python38-x64"
+    - TOXENV: docs-lint
+      PYTHON: "C:\\Python38-x64"
+    - TOXENV: pycodestyle
+      PYTHON: "C:\\Python38-x64"
+    # Windows is not ready for testing!!!
+    # Python's fcntl, grp, pwd, os.geteuid(), and socket.AF_UNIX are all Unix-only.
+    #- TOXENV: py35
+    #  PYTHON: "C:\\Python35-x64"
+    #- TOXENV: py36
+    #  PYTHON: "C:\\Python36-x64"
+    #- TOXENV: py37
+    #  PYTHON: "C:\\Python37-x64"
+    #- TOXENV: py38
+    #  PYTHON: "C:\\Python38-x64"
+    #- TOXENV: py39
+    #  PYTHON: "C:\\Python39-x64"
 matrix:
   allow_failures:
     - TOXENV: py35
     - TOXENV: py36
     - TOXENV: py37
+    - TOXENV: py38
+    - TOXENV: py39
-init: SET "PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%"
+init:
+  - SET "PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%"
 install:
   - pip install tox
-build: off
+build: false
-test_script: tox
+test_script:
+  - tox
 cache:
   # Not including the .tox directory since it takes longer to download/extract
   # the cache archive than for tox to clean install from the pip cache.


@@ -48,29 +48,33 @@ def format_settings(app):
 def fmt_setting(s):
-    if callable(s.default):
+    if hasattr(s, "default_doc"):
+        val = s.default_doc
+    elif callable(s.default):
         val = inspect.getsource(s.default)
-        val = "\n".join("    %s" % l for l in val.splitlines())
-        val = " ::\n\n" + val
+        val = "\n".join("    %s" % line for line in val.splitlines())
+        val = "\n\n.. code-block:: python\n\n" + val
     elif s.default == '':
-        val = "``(empty string)``"
+        val = "``''``"
     else:
-        val = "``%s``" % s.default
+        val = "``%r``" % s.default
     if s.cli and s.meta:
-        args = ["%s %s" % (arg, s.meta) for arg in s.cli]
-        cli = ', '.join(args)
+        cli = " or ".join("``%s %s``" % (arg, s.meta) for arg in s.cli)
     elif s.cli:
-        cli = ", ".join(s.cli)
+        cli = " or ".join("``%s``" % arg for arg in s.cli)
+    else:
+        cli = ""
     out = []
     out.append(".. _%s:\n" % s.name.replace("_", "-"))
-    out.append("%s" % s.name)
-    out.append("~" * len(s.name))
+    out.append("``%s``" % s.name)
+    out.append("~" * (len(s.name) + 4))
     out.append("")
     if s.cli:
-        out.append("* ``%s``" % cli)
-        out.append("* %s" % val)
+        out.append("**Command line:** %s" % cli)
+        out.append("")
+    out.append("**Default:** %s" % val)
     out.append("")
     out.append(s.desc)
     out.append("")
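The hunk above changes how the docs generator renders each setting. Below is a self-contained sketch of the new rendering logic, using a hypothetical stand-in ``Setting`` class (the real objects come from gunicorn's config module):

```python
import inspect

class Setting:
    """Minimal stand-in for gunicorn's Setting objects (assumed shape)."""
    def __init__(self, name, default, desc, cli=(), meta=None):
        self.name, self.default, self.desc = name, default, desc
        self.cli, self.meta = list(cli), meta

def fmt_setting(s):
    """Render one setting as reStructuredText, mirroring the new formatter."""
    if hasattr(s, "default_doc"):
        val = s.default_doc                 # an explicit doc string wins
    elif callable(s.default):
        src = inspect.getsource(s.default)  # show callable defaults as code
        src = "\n".join("    %s" % line for line in src.splitlines())
        val = "\n\n.. code-block:: python\n\n" + src
    elif s.default == '':
        val = "``''``"                      # make the empty default visible
    else:
        val = "``%r``" % s.default          # repr() disambiguates strings
    if s.cli and s.meta:
        cli = " or ".join("``%s %s``" % (arg, s.meta) for arg in s.cli)
    elif s.cli:
        cli = " or ".join("``%s``" % arg for arg in s.cli)
    else:
        cli = ""
    out = [".. _%s:\n" % s.name.replace("_", "-"),
           "``%s``" % s.name,
           "~" * (len(s.name) + 4),         # underline covers the backticks too
           ""]
    if s.cli:
        out.append("**Command line:** %s" % cli)
        out.append("")
    out.append("**Default:** %s" % val)
    out.append("")
    out.append(s.desc)
    return "\n".join(out)

print(fmt_setting(Setting("bind", "127.0.0.1:8000", "The socket to bind.",
                          cli=["-b", "--bind"], meta="ADDRESS")))
```

The net effect of the change: defaults are shown with ``repr()`` instead of ``str()``, CLI flags are rendered individually in backticks, and settings can override the displayed default via a ``default_doc`` attribute.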


@@ -16,7 +16,7 @@
 <div class="logo-div">
 <div class="latest">
 Latest version: <strong><a
-href="https://docs.gunicorn.org/en/stable/">19.9.0</a></strong>
+href="https://docs.gunicorn.org/en/stable/">21.2.0</a></strong>
 </div>
 <div class="logo"><img src="images/logo.jpg" ></div>
@@ -118,11 +118,11 @@
 <li><a href="https://github.com/benoitc/gunicorn/projects/4">Forum</a></li>
 <li><a href="https://github.com/benoitc/gunicorn/projects/3">Mailing list</a>
 </ul>
-<p>Project maintenance guidelines are avaible on the <a href="https://github.com/benoitc/gunicorn/wiki/Project-management">wiki</a></p>
-<h1>Irc</h1>
-<p>The Gunicorn channel is on the <a href="http://freenode.net/">Freenode</a> IRC
-network. You can chat with the community on the <a href="http://webchat.freenode.net/?channels=gunicorn">#gunicorn channel</a>.</p>
+<p>Project maintenance guidelines are available on the <a href="https://github.com/benoitc/gunicorn/wiki/Project-management">wiki</a></p>
+<h1>IRC</h1>
+<p>The Gunicorn channel is on the <a href="https://libera.chat/">Libera Chat</a> IRC
+network. You can chat with the community on the <a href="https://web.libera.chat/?channels=#gunicorn">#gunicorn channel</a>.</p>
 <h1>Issue Tracking</h1>
 <p>Bug reports, enhancement requests and tasks generally go in the <a href="http://github.com/benoitc/gunicorn/issues">Github


@@ -1,112 +1,73 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<urlset
-    xmlns="http://www.google.com/schemas/sitemap/0.84"
-    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-    xsi:schemaLocation="http://www.google.com/schemas/sitemap/0.84
-    http://www.google.com/schemas/sitemap/0.84/sitemap.xsd">
+<?xml version='1.0' encoding='UTF-8'?>
+<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
 <url>
 <loc>http://gunicorn.org/</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2019-11-27T00:02:48+01:00</lastmod>
+<priority>1.0</priority>
 </url>
+<url>
+<loc>http://gunicorn.org/community.html</loc>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
+</url>
 <url>
 <loc>http://gunicorn.org/configuration.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
 </url>
 <url>
 <loc>http://gunicorn.org/configure.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
 </url>
-<url>
-<loc>http://gunicorn.org/css/</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
-<url>
-<loc>http://gunicorn.org/css/index.css</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
-<url>
-<loc>http://gunicorn.org/css/style.css</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
 <url>
 <loc>http://gunicorn.org/deploy.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
 </url>
 <url>
 <loc>http://gunicorn.org/deployment.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
 </url>
 <url>
 <loc>http://gunicorn.org/design.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
 </url>
 <url>
 <loc>http://gunicorn.org/faq.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
 </url>
-<url>
-<loc>http://gunicorn.org/images/</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
-<url>
-<loc>http://gunicorn.org/images/gunicorn.png</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
-<url>
-<loc>http://gunicorn.org/images/large_gunicorn.png</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
-<url>
-<loc>http://gunicorn.org/images/logo.png</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
-<url>
-<loc>http://gunicorn.org/index.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
-</url>
 <url>
 <loc>http://gunicorn.org/install.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
 </url>
 <url>
 <loc>http://gunicorn.org/installation.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
 </url>
 <url>
 <loc>http://gunicorn.org/news.html</loc>
-<lastmod>2010-07-08T19:57:19Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
 </url>
 <url>
 <loc>http://gunicorn.org/run.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
 </url>
 <url>
 <loc>http://gunicorn.org/tuning.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
 </url>
 <url>
 <loc>http://gunicorn.org/usage.html</loc>
-<lastmod>2010-07-01T05:14:22Z</lastmod>
-<priority>0.5000</priority>
+<lastmod>2012-10-04T00:43:15+05:45</lastmod>
+<priority>0.5</priority>
 </url>
 </urlset>


@@ -1,19 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<site
-  base_url="http://gunicorn.org"
-  store_into="htdocs/sitemap.xml"
-  verbose="1"
-  >
-
-  <directory path="htdocs/" url="http://gunicorn.org/" />
-
-  <!-- Exclude URLs that end with a '~' (IE: emacs backup files) -->
-  <filter action="drop" type="wildcard" pattern="*~" />
-
-  <!-- Exclude URLs within UNIX-style hidden files or directories -->
-  <filter action="drop" type="regexp" pattern="/\.[^/]*" />
-
-  <!-- Exclude github CNAME file -->
-  <filter action="drop" type="wildcard" pattern="*CNAME" />
-
-</site>

docs/sitemap_gen.py (2221 lines changed; mode changed from executable to normal file; diff suppressed because it is too large)


@@ -75,7 +75,7 @@ Changelog - 2012
 - fix tornado.wsgi.WSGIApplication calling error
 - **breaking change**: take the control on graceful reload back.
-  graceful can't be overrided anymore using the on_reload function.
+  graceful can't be overridden anymore using the on_reload function.
 0.14.3 / 2012-05-15
 -------------------


@@ -38,10 +38,10 @@ Changelog - 2013
 - fix: give the initial global_conf to paster application
 - fix: fix 'Expect: 100-continue' support on Python 3
-New versionning:
+New versioning:
 ++++++++++++++++
-With this release, the versionning of Gunicorn is changing. Gunicorn is
+With this release, the versioning of Gunicorn is changing. Gunicorn is
 stable since a long time and there is no point to release a "1.0" now.
 It should have been done since a long time. 0.17 really meant it was the
 17th stable version. From the beginning we have only 2 kind of
@@ -49,7 +49,7 @@ releases:
 major release: releases with major changes or huge features added
 services releases: fixes and minor features added So from now we will
-apply the following versionning ``<major>.<service>``. For example ``17.5`` is a
+apply the following versioning ``<major>.<service>``. For example ``17.5`` is a
 service release.
 0.17.4 / 2013-04-24


@@ -71,7 +71,7 @@ AioHttp worker
 Async worker
 ++++++++++++
-- fix :issue:`790`: StopIteration shouldn't be catched at this level.
+- fix :issue:`790`: StopIteration shouldn't be caught at this level.
 Logging
@@ -180,7 +180,7 @@ core
 - add: syslog logging can now be done to a unix socket
 - fix logging: don't try to redirect stdout/stderr to the logfile.
 - fix logging: don't propagate log
-- improve logging: file option can be overriden by the gunicorn options
+- improve logging: file option can be overridden by the gunicorn options
   `--error-logfile` and `--access-logfile` if they are given.
 - fix: don't override SERVER_* by the Host header
 - fix: handle_error

docs/source/2018-news.rst (new file, 68 lines)

@@ -0,0 +1,68 @@
================
Changelog - 2018
================
.. note::

    Please see :doc:`news` for the latest changes
19.9.0 / 2018/07/03
===================
- fix: address a regression that prevented syslog support from working
(:issue:`1668`, :pr:`1773`)
- fix: correctly set `REMOTE_ADDR` on versions of Python 3 affected by
`Python Issue 30205 <https://bugs.python.org/issue30205>`_
(:issue:`1755`, :pr:`1796`)
- fix: show zero response length correctly in access log (:pr:`1787`)
- fix: prevent raising :exc:`AttributeError` when ``--reload`` is not passed
in case of a :exc:`SyntaxError` raised from the WSGI application.
(:issue:`1805`, :pr:`1806`)
- The internal module ``gunicorn.workers.async`` was renamed to ``gunicorn.workers.base_async``
since ``async`` is now a reserved word in Python 3.7.
(:pr:`1527`)
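The ``gunicorn.workers.async`` rename was forced by the language itself: ``async`` became a reserved keyword in Python 3.7, so the old module path is no longer even parseable. A quick demonstration (the module need not be installed, since ``compile`` only parses the source):

```python
def parses(source):
    """Return True if the source is syntactically valid Python."""
    try:
        compile(source, "<example>", "exec")
        return True
    except SyntaxError:
        return False

# "async" is a hard keyword in Python 3.7+, so the old import statement
# is rejected by the parser before any import machinery runs.
print(parses("import gunicorn.workers.async"))       # old name: no longer parses
print(parses("import gunicorn.workers.base_async"))  # new name: parses fine
```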
19.8.1 / 2018/04/30
===================
- fix: secure scheme headers when bound to a unix socket
(:issue:`1766`, :pr:`1767`)
19.8.0 / 2018/04/28
===================
- Eventlet 0.21.0 support (:issue:`1584`)
- Tornado 5 support (:issue:`1728`, :pr:`1752`)
- support watching additional files with ``--reload-extra-file``
(:pr:`1527`)
- support configuring logging with a dictionary with ``--logging-config-dict``
(:issue:`1087`, :pr:`1110`, :pr:`1602`)
- add support for the ``--config`` flag in the ``GUNICORN_CMD_ARGS`` environment
variable (:issue:`1576`, :pr:`1581`)
- disable ``SO_REUSEPORT`` by default and add the ``--reuse-port`` setting
(:issue:`1553`, :issue:`1603`, :pr:`1669`)
- fix: installing `inotify` on MacOS no longer breaks the reloader
(:issue:`1540`, :pr:`1541`)
- fix: do not throw ``TypeError`` when ``SO_REUSEPORT`` is not available
(:issue:`1501`, :pr:`1491`)
- fix: properly decode HTTP paths containing certain non-ASCII characters
(:issue:`1577`, :pr:`1578`)
- fix: remove whitespace when logging header values under gevent (:pr:`1607`)
- fix: close unlinked temporary files (:issue:`1327`, :pr:`1428`)
- fix: parse ``--umask=0`` correctly (:issue:`1622`, :pr:`1632`)
- fix: allow loading applications using relative file paths
(:issue:`1349`, :pr:`1481`)
- fix: force blocking mode on the gevent sockets (:issue:`880`, :pr:`1616`)
- fix: preserve leading `/` in request path (:issue:`1512`, :pr:`1511`)
- fix: forbid contradictory secure scheme headers
- fix: handle malformed basic authentication headers in access log
(:issue:`1683`, :pr:`1684`)
- fix: defer handling of ``USR1`` signal to a new greenlet under gevent
(:issue:`1645`, :pr:`1651`)
- fix: the threaded worker would sometimes close the wrong keep-alive
connection under Python 2 (:issue:`1698`, :pr:`1699`)
- fix: re-open log files on ``USR1`` signal using ``handler._open`` to
support subclasses of ``FileHandler`` (:issue:`1739`, :pr:`1742`)
- deprecation: the ``gaiohttp`` worker is deprecated, see the
:ref:`worker-class` documentation for more information
(:issue:`1338`, :pr:`1418`, :pr:`1569`)
docs/source/2019-news.rst Normal file
@ -0,0 +1,121 @@
================
Changelog - 2019
================
.. note::
Please see :doc:`news` for the latest changes
20.0.4 / 2019/11/26
===================
- fix binding a socket using the file descriptor
- remove support for the `bdist_rpm` build
20.0.3 / 2019/11/24
===================
- fixed load of a config file without a Python extension
- fixed `socketfromfd.fromfd` when defaults are not set
.. note:: we now warn when we load a config file without a Python extension
20.0.2 / 2019/11/23
===================
- fix changelog
20.0.1 / 2019/11/23
===================
- fixed the way the config module is loaded. `__file__` is now available
- fixed `wsgi.input_terminated`. It is always true.
- use the highest protocol version of openssl by default
- only support Python >= 3.5
- added `__repr__` method to `Config` instance
- fixed support of AIX platform and musl libc in `socketfromfd.fromfd` function
- fixed support of applications loaded from a factory function
- fixed chunked encoding support to prevent any `request smuggling <https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn>`_
- Capture os.sendfile before patching in gevent and eventlet workers.
fix `RecursionError`.
- removed locking in reloader when adding new files
- load the WSGI application before the loader to pick up all files
.. note:: this release adds official support for applications loaded from a factory function
as documented in Flask and other places.
19.10.0 / 2019/11/23
====================
- unblock select loop during reload of a sync worker
- security fix: http desync attack
- handle `wsgi.input_terminated`
- added support for str and bytes in unix socket addresses
- fixed `max_requests` setting
- header values are now encoded as latin-1, not ASCII
- fixed `InotifyReloader`: handle the case where `module.__file__` is None
- fixed compatibility with tornado 6
- fixed root logging
- Prevent removal of unix sockets from `reuse_port`
- Clear tornado ioloop before os.fork
- Miscellaneous fixes and improvement for linting using Pylint
20.0 / 2019/10/30
=================
- Fixed `fdopen` `RuntimeWarning` in Python 3.8
- Added check and exception for str type on value in Response process_headers method.
- Ensure WSGI header value is string before conducting regex search on it.
- Added pypy3 to list of tested environments
- Grouped `StopIteration` and `KeyboardInterrupt` exceptions with same body together in Arbiter.run()
- Added `setproctitle` module to `extras_require` in setup.py
- Avoid unnecessary chown of temporary files
- Logging: Handle auth type case insensitively
- Removed `util.import_module`
- Removed fallback for `types.SimpleNamespace` in tests utils
- Use `SourceFileLoader` instead of `execfile_`
- Use `importlib` instead of `__import__` and `eval`
- Fixed eventlet patching
- Added optional `datadog <https://www.datadoghq.com>`_ tags for statsd metrics
- Header values now are encoded using latin-1, not ascii.
- Rewritten `parse_address` util and added a test
- Removed redundant super() arguments
- Simplify `futures` import in gthread module
- Fixed the `worker_connections` setting to also affect the Gthread worker type
- Fixed setting max_requests
- Bump minimum Eventlet and Gevent versions to 0.24 and 1.4
- Use Python default SSL cipher list by default
- handle `wsgi.input_terminated` extension
- Simplify Paste Deployment documentation
- Fix root logging: root and logger are at the same level.
- Fixed typo in ssl_version documentation
- Documented systemd deployment unit examples
- Added systemd sd_notify support
- Fixed typo in gthread.py
- Added `tornado <https://www.tornadoweb.org/>`_ 5 and 6 support
- Declare our setuptools dependency
- Added support to `--bind` to open file descriptors
- Document how to serve WSGI app modules from Gunicorn
- Provide guidance on X-Forwarded-For access log in documentation
- Add support for named constants in the `--ssl-version` flag
- Clarify log format usage of header & environment in documentation
- Fixed systemd documentation to properly setup gunicorn unix socket
- Prevent removal of unix socket for `reuse_port`
- Fix `ResourceWarning` when reading a Python config module
- Remove unnecessary call to dict keys method
- Support str and bytes for UNIX socket addresses
- fixed `InotifyReloader`: handle the case where `module.__file__` is None
- `/dev/shm` as a convenient alternative to making your own tmpfs mount in fchmod FAQ
- fix examples to work on python3
- Fix typo in `--max-requests` documentation
- Clear tornado ioloop before os.fork
- Miscellaneous fixes and improvement for linting using Pylint
Breaking Change
+++++++++++++++
- Removed gaiohttp worker
- Drop support for Python 2.x
- Drop support for EOL Python 3.2 and 3.3
- Drop support for Paste Deploy server blocks
@ -0,0 +1,7 @@
================
Changelog - 2020
================
.. note::
Please see :doc:`news` for the latest changes
docs/source/2021-news.rst Normal file
@ -0,0 +1,54 @@
================
Changelog - 2021
================
.. note::
Please see :doc:`news` for the latest changes
20.1.0 - 2021-02-12
===================
- document that `WEB_CONCURRENCY` is set by, at least, Heroku
- capture peername from accept: Avoid calls to getpeername by capturing the peer name returned by
accept
- log a warning when a worker was terminated due to a signal
- fix tornado usage with latest versions of Django
- add support for python -m gunicorn
- fix systemd socket activation example
- allow setting the WSGI application in the config file using `wsgi_app`
- document `--timeout = 0`
- always close a connection when the number of requests exceeds the max requests
- Disable keepalive during graceful shutdown
- kill tasks in the gthread workers during upgrade
- fix latency in gevent worker when accepting new requests
- fix file watcher: handle errors when a new worker reboots and ensure the list of files is kept
- document the default name and path of the configuration file
- document how variables impact configuration
- document the `$PORT` environment variable
- added milliseconds option to request_time in access_log
- added pip requirements to be used for the examples
- remove version from the Server header
- fix sendfile: use `socket.sendfile` instead of `os.sendfile`
- reloader: use absolute paths to prevent `InotifyError` when a file
is added to the working directory
- Add --print-config option to print the resolved settings at startup.
- remove the `--log-dict-config` CLI flag because it never had a working format
(the `logconfig_dict` setting in configuration files continues to work)
** Breaking changes **
- minimum version is Python 3.5
- remove version from the Server header
** Documentation **
** Others **
- miscellaneous changes in the code base to be a better citizen with Python 3
- remove dead code
- fix documentation generation
docs/source/2023-news.rst Normal file
@ -0,0 +1,56 @@
================
Changelog - 2023
================
22.0.0 - TBD
==================
- fix numerous security vulnerabilities in HTTP parser (closing some request smuggling vectors)
- parsing additional requests is no longer attempted past unsupported request framing
- on HTTP versions < 1.1 support for chunked transfer is refused (only used in exploits)
- requests conflicting with the configured or passed SCRIPT_NAME now produce a verbose error
- Trailer fields are no longer inspected for headers indicating secure scheme
** Breaking changes **
- the limitations on valid characters in the HTTP method have been bounded to Internet Standards
- requests specifying unsupported transfer coding (order) are refused by default (rare)
- HTTP methods are no longer casefolded by default (IANA method registry contains none affected)
- HTTP methods containing the number sign (#) are no longer accepted by default (rare)
- HTTP versions < 1.0 or >= 2.0 are no longer accepted by default (rare, only HTTP/1.1 is supported)
- HTTP versions consisting of multiple digits or containing a prefix/suffix are no longer accepted
- HTTP header field names Gunicorn cannot safely map to variables are silently dropped, as in other software
- HTTP headers with empty field name are refused by default (no legitimate use cases, used in exploits)
- requests with both Transfer-Encoding and Content-Length are refused by default (such a message might indicate an attempt to perform request smuggling)
- empty transfer codings are no longer permitted (reportedly seen with really old & broken proxies)
21.2.0 - 2023-07-19
===================
- fix thread worker: revert the change considering a connection as idle.
*** NOTE ***
This fixes the bad file descriptor error.
21.0.1 - 2023-07-17
===================
- fix documentation build
21.0.0 - 2023-07-17
===================
- support python 3.11
- fix gevent and eventlet workers
- fix threads support (gthread): improve performance and unblock requests
- SSL: now use an SSLContext object
- HTTP parser: miscellaneous fixes
- remove unnecessary setuid calls
- fix testing
- improve logging
- miscellaneous fixes to core engine
*** RELEASE NOTE ***
We made this release major to start our new release cycle. More info will be provided on our discussion forum.
@ -15,7 +15,7 @@ for 3 different purposes:
* `Mailing list <https://github.com/benoitc/gunicorn/projects/3>`_ : Discussion of Gunicorn development, new features * `Mailing list <https://github.com/benoitc/gunicorn/projects/3>`_ : Discussion of Gunicorn development, new features
and project management. and project management.
Project maintenance guidelines are avaible on the `wiki <https://github.com/benoitc/gunicorn/wiki/Project-management>`_ Project maintenance guidelines are available on the `wiki <https://github.com/benoitc/gunicorn/wiki/Project-management>`_
. .
IRC IRC
@ -4,28 +4,46 @@
Configuration Overview Configuration Overview
====================== ======================
Gunicorn pulls configuration information from three distinct places. Gunicorn reads configuration information from five places.
The first place that Gunicorn will read configuration from is the framework Gunicorn first reads environment variables for some configuration
specific configuration file. Currently this only affects Paster applications. :ref:`settings <settings>`.
The second source of configuration information is a configuration file that is Gunicorn then reads configuration from a framework specific configuration
optionally specified on the command line. Anything specified in the Gunicorn file. Currently this only affects Paster applications.
config file will override any framework specific settings.
The third source of configuration information is an optional configuration file
``gunicorn.conf.py`` searched in the current working directory or specified
using a command line argument. Anything specified in this configuration file
will override any framework specific settings.
The fourth place of configuration information are command line arguments
stored in an environment variable named ``GUNICORN_CMD_ARGS``.
Lastly, the command line arguments used to invoke Gunicorn are the final place Lastly, the command line arguments used to invoke Gunicorn are the final place
considered for configuration settings. If an option is specified on the command considered for configuration settings. If an option is specified on the command
line, this is the value that will be used. line, this is the value that will be used.
When a configuration file is specified in the command line arguments and in the
``GUNICORN_CMD_ARGS`` environment variable, only the configuration
file specified on the command line is used.
Once again, in order of least to most authoritative: Once again, in order of least to most authoritative:
1. Framework Settings 1. Environment Variables
2. Configuration File 2. Framework Settings
3. Command Line 3. Configuration File
4. ``GUNICORN_CMD_ARGS``
5. Command Line
.. note:: .. note::
To check your configuration when using the command line or the To print your resolved configuration when using the command line or the
configuration file you can run the following command::
$ gunicorn --print-config APP_MODULE
To check your resolved configuration when using the command line or the
configuration file you can run the following command:: configuration file you can run the following command::
$ gunicorn --check-config APP_MODULE $ gunicorn --check-config APP_MODULE
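The precedence above can be sketched as an ordered merge in which later (more authoritative) sources override earlier ones. This is only an illustration of the documented ordering, not Gunicorn's actual configuration loader:

```python
# Illustrative sketch of the documented precedence, least to most
# authoritative. Not Gunicorn's real loading code.
def resolve_settings(env_vars, framework, config_file, cmd_args_env, command_line):
    resolved = {}
    # Later sources override earlier ones.
    for source in (env_vars, framework, config_file, cmd_args_env, command_line):
        resolved.update(source)
    return resolved
```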
@ -47,14 +65,16 @@ usual::
There is also a ``--version`` flag available to the command line scripts that There is also a ``--version`` flag available to the command line scripts that
isn't mentioned in the list of :ref:`settings <settings>`. isn't mentioned in the list of :ref:`settings <settings>`.
.. _configuration_file:
Configuration File Configuration File
================== ==================
The configuration file should be a valid Python source file. It only needs to The configuration file should be a valid Python source file with a **python
be readable from the file system. More specifically, it does not need to be extension** (e.g. `gunicorn.conf.py`). It only needs to be readable from the
importable. Any Python is valid. Just consider that this will be run every time file system. More specifically, it does not have to be on the module path
you start Gunicorn (including when you signal Gunicorn to reload). (sys.path, PYTHONPATH). Any Python is valid. Just consider that this will be
run every time you start Gunicorn (including when you signal Gunicorn to reload).
To set a parameter, just assign to it. There's no special syntax. The values To set a parameter, just assign to it. There's no special syntax. The values
you provide will be used for the configuration values. you provide will be used for the configuration values.
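For example, a minimal ``gunicorn.conf.py`` might look like this (the particular values are arbitrary placeholders; the setting names are standard Gunicorn settings):

```python
# Example gunicorn.conf.py -- plain Python, plain assignments.
import multiprocessing

bind = "127.0.0.1:8000"
workers = multiprocessing.cpu_count() * 2 + 1  # a common starting point
worker_class = "gthread"
threads = 4
timeout = 30
```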
@ -13,7 +13,8 @@ Here is a small example where we create a very small WSGI app and load it with
a custom Application: a custom Application:
.. literalinclude:: ../../examples/standalone_app.py .. literalinclude:: ../../examples/standalone_app.py
:lines: 11-60 :start-after: # See the NOTICE for more information
:lines: 2-
Direct Usage of Existing WSGI Apps Direct Usage of Existing WSGI Apps
---------------------------------- ----------------------------------
@ -2,7 +2,7 @@
Deploying Gunicorn Deploying Gunicorn
================== ==================
We strongly recommend to use Gunicorn behind a proxy server. We strongly recommend using Gunicorn behind a proxy server.
Nginx Configuration Nginx Configuration
=================== ===================
@ -38,6 +38,22 @@ To turn off buffering, you only need to add ``proxy_buffering off;`` to your
} }
... ...
If you want to ignore aborted requests, like health checks from a load balancer, some
of which close the connection without waiting for a response, you need to turn
on `ignoring client abort`_.
To ignore aborted requests, you only need to add
``proxy_ignore_client_abort on;`` to your ``location`` block::
...
proxy_ignore_client_abort on;
...
.. note::
The default value of ``proxy_ignore_client_abort`` is ``off``. Error code
499 may appear in Nginx log and ``Ignoring EPIPE`` may appear in Gunicorn
log if loglevel is set to ``debug``.
It is recommended to pass protocol information to Gunicorn. Many web It is recommended to pass protocol information to Gunicorn. Many web
frameworks use this information to generate URLs. Without this frameworks use this information to generate URLs. Without this
information, the application may mistakenly generate 'http' URLs in information, the application may mistakenly generate 'http' URLs in
@ -216,7 +232,7 @@ A tool that is starting to be common on linux systems is Systemd_. It is a
system services manager that allows for strict process management, resources system services manager that allows for strict process management, resources
and permissions control. and permissions control.
Below are configurations files and instructions for using systemd to create Below are configuration files and instructions for using systemd to create
a unix socket for incoming Gunicorn requests. Systemd will listen on this a unix socket for incoming Gunicorn requests. Systemd will listen on this
socket and start gunicorn automatically in response to traffic. Later in socket and start gunicorn automatically in response to traffic. Later in
this section are instructions for configuring Nginx to forward web traffic this section are instructions for configuring Nginx to forward web traffic
@ -258,9 +274,9 @@ to the newly created unix socket:
# Our service won't need permissions for the socket, since it # Our service won't need permissions for the socket, since it
# inherits the file descriptor by socket activation # inherits the file descriptor by socket activation
# only the nginx daemon will need access to the socket # only the nginx daemon will need access to the socket
User=www-data SocketUser=www-data
# Optionally restrict the socket permissions even more. # Optionally restrict the socket permissions even more.
# Mode=600 # SocketMode=600
[Install] [Install]
WantedBy=sockets.target WantedBy=sockets.target
@ -286,8 +302,8 @@ HTML from your server in the terminal.
.. note:: .. note::
``www-data`` is the default nginx user in debian, other distriburions use ``www-data`` is the default nginx user in debian, other distributions use
different users (for example: ``http`` or ``nginx``). Check you distro to different users (for example: ``http`` or ``nginx``). Check your distro to
know what to put for the socket user, and for the sudo command. know what to put for the socket user, and for the sudo command.
You must now configure your web proxy to send traffic to the new Gunicorn You must now configure your web proxy to send traffic to the new Gunicorn
@ -357,3 +373,4 @@ utility::
.. _Virtualenv: https://pypi.python.org/pypi/virtualenv .. _Virtualenv: https://pypi.python.org/pypi/virtualenv
.. _Systemd: https://www.freedesktop.org/wiki/Software/systemd/ .. _Systemd: https://www.freedesktop.org/wiki/Software/systemd/
.. _Gaffer: https://gaffer.readthedocs.io/ .. _Gaffer: https://gaffer.readthedocs.io/
.. _`ignoring client abort`: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_client_abort
@ -46,6 +46,22 @@ Gevent_). Greenlets are an implementation of cooperative multi-threading for
Python. In general, an application should be able to make use of these worker Python. In general, an application should be able to make use of these worker
classes with no changes. classes with no changes.
For full greenlet support applications might need to be adapted.
When using, e.g., Gevent_ and Psycopg_ it makes sense to ensure psycogreen_ is
installed and `setup <http://www.gevent.org/api/gevent.monkey.html#plugins>`_.
Other applications might not be compatible at all as they, e.g., rely on
the original unpatched behavior.
Gthread Workers
---------------
The worker `gthread` is a threaded worker. It accepts connections in the
main loop. Accepted connections are added to the thread pool as a
connection job. On keepalive, connections are put back in the loop
waiting for an event. If no event happens after the keepalive timeout,
the connection is closed.
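The accept-loop-plus-thread-pool shape described above can be sketched roughly as follows (an illustration only, not Gunicorn's actual gthread implementation; keepalive handling is omitted):

```python
# Rough sketch of the gthread pattern: accept connections in the main
# loop, hand each accepted connection to a thread pool as a job.
# Not Gunicorn's real code.
from concurrent.futures import ThreadPoolExecutor

def serve(listener, handle, max_threads=4, max_conns=None):
    accepted = 0
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        while max_conns is None or accepted < max_conns:
            conn, addr = listener.accept()   # blocking accept in main loop
            pool.submit(handle, conn, addr)  # connection job for the pool
            accepted += 1
```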
Tornado Workers Tornado Workers
--------------- ---------------
@ -59,32 +75,10 @@ WSGI application, this is not a recommended configuration.
AsyncIO Workers AsyncIO Workers
--------------- ---------------
These workers are compatible with python3. You have two kind of workers. These workers are compatible with Python 3.
The worker `gthread` is a threaded worker. It accepts connections in the You can port also your application to use aiohttp_'s ``web.Application`` API and use the
main loop, accepted connections are added to the thread pool as a ``aiohttp.worker.GunicornWebWorker`` worker.
connection job. On keepalive connections are put back in the loop
waiting for an event. If no event happen after the keep alive timeout,
the connection is closed.
The worker `gaiohttp` is a full asyncio worker using aiohttp_.
.. note::
The ``gaiohttp`` worker requires the aiohttp_ module to be installed.
aiohttp_ has removed its native WSGI application support in version 2.
If you want to continue to use the ``gaiohttp`` worker with your WSGI
application (e.g. an application that uses Flask or Django), there are
three options available:
#. Install aiohttp_ version 1.3.5 instead of version 2::
$ pip install aiohttp==1.3.5
#. Use aiohttp_wsgi_ to wrap your WSGI application. You can take a look
at the `example`_ in the Gunicorn repository.
#. Port your application to use aiohttp_'s ``web.Application`` API.
#. Use the ``aiohttp.worker.GunicornWebWorker`` worker instead of the
deprecated ``gaiohttp`` worker.
Choosing a Worker Type Choosing a Worker Type
====================== ======================
@ -149,14 +143,11 @@ signal, as the application code will be shared among workers but loaded only in
the worker processes (unlike when using the preload setting, which loads the the worker processes (unlike when using the preload setting, which loads the
code in the master process). code in the master process).
.. note::
Under Python 2.x, you need to install the 'futures' package to use this
feature.
.. _Greenlets: https://github.com/python-greenlet/greenlet .. _Greenlets: https://github.com/python-greenlet/greenlet
.. _Eventlet: http://eventlet.net/ .. _Eventlet: http://eventlet.net/
.. _Gevent: http://www.gevent.org/ .. _Gevent: http://www.gevent.org/
.. _Hey: https://github.com/rakyll/hey .. _Hey: https://github.com/rakyll/hey
.. _aiohttp: https://aiohttp.readthedocs.io/en/stable/ .. _aiohttp: https://docs.aiohttp.org/en/stable/deployment.html#nginx-gunicorn
.. _aiohttp_wsgi: https://aiohttp-wsgi.readthedocs.io/en/stable/index.html
.. _`example`: https://github.com/benoitc/gunicorn/blob/master/examples/frameworks/flaskapp_aiohttp_wsgi.py .. _`example`: https://github.com/benoitc/gunicorn/blob/master/examples/frameworks/flaskapp_aiohttp_wsgi.py
.. _Psycopg: http://initd.org/psycopg/
.. _psycogreen: https://github.com/psycopg/psycogreen/
@ -106,9 +106,9 @@ threads. However `a work has been started
Why I don't see any logs in the console? Why I don't see any logs in the console?
---------------------------------------- ----------------------------------------
In version R19, Gunicorn doesn't log by default in the console. In version 19.0, Gunicorn doesn't log by default in the console.
To watch the logs in the console you need to use the option ``--log-file=-``. To watch the logs in the console you need to use the option ``--log-file=-``.
In version R20, Gunicorn logs to the console by default again. In version 19.2, Gunicorn logs to the console by default again.
Kernel Parameters Kernel Parameters
================= =================
@ -129,9 +129,13 @@ One of the first settings that usually needs to be bumped is the maximum number
of open file descriptors for a given process. For the confused out there, of open file descriptors for a given process. For the confused out there,
remember that Unices treat sockets as files. remember that Unices treat sockets as files.
:: .. warning:: ``sudo ulimit`` may not work
$ sudo ulimit -n 2048 Considering non-privileged users are not able to relax the limit, you should
first switch to the root user, increase the limit, then run gunicorn. Using ``sudo
ulimit`` would not take effect.
Try systemd's service unit file, or an initscript which runs as root.
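Under systemd, for example, the limit can be raised declaratively in the service unit (a sketch; the value 2048 is a placeholder to adjust for your deployment):

```ini
# Sketch of a [Service] section raising the open-file limit for the
# gunicorn service; the value is a placeholder assumption.
[Service]
LimitNOFILE=2048
```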
How can I increase the maximum socket backlog? How can I increase the maximum socket backlog?
---------------------------------------------- ----------------------------------------------
@ -205,3 +209,30 @@ Check the result::
tmpfs 65536 0 65536 0% /mem tmpfs 65536 0 65536 0% /mem
Now you can set ``--worker-tmp-dir /mem``. Now you can set ``--worker-tmp-dir /mem``.
Why are Workers Silently Killed?
--------------------------------------------------------------
A sometimes subtle problem to debug is when a worker process is killed and there
is little logging information about what happened.
If you use a reverse proxy like NGINX, you might see a 502 returned to a client.
In the gunicorn logs you might simply see ``[35] [INFO] Booting worker with pid: 35``
It's completely normal for workers to stop and start, for example due to the
max-requests setting. Ordinarily gunicorn will capture any signals and log something.
This particular failure case is usually due to a SIGKILL being received; since it's
not possible to catch this signal, silence is a common side effect. A common
cause of SIGKILL is the OOM killer terminating a process due to a low-memory condition.
This is increasingly common in container deployments where memory limits are enforced
by cgroups; you'll usually see evidence of this from dmesg::
dmesg | grep gunicorn
Memory cgroup out of memory: Kill process 24534 (gunicorn) score 1506 or sacrifice child
Killed process 24534 (gunicorn) total-vm:1016648kB, anon-rss:550160kB, file-rss:25824kB, shmem-rss:0kB
In these instances adjusting the memory limit is usually your best bet; it's also possible
to configure the OOM killer not to send SIGKILL by default.
@ -7,7 +7,7 @@ Gunicorn - WSGI server
:Website: http://gunicorn.org :Website: http://gunicorn.org
:Source code: https://github.com/benoitc/gunicorn :Source code: https://github.com/benoitc/gunicorn
:Issue tracker: https://github.com/benoitc/gunicorn/issues :Issue tracker: https://github.com/benoitc/gunicorn/issues
:IRC: ``#gunicorn`` on Freenode :IRC: ``#gunicorn`` on Libera Chat
:Usage questions: https://github.com/benoitc/gunicorn/issues :Usage questions: https://github.com/benoitc/gunicorn/issues
Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. It's a pre-fork Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. It's a pre-fork
@ -23,7 +23,7 @@ Features
* Simple Python configuration * Simple Python configuration
* Multiple worker configurations * Multiple worker configurations
* Various server hooks for extensibility * Various server hooks for extensibility
* Compatible with Python 3.x >= 3.4 * Compatible with Python 3.x >= 3.5
Contents Contents
@ -4,7 +4,7 @@ Installation
.. highlight:: bash .. highlight:: bash
:Requirements: **Python 3.x >= 3.4** :Requirements: **Python 3.x >= 3.5**
To install the latest released version of Gunicorn:: To install the latest released version of Gunicorn::
@ -40,7 +40,7 @@ want to consider one of the alternate worker types.
$ pip install gunicorn[gevent] # Or, using extra $ pip install gunicorn[gevent] # Or, using extra
.. note:: .. note::
Both require ``greenlet``, which should get installed automatically, Both require ``greenlet``, which should get installed automatically.
If its installation fails, you probably need to install If its installation fails, you probably need to install
the Python headers. These headers are available in most package the Python headers. These headers are available in most package
managers. On Ubuntu the package name for ``apt-get`` is managers. On Ubuntu the package name for ``apt-get`` is
@ -52,10 +52,32 @@ want to consider one of the alternate worker types.
installed, this is the most likely reason. installed, this is the most likely reason.
Extra Packages
==============
Some Gunicorn options require additional packages. You can use the ``[extra]``
syntax to install these at the same time as Gunicorn.
Most extra packages are needed for alternate worker types. See the
`design docs`_ for more information on when you'll want to consider an
alternate worker type.
* ``gunicorn[eventlet]`` - Eventlet-based greenlets workers
* ``gunicorn[gevent]`` - Gevent-based greenlets workers
* ``gunicorn[gthread]`` - Threaded workers
* ``gunicorn[tornado]`` - Tornado-based workers, not recommended
If you are running more than one instance of Gunicorn, the :ref:`proc-name`
setting will help distinguish between them in tools like ``ps`` and ``top``.
* ``gunicorn[setproctitle]`` - Enables setting the process name
Multiple extras can be combined, like
``pip install gunicorn[gevent,setproctitle]``.
Debian GNU/Linux Debian GNU/Linux
================ ================
If you are using Debian GNU/Linux and it is recommended that you use If you are using Debian GNU/Linux it is recommended that you use
system packages to install Gunicorn except maybe when you want to use system packages to install Gunicorn except maybe when you want to use
different versions of Gunicorn with virtualenv. This has a number of different versions of Gunicorn with virtualenv. This has a number of
advantages: advantages:
@ -74,16 +96,43 @@ advantages:
rolled back in case of incompatibility. The package can also be purged rolled back in case of incompatibility. The package can also be purged
entirely from the system in seconds. entirely from the system in seconds.
stable ("stretch") stable ("buster")
------------------ ------------------
The version of Gunicorn in the Debian_ "stable" distribution is 19.6.0 (June The version of Gunicorn in the Debian_ "stable" distribution is 19.9.0
2017). You can install it using:: (December 2020). You can install it using::
$ sudo apt-get install gunicorn $ sudo apt-get install gunicorn3
You can also use the most recent version by using `Debian Backports`_. You can also use the most recent version 20.0.4 (December 2020) by using
First, copy the following line to your ``/etc/apt/sources.list``:: `Debian Backports`_. First, copy the following line to your
``/etc/apt/sources.list``::
deb http://ftp.debian.org/debian buster-backports main
Then, update your local package lists::
$ sudo apt-get update
You can then install the latest version using::
$ sudo apt-get -t buster-backports install gunicorn
oldstable ("stretch")
---------------------
While Debian releases newer than Stretch will give you Gunicorn with Python 3
support no matter whether you install the gunicorn or gunicorn3 package, for Stretch
you specifically have to install gunicorn3 to get Python 3 support.
The version of Gunicorn in the Debian_ "oldstable" distribution is 19.6.0
(December 2020). You can install it using::
$ sudo apt-get install gunicorn3
You can also use the most recent version 19.7.1 (December 2020) by using
`Debian Backports`_. First, copy the following line to your
``/etc/apt/sources.list``::
deb http://ftp.debian.org/debian stretch-backports main
@@ -93,34 +142,13 @@ Then, update your local package lists::
You can then install the latest version using::

    $ sudo apt-get -t stretch-backports install gunicorn3

Testing ("bullseye") / Unstable ("sid")
---------------------------------------

"bullseye" and "sid" contain the latest released version of Gunicorn 20.0.4
(December 2020). You can install it in the usual way::

    $ sudo apt-get install gunicorn

@@ -128,8 +156,8 @@ install it in the usual way::
Ubuntu
======

Ubuntu_ 20.04 LTS (Focal Fossa) or later contains the Gunicorn package
20.0.4 (December 2020) by default, so you can install it in the usual way::

    $ sudo apt-get update
    $ sudo apt-get install gunicorn
@ -2,72 +2,41 @@
Changelog
=========

21.2.0 - 2023-07-19
===================

- fix thread worker: revert change considering connection as idle.

*** NOTE ***

This is fixing the bad file descriptor error.

21.1.0 - 2023-07-18
===================

- fix thread worker: fix socket removal from the queue

21.0.1 - 2023-07-17
===================

- fix documentation build

21.0.0 - 2023-07-17
===================

- support python 3.11
- fix gevent and eventlet workers
- fix threads support (gthread): improve performance and unblock requests
- SSL: now use SSLContext object
- HTTP parser: miscellaneous fixes
- remove unnecessary setuid calls
- fix testing
- improve logging
- miscellaneous fixes to core engine

*** RELEASE NOTE ***

We made this release major to start our new release cycle. More info will be
provided on our discussion forum.
History
=======
@@ -75,6 +44,11 @@ History
.. toctree::
   :titlesonly:
2023-news
2021-news
2020-news
2019-news
2018-news
2017-news
2016-news
2015-news
@@ -83,3 +57,4 @@ History
2012-news
2011-news
2010-news
@@ -4,8 +4,9 @@ Running Gunicorn
.. highlight:: bash

You can run Gunicorn by using commands or integrate with popular frameworks
like Django, Pyramid, or TurboGears. For deploying Gunicorn in production see
:doc:`deploy`.

Commands
========
@@ -20,12 +21,15 @@ gunicorn
Basic usage::

    $ gunicorn [OPTIONS] [WSGI_APP]

Where ``WSGI_APP`` is of the pattern ``$(MODULE_NAME):$(VARIABLE_NAME)``. The
module name can be a full dotted path. The variable name refers to a WSGI
callable that should be found in the specified module.

.. versionchanged:: 20.1.0
   ``WSGI_APP`` is optional if it is defined in a :ref:`config` file.

Example with the test app:

.. code-block:: python

@@ -41,10 +45,31 @@ Example with the test app:
    start_response(status, response_headers)
    return iter([data])

You can now run the app with the following command:

.. code-block:: text

    $ gunicorn --workers=2 test:app
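As the ``versionchanged`` note says, since 20.1.0 the positional ``WSGI_APP`` may be omitted when it is named in a configuration file. A minimal sketch, assuming the ``test:app`` module from the example above and the default config file name ``gunicorn.conf.py`` that Gunicorn looks for in the working directory:

```python
# gunicorn.conf.py -- picked up automatically from the working directory.
# With wsgi_app set here, `gunicorn` can be started with no positional argument.
wsgi_app = "test:app"
workers = 2
bind = "127.0.0.1:8000"
```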
The variable name can also be a function call. In that case the name
will be imported from the module, then called to get the application
object. This is commonly referred to as the "application factory"
pattern.
.. code-block:: python
    def create_app():
        app = FrameworkApp()
        ...
        return app
.. code-block:: text
    $ gunicorn --workers=2 'test:create_app()'
Positional and keyword arguments can also be passed, but it is
recommended to load configuration from environment variables rather than
the command line.
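That recommendation can be sketched as follows, with a hypothetical factory (names ``create_app`` and ``APP_GREETING`` are illustrative, not from Gunicorn) that prefers environment variables over factory arguments:

```python
import os

def create_app(greeting=None):
    # Prefer configuration from the environment over positional arguments
    # passed on the gunicorn command line; both are accepted here only to
    # illustrate the difference.
    greeting = greeting or os.environ.get("APP_GREETING", "Hello, World!")
    data = greeting.encode("utf-8") + b"\n"

    def app(environ, start_response):
        start_response("200 OK", [
            ("Content-Type", "text/plain"),
            ("Content-Length", str(len(data))),
        ])
        return iter([data])

    return app
```

With this shape, ``gunicorn 'myapp:create_app()'`` plus an exported ``APP_GREETING`` replaces baking values into the command line.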
Commonly Used Arguments
^^^^^^^^^^^^^^^^^^^^^^^
@@ -61,7 +86,7 @@ Commonly Used Arguments
  to run. You'll definitely want to read the production page for the
  implications of this parameter. You can set this to ``$(NAME)``
  where ``$(NAME)`` is one of ``sync``, ``eventlet``, ``gevent``,
  ``tornado``, ``gthread``.
  ``sync`` is the default. See the :ref:`worker-class` documentation for more
  information.
* ``-n APP_NAME, --name=APP_NAME`` - If setproctitle_ is installed you can
@@ -78,7 +103,7 @@ See :ref:`configuration` and :ref:`settings` for detailed usage.

Integration
===========

Gunicorn also provides integration for Django and Paste Deploy applications.
Django
------
@@ -104,13 +129,40 @@ option::

    $ gunicorn --env DJANGO_SETTINGS_MODULE=myproject.settings myproject.wsgi

Paste Deployment
----------------

Frameworks such as Pyramid and Turbogears are typically configured using Paste
Deployment configuration files. If you would like to use these files with
Gunicorn, there are two approaches.
As a server runner, Gunicorn can serve your application using the commands from
your framework, such as ``pserve`` or ``gearbox``. To use Gunicorn with these
commands, specify it as a server in your configuration file:
.. code-block:: ini
    [server:main]
    use = egg:gunicorn#main
    host = 127.0.0.1
    port = 8080
    workers = 3
This approach is the quickest way to get started with Gunicorn, but there are
some limitations. Gunicorn will have no control over how the application is
loaded, so settings such as reload_ will have no effect and Gunicorn will be
unable to hot upgrade a running application. Using the daemon_ option may
confuse your command line tool. Instead, use the built-in support for these
features provided by that tool. For example, run ``pserve --reload`` instead of
specifying ``reload = True`` in the server configuration block. For advanced
configuration of Gunicorn, such as `Server Hooks`_, specifying a Gunicorn
configuration file using the ``config`` key is supported.
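As a sketch of that setup: the ``[server:main]`` block would gain a line such as ``config = %(here)s/gunicorn.conf.py`` (path hypothetical), and the referenced Python file could carry settings and hooks, for example:

```python
# gunicorn.conf.py (hypothetical file referenced by the "config" key in the
# Paste ini).  Settings are module-level names; server hooks are plain
# functions named after the hook they implement.
workers = 3

def when_ready(server):
    # Runs in the master process once the arbiter is ready to accept work.
    server.log.info("server is ready; spawning workers")
```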
To use the full power of Gunicorn's reloading and hot code upgrades, use the
`paste option`_ to run your application instead. When used this way, Gunicorn
will use the application defined by the PasteDeploy configuration file, but
Gunicorn will not use any server configuration defined in the file. Instead,
`configure gunicorn`_.
For example::
@@ -120,4 +172,13 @@ Or use a different application::

    $ gunicorn --paste development.ini#admin -b :8080 --chdir /path/to/project

With both approaches, Gunicorn will use any loggers section found in the Paste
Deployment configuration file, unless instructed otherwise by specifying
additional `logging settings`_.
.. _reload: http://docs.gunicorn.org/en/latest/settings.html#reload
.. _daemon: http://docs.gunicorn.org/en/latest/settings.html#daemon
.. _Server Hooks: http://docs.gunicorn.org/en/latest/settings.html#server-hooks
.. _paste option: http://docs.gunicorn.org/en/latest/settings.html#paste
.. _configure gunicorn: http://docs.gunicorn.org/en/latest/configure.html
.. _logging settings: http://docs.gunicorn.org/en/latest/settings.html#logging
File diff suppressed because it is too large
examples/deep/test.py Normal file
@ -0,0 +1,27 @@
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
#
# Example code from Eventlet sources
from wsgiref.validate import validator
from gunicorn import __version__
@validator
def app(environ, start_response):
    """Simplest possible application object"""
    data = b'Hello, World!\n'
    status = '200 OK'
    response_headers = [
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(data))),
        ('X-Gunicorn-Version', __version__),
        ('Foo', 'B\u00e5r'),  # Foo: Bår
    ]
    start_response(status, response_headers)
    return iter([data])
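The ``@validator`` decorator used in this new example file is the stdlib ``wsgiref.validate`` wrapper: it raises ``AssertionError`` as soon as the application or the server violates the WSGI spec. A self-contained sketch of driving such a validated app (the ``run_get`` helper is hypothetical test scaffolding, not part of the example file):

```python
from wsgiref.util import setup_testing_defaults
from wsgiref.validate import validator

@validator
def app(environ, start_response):
    """Same shape as the example file above, minus the extra headers."""
    data = b'Hello, World!\n'
    start_response('200 OK', [
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(data))),
    ])
    return iter([data])

def run_get(wsgi_app):
    # Build a minimal, spec-complete GET environ and drive one request.
    environ = {'QUERY_STRING': ''}
    setup_testing_defaults(environ)
    captured = {}
    def start_response(status, headers):
        captured['status'] = status
    result = wsgi_app(environ, start_response)
    body = b''.join(result)
    if hasattr(result, 'close'):
        result.close()  # the validator checks iterators are closed
    return captured['status'], body
```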
@@ -5,12 +5,9 @@
#
# Example code from Eventlet sources

from gunicorn import __version__


def app(environ, start_response):
    """Simplest possible application object"""
@@ -24,8 +21,7 @@ def app(environ, start_response):
    response_headers = [
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(data))),
        ('X-Gunicorn-Version', __version__)
    ]
    start_response(status, response_headers)
    return iter([data])
@@ -214,3 +214,27 @@ def worker_int(worker):
def worker_abort(worker):
    worker.log.info("worker received SIGABRT signal")
def ssl_context(conf, default_ssl_context_factory):
    import ssl

    # The default SSLContext returned by the factory function is initialized
    # with the TLS parameters from config, including TLS certificates and other
    # parameters.
    context = default_ssl_context_factory()

    # The SSLContext can be further customized, for example by enforcing
    # minimum TLS version.
    context.minimum_version = ssl.TLSVersion.TLSv1_3

    # Server can also return different server certificate depending which
    # hostname the client uses. Requires Python 3.7 or later.
    def sni_callback(socket, server_hostname, context):
        if server_hostname == "foo.127.0.0.1.nip.io":
            new_context = default_ssl_context_factory()
            new_context.load_cert_chain(certfile="foo.pem", keyfile="foo-key.pem")
            socket.context = new_context
    context.sni_callback = sni_callback

    return context
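Outside of the hook, the same stdlib ``ssl`` calls can be exercised directly. A sketch that assumes nothing about Gunicorn itself (``make_server_context`` is a stand-in for the hook's ``default_ssl_context_factory``):

```python
import ssl

def make_server_context():
    # Stand-in for default_ssl_context_factory(): a server-side context
    # with the same minimum-version knob the hook above adjusts.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_3

    def sni_callback(ssl_socket, server_hostname, context):
        # Hypothetical per-hostname switch; a real callback would call
        # load_cert_chain() on a fresh context here.
        pass

    context.sni_callback = sni_callback
    return context
```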
@@ -10,7 +10,7 @@ def child_process(queue):
class GunicornSubProcessTestMiddleware(object):
    def __init__(self):
        super().__init__()
        self.queue = Queue()
        self.process = Process(target=child_process, args=(self.queue,))
        self.process.start()
@@ -12,7 +12,7 @@ class SimpleTest(TestCase):
        """
        Tests that 1 + 1 always equals 2.
        """
        self.assertEqual(1 + 1, 2)

__test__ = {"doctest": """
Another way to test that 1 + 1 is equal to 2.
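The only change in this hunk is ``failUnlessEqual`` becoming ``assertEqual``: the former was a deprecated alias that newer Python versions remove, while the canonical name works everywhere. A minimal standalone sketch:

```python
import unittest

class SimpleTest(unittest.TestCase):
    def test_basic_addition(self):
        # failUnlessEqual was a deprecated alias of assertEqual and has
        # been removed in recent Python; assertEqual is the spelling to use.
        self.assertEqual(1 + 1, 2)

# Run the case programmatically instead of via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SimpleTest)
result = unittest.TestResult()
suite.run(result)
```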
@ -0,0 +1,5 @@
-r requirements_flaskapp.txt
-r requirements_cherryapp.txt
-r requirements_pyramidapp.txt
-r requirements_tornadoapp.txt
-r requirements_webpyapp.txt
@ -0,0 +1 @@
cherrypy
@ -0,0 +1 @@
flask
@ -0,0 +1 @@
pyramid
@ -0,0 +1 @@
tornado<6
@ -0,0 +1 @@
web-py
@@ -13,4 +13,4 @@ def app(environ, start_response):
    log.info("Hello Info!")
    log.warn("Hello Warn!")
    log.error("Hello Error!")
    return [b"Hello World!\n"]
@@ -9,7 +9,7 @@
#
# Launch a server with the app in a terminal
#
#   $ gunicorn -w3 readline_app:app
#
# Then in another terminal launch the following command:
#
@@ -27,8 +27,7 @@ def app(environ, start_response):
    response_headers = [
        ('Content-type', 'text/plain'),
        ('Transfer-Encoding', "chunked"),
        ('X-Gunicorn-Version', __version__)
    ]
    start_response(status, response_headers)
@@ -42,4 +41,4 @@ def app(environ, start_response):
        print(line)
        lines.append(line)
    return iter(lines)
@@ -35,7 +35,7 @@ class StandaloneApplication(gunicorn.app.base.BaseApplication):
    def __init__(self, app, options=None):
        self.options = options or {}
        self.application = app
        super().__init__()

    def load_config(self):
        config = {key: value for key, value in self.options.items()
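The only change in this hunk is Python 2 style ``super(StandaloneApplication, self).__init__()`` becoming the zero-argument ``super().__init__()``; on Python 3 the two forms behave identically, as this toy sketch (classes invented for illustration, no Gunicorn involved) shows:

```python
class Base:
    def __init__(self):
        self.initialized = True

class Explicit(Base):
    def __init__(self):
        # Python 2 compatible spelling, as before the change.
        super(Explicit, self).__init__()

class ZeroArg(Base):
    def __init__(self):
        # Python 3 zero-argument form, as after the change.
        super().__init__()
```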
@@ -21,7 +21,7 @@ def app(environ, start_response):
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(data))),
        ('X-Gunicorn-Version', __version__),
        ('Foo', 'B\u00e5r'),  # Foo: Bår
    ]
    start_response(status, response_headers)
    return iter([data])
@@ -250,7 +250,7 @@ class WebSocket(object):
            data = struct.unpack('<I', buf[f['hlen']:f['hlen']+4])[0]
            of1 = f['hlen']+4
            b = ''
            for i in range(0, int(f['length']/4)):
                mask = struct.unpack('<I', buf[of1+4*i:of1+4*(i+1)])[0]
                b += struct.pack('I', data ^ mask)
@@ -292,10 +292,8 @@ class WebSocket(object):
        As per the dataframing section (5.3) for the websocket spec
        """
        if isinstance(message, str):
            message = message.encode('utf-8')
        packed = "\x00%s\xFF" % message
        return packed
@@ -353,7 +351,7 @@ class WebSocket(object):
    def send(self, message):
        """Send a message to the browser.

        *message* should be convertible to a string; unicode objects should be
        encodable as utf-8. Raises socket.error with errno of 32
        (broken pipe) if the socket has already been closed by the client."""
        if self.version in ['7', '8', '13']:
@@ -251,7 +251,7 @@ class WebSocket(object):
            data = struct.unpack('<I', buf[f['hlen']:f['hlen']+4])[0]
            of1 = f['hlen']+4
            b = ''
            for i in range(0, int(f['length']/4)):
                mask = struct.unpack('<I', buf[of1+4*i:of1+4*(i+1)])[0]
                b += struct.pack('I', data ^ mask)
@@ -293,10 +293,8 @@ class WebSocket(object):
        As per the dataframing section (5.3) for the websocket spec
        """
        if isinstance(message, str):
            message = message.encode('utf-8')
        packed = "\x00%s\xFF" % message
        return packed
@@ -354,7 +352,7 @@ class WebSocket(object):
    def send(self, message):
        """Send a message to the browser.

        *message* should be convertible to a string; unicode objects should be
        encodable as utf-8. Raises socket.error with errno of 32
        (broken pipe) if the socket has already been closed by the client."""
        if self.version in ['7', '8', '13']:
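These hunks only swap Python 2 ``xrange`` for ``range``; the XOR unmasking arithmetic is untouched. A self-contained sketch of the same word-by-word masking (names ``xor_mask`` and ``mask_key`` are illustrative, not from the example files):

```python
import struct

def xor_mask(mask_key, payload):
    # XOR each little-endian 4-byte word of the payload with the mask,
    # mirroring the range() loop above; payload length is assumed to be
    # a multiple of 4 for this sketch.
    mask = struct.unpack('<I', mask_key)[0]
    out = b''
    for i in range(len(payload) // 4):
        word = struct.unpack('<I', payload[4 * i:4 * (i + 1)])[0]
        out += struct.pack('<I', word ^ mask)
    return out
```

Because XOR is an involution, applying the same mask twice round-trips the payload.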
@@ -8,7 +8,7 @@ max_mem = 100000

class MemoryWatch(threading.Thread):
    def __init__(self, server, max_mem):
        super().__init__()
        self.daemon = True
        self.server = server
        self.max_mem = max_mem
@@ -3,6 +3,7 @@
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.

version_info = (21, 2, 0)
__version__ = ".".join([str(v) for v in version_info])
SERVER = "gunicorn"
SERVER_SOFTWARE = "%s/%s" % (SERVER, __version__)
gunicorn/__main__.py Normal file
@ -0,0 +1,7 @@
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
from gunicorn.app.wsgiapp import run
run()
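Adding ``gunicorn/__main__.py`` is what enables ``python -m gunicorn``: the ``-m`` switch executes a package's ``__main__`` submodule. A sketch of that mechanism with a throwaway package (``demo_pkg`` is a hypothetical name, built in a temporary directory):

```python
import os
import runpy
import sys
import tempfile

# Build a minimal package whose __main__.py mimics gunicorn's:
# import a callable from the package and invoke it.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "demo_pkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("def run():\n    return 'ran'\n")
with open(os.path.join(pkg, "__main__.py"), "w") as f:
    f.write("from demo_pkg import run\nresult = run()\n")

sys.path.insert(0, tmp)
# runpy.run_module on a package name is what `python -m demo_pkg`
# does under the hood: it executes demo_pkg.__main__.
namespace = runpy.run_module("demo_pkg", run_name="__main__")
```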
@ -1,65 +0,0 @@
def _check_if_pyc(fname):
    """Return True if the extension is .pyc, False if .py
    and None if otherwise"""
    from imp import find_module
    from os.path import realpath, dirname, basename, splitext

    # Normalize the file-path for the find_module()
    filepath = realpath(fname)
    dirpath = dirname(filepath)
    module_name = splitext(basename(filepath))[0]

    # Validate and fetch
    try:
        fileobj, fullpath, (_, _, pytype) = find_module(module_name, [dirpath])
    except ImportError:
        raise IOError("Cannot find config file. "
                      "Path maybe incorrect! : {0}".format(filepath))
    return pytype, fileobj, fullpath


def _get_codeobj(pyfile):
    """ Returns the code object, given a python file """
    from imp import PY_COMPILED, PY_SOURCE

    result, fileobj, fullpath = _check_if_pyc(pyfile)

    # WARNING:
    # fp.read() can blowup if the module is extremely large file.
    # Lookout for overflow errors.
    try:
        data = fileobj.read()
    finally:
        fileobj.close()

    # This is a .pyc file. Treat accordingly.
    if result is PY_COMPILED:
        # .pyc format is as follows:
        # 0 - 4 bytes: Magic number, which changes with each create of .pyc file.
        # First 2 bytes change with each marshal of .pyc file. Last 2 bytes is "\r\n".
        # 4 - 8 bytes: Datetime value, when the .py was last changed.
        # 8 - EOF: Marshalled code object data.
        # So to get code object, just read the 8th byte onwards till EOF, and
        # UN-marshal it.
        import marshal
        code_obj = marshal.loads(data[8:])
    elif result is PY_SOURCE:
        # This is a .py file.
        code_obj = compile(data, fullpath, 'exec')
    else:
        # Unsupported extension
        raise Exception("Input file is unknown format: {0}".format(fullpath))

    # Return code object
    return code_obj


def execfile_(fname, *args):
    if fname.endswith(".pyc"):
        code = _get_codeobj(fname)
    else:
        with open(fname, 'rb') as file:
            code = compile(file.read(), fname, 'exec')
    return exec(code, *args)
@@ -2,16 +2,18 @@
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.

import importlib.util
import importlib.machinery
import os
import sys
import traceback

from gunicorn import util
from gunicorn.arbiter import Arbiter
from gunicorn.config import Config, get_default_config_file
from gunicorn import debug


class BaseApplication(object):
    """
    An application interface for configuring and loading
@@ -93,25 +95,30 @@ class Application(BaseApplication):
        if not os.path.exists(filename):
            raise RuntimeError("%r doesn't exist" % filename)

        ext = os.path.splitext(filename)[1]

        try:
            module_name = '__config__'
            if ext in [".py", ".pyc"]:
                spec = importlib.util.spec_from_file_location(module_name, filename)
            else:
                msg = "configuration file should have a valid Python extension.\n"
                util.warn(msg)
                loader_ = importlib.machinery.SourceFileLoader(module_name, filename)
                spec = importlib.util.spec_from_file_location(module_name, filename, loader=loader_)
            mod = importlib.util.module_from_spec(spec)
            sys.modules[module_name] = mod
            spec.loader.exec_module(mod)
        except Exception:
            print("Failed to read config file: %s" % filename, file=sys.stderr)
            traceback.print_exc()
            sys.stderr.flush()
            sys.exit(1)

        return vars(mod)

    def get_config_from_module_name(self, module_name):
        return vars(importlib.import_module(module_name))

    def load_config_from_module_name_or_filename(self, location):
        """
@@ -135,7 +142,7 @@ class Application(BaseApplication):
                continue
            try:
                self.cfg.set(k.lower(), v)
            except Exception:
                print("Invalid value for %s: %s\n" % (k, v), file=sys.stderr)
                sys.stderr.flush()
                raise
@@ -193,10 +200,13 @@ class Application(BaseApplication):
        self.chdir()

    def run(self):
        if self.cfg.print_config:
            print(self.cfg)

        if self.cfg.print_config or self.cfg.check_config:
            try:
                self.load()
            except Exception:
                msg = "\nError while loading the application:\n"
                print(msg, file=sys.stderr)
                traceback.print_exc()
@@ -208,6 +218,11 @@ class Application(BaseApplication):
            debug.spew()

        if self.cfg.daemon:
            if os.environ.get('NOTIFY_SOCKET'):
                msg = "Warning: you shouldn't specify `daemon = True`" \
                      " when launching by systemd with `Type = notify`"
                print(msg, file=sys.stderr, flush=True)
            util.daemonize(self.cfg.enable_stdio_inheritance)

        # set python paths
@@ -218,4 +233,4 @@ class Application(BaseApplication):
            if pythonpath not in sys.path:
                sys.path.insert(0, pythonpath)

        super().run()
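The new loader in this hunk is the standard ``importlib`` recipe for executing a Python file as a module. A standalone sketch of the same steps (throwaway temp file; the module name ``__config__`` matches the code above):

```python
import importlib.util
import sys
import tempfile

# Write a throwaway "config file" to load.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("workers = 4\nbind = '127.0.0.1:8000'\n")
    filename = f.name

# Same recipe as get_config_from_filename above: build a spec from the
# path, materialize a module, register it, and execute it.
spec = importlib.util.spec_from_file_location("__config__", filename)
mod = importlib.util.module_from_spec(spec)
sys.modules["__config__"] = mod
spec.loader.exec_module(mod)

# vars(mod) is what the application hands to the config machinery.
config = {k: v for k, v in vars(mod).items() if not k.startswith("__")}
```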
@@ -3,206 +3,73 @@
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.

import configparser
import os

from paste.deploy import loadapp

from gunicorn.app.wsgiapp import WSGIApplication
from gunicorn.config import get_default_config_file


def get_wsgi_app(config_uri, name=None, defaults=None):
    if ':' not in config_uri:
        config_uri = "config:%s" % config_uri

    return loadapp(
        config_uri,
        name=name,
        relative_to=os.getcwd(),
        global_conf=defaults,
    )


def has_logging_config(config_file):
    parser = configparser.ConfigParser()
    parser.read([config_file])
    return parser.has_section('loggers')


def serve(app, global_conf, **local_conf):
    """\
    A Paste Deployment server runner.

    Example configuration:

        [server:main]
        use = egg:gunicorn#main
        host = 127.0.0.1
        port = 5000
    """
    config_file = global_conf['__file__']
    gunicorn_config_file = local_conf.pop('config', None)

    host = local_conf.pop('host', '')
    port = local_conf.pop('port', '')
    if host and port:
        local_conf['bind'] = '%s:%s' % (host, port)
    elif host:
        local_conf['bind'] = host.split(',')

    class PasterServerApplication(WSGIApplication):
        def load_config(self):
            self.cfg.set("default_proc_name", config_file)

            if has_logging_config(config_file):
                self.cfg.set("logconfig", config_file)

            if gunicorn_config_file:
                self.load_config_from_file(gunicorn_config_file)
            else:
                default_gunicorn_config_file = get_default_config_file()
                if default_gunicorn_config_file is not None:
                    self.load_config_from_file(default_gunicorn_config_file)

            for k, v in local_conf.items():
                if v is not None:
                    self.cfg.set(k.lower(), v)

        def load(self):
            return app

    PasterServerApplication().run()
def paste_server(app, gcfg=None, host="127.0.0.1", port=None, **kwargs):
"""\
A paster server.
The entry point in your paster ini file should look like this:
[server:main]
use = egg:gunicorn#main
host = 127.0.0.1
port = 5000
"""
util.warn("""This command is deprecated.
You should now use the `--paste` option. Ex.:
gunicorn --paste development.ini
""")
from gunicorn.app.pasterapp import PasterServerApplication
PasterServerApplication(app, gcfg=gcfg, host=host, port=port, **kwargs).run()
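This hunk replaces the private `_has_logging_config` helper with a public `has_logging_config` used by both `PasterServerApplication` and `WSGIApplication`. Based on the removed `load_config` code above (which reads the ini file with `ConfigParser` and checks for a `loggers` section), a minimal self-contained sketch of that check might look like this; the function body here is an assumption reconstructed from the old code, not a copy of gunicorn's implementation:

```python
import configparser
import os

def has_logging_config(config_file):
    # A paste .ini file carries logging configuration when it defines a
    # [loggers] section -- the entry point that the stdlib's
    # logging.config.fileConfig() looks for.
    parser = configparser.ConfigParser()
    parser.read([os.path.abspath(config_file)])
    return parser.has_section("loggers")
```

When this returns true, the diff sets `logconfig` to the same ini file, so gunicorn re-feeds it to `fileConfig` at startup.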


@@ -12,38 +12,44 @@ from gunicorn import util
class WSGIApplication(Application): class WSGIApplication(Application):
def init(self, parser, opts, args): def init(self, parser, opts, args):
self.app_uri = None
if opts.paste: if opts.paste:
app_name = 'main' from .pasterapp import has_logging_config
path = opts.paste
if '#' in path:
path, app_name = path.split('#')
path = os.path.abspath(os.path.normpath(
os.path.join(util.getcwd(), path)))
if not os.path.exists(path): config_uri = os.path.abspath(opts.paste)
raise ConfigError("%r not found" % path) config_file = config_uri.split('#')[0]
# paste application, load the config if not os.path.exists(config_file):
self.cfgurl = 'config:%s#%s' % (path, app_name) raise ConfigError("%r not found" % config_file)
self.relpath = os.path.dirname(path)
from .pasterapp import paste_config self.cfg.set("default_proc_name", config_file)
return paste_config(self.cfg, self.cfgurl, self.relpath) self.app_uri = config_uri
if not args: if has_logging_config(config_file):
parser.error("No application module specified.") self.cfg.set("logconfig", config_file)
self.cfg.set("default_proc_name", args[0]) return
self.app_uri = args[0]
if len(args) > 0:
self.cfg.set("default_proc_name", args[0])
self.app_uri = args[0]
def load_config(self):
super().load_config()
if self.app_uri is None:
if self.cfg.wsgi_app is not None:
self.app_uri = self.cfg.wsgi_app
else:
raise ConfigError("No application module specified.")
def load_wsgiapp(self): def load_wsgiapp(self):
# load the app
return util.import_app(self.app_uri) return util.import_app(self.app_uri)
def load_pasteapp(self): def load_pasteapp(self):
# load the paste app from .pasterapp import get_wsgi_app
from .pasterapp import load_pasteapp return get_wsgi_app(self.app_uri, defaults=self.cfg.paste_global_conf)
return load_pasteapp(self.cfgurl, self.relpath, global_conf=self.cfg.paste_global_conf)
def load(self): def load(self):
if self.cfg.paste is not None: if self.cfg.paste is not None:


@@ -154,7 +154,7 @@ class Arbiter(object):
self.LISTENERS = sock.create_sockets(self.cfg, self.log, fds) self.LISTENERS = sock.create_sockets(self.cfg, self.log, fds)
listeners_str = ",".join([str(l) for l in self.LISTENERS]) listeners_str = ",".join([str(lnr) for lnr in self.LISTENERS])
self.log.debug("Arbiter booted") self.log.debug("Arbiter booted")
self.log.info("Listening at: %s (%s)", listeners_str, self.pid) self.log.info("Listening at: %s (%s)", listeners_str, self.pid)
self.log.info("Using worker: %s", self.cfg.worker_class_str) self.log.info("Using worker: %s", self.cfg.worker_class_str)
@@ -223,17 +223,15 @@ class Arbiter(object):
self.log.info("Handling signal: %s", signame) self.log.info("Handling signal: %s", signame)
handler() handler()
self.wakeup() self.wakeup()
except StopIteration: except (StopIteration, KeyboardInterrupt):
self.halt()
except KeyboardInterrupt:
self.halt() self.halt()
except HaltServer as inst: except HaltServer as inst:
self.halt(reason=inst.reason, exit_status=inst.exit_status) self.halt(reason=inst.reason, exit_status=inst.exit_status)
except SystemExit: except SystemExit:
raise raise
except Exception: except Exception:
self.log.info("Unhandled exception in main loop", self.log.error("Unhandled exception in main loop",
exc_info=True) exc_info=True)
self.stop(False) self.stop(False)
if self.pidfile is not None: if self.pidfile is not None:
self.pidfile.unlink() self.pidfile.unlink()
@@ -297,8 +295,8 @@ class Arbiter(object):
def handle_usr2(self): def handle_usr2(self):
"""\ """\
SIGUSR2 handling. SIGUSR2 handling.
Creates a new master/worker set as a slave of the current Creates a new arbiter/worker set as a fork of the current
master without affecting old workers. Use this to do live arbiter without affecting old workers. Use this to do live
deployment with the ability to backout a change. deployment with the ability to backout a change.
""" """
self.reexec() self.reexec()
@@ -342,9 +340,12 @@ class Arbiter(object):
def halt(self, reason=None, exit_status=0): def halt(self, reason=None, exit_status=0):
""" halt arbiter """ """ halt arbiter """
self.stop() self.stop()
self.log.info("Shutting down: %s", self.master_name)
log_func = self.log.info if exit_status == 0 else self.log.error
log_func("Shutting down: %s", self.master_name)
if reason is not None: if reason is not None:
self.log.info("Reason: %s", reason) log_func("Reason: %s", reason)
if self.pidfile is not None: if self.pidfile is not None:
self.pidfile.unlink() self.pidfile.unlink()
self.cfg.on_exit(self) self.cfg.on_exit(self)
@@ -423,7 +424,7 @@ class Arbiter(object):
environ['LISTEN_FDS'] = str(len(self.LISTENERS)) environ['LISTEN_FDS'] = str(len(self.LISTENERS))
else: else:
environ['GUNICORN_FD'] = ','.join( environ['GUNICORN_FD'] = ','.join(
str(l.fileno()) for l in self.LISTENERS) str(lnr.fileno()) for lnr in self.LISTENERS)
os.chdir(self.START_CTX['cwd']) os.chdir(self.START_CTX['cwd'])
@@ -456,11 +457,11 @@ class Arbiter(object):
# do we need to change listener ? # do we need to change listener ?
if old_address != self.cfg.address: if old_address != self.cfg.address:
# close all listeners # close all listeners
for l in self.LISTENERS: for lnr in self.LISTENERS:
l.close() lnr.close()
# init new listeners # init new listeners
self.LISTENERS = sock.create_sockets(self.cfg, self.log) self.LISTENERS = sock.create_sockets(self.cfg, self.log)
listeners_str = ",".join([str(l) for l in self.LISTENERS]) listeners_str = ",".join([str(lnr) for lnr in self.LISTENERS])
self.log.info("Listening at: %s", listeners_str) self.log.info("Listening at: %s", listeners_str)
# do some actions on reload # do some actions on reload
@@ -522,6 +523,8 @@ class Arbiter(object):
# that it could not boot, we'll shut it down to avoid # that it could not boot, we'll shut it down to avoid
# infinite start/stop cycles. # infinite start/stop cycles.
exitcode = status >> 8 exitcode = status >> 8
if exitcode != 0:
self.log.error('Worker (pid:%s) exited with code %s', wpid, exitcode)
if exitcode == self.WORKER_BOOT_ERROR: if exitcode == self.WORKER_BOOT_ERROR:
reason = "Worker failed to boot." reason = "Worker failed to boot."
raise HaltServer(reason, self.WORKER_BOOT_ERROR) raise HaltServer(reason, self.WORKER_BOOT_ERROR)
@@ -529,6 +532,27 @@ class Arbiter(object):
reason = "App failed to load." reason = "App failed to load."
raise HaltServer(reason, self.APP_LOAD_ERROR) raise HaltServer(reason, self.APP_LOAD_ERROR)
if exitcode > 0:
# If the exit code of the worker is greater than 0,
# let the user know.
self.log.error("Worker (pid:%s) exited with code %s.",
wpid, exitcode)
elif status > 0:
# If the exit code of the worker is 0 and the status
# is greater than 0, then it was most likely killed
# via a signal.
try:
sig_name = signal.Signals(status).name
except ValueError:
sig_name = "code {}".format(status)
msg = "Worker (pid:{}) was sent {}!".format(
wpid, sig_name)
# Additional hint for SIGKILL
if status == signal.SIGKILL:
msg += " Perhaps out of memory?"
self.log.error(msg)
worker = self.WORKERS.pop(wpid, None) worker = self.WORKERS.pop(wpid, None)
if not worker: if not worker:
continue continue
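The new reaping logic above distinguishes a worker that exited with a non-zero code from one that was killed by a signal, and adds an out-of-memory hint for SIGKILL. Pulled out of the arbiter into a standalone function for illustration (the name `describe_worker_exit` and the return-a-string shape are assumptions; the real code logs directly), the decision can be sketched as:

```python
import signal

def describe_worker_exit(wpid, status):
    # status is the 16-bit value from os.waitpid(): the high byte holds
    # the exit code, the low byte (when non-zero) the terminating signal.
    exitcode = status >> 8
    if exitcode > 0:
        # Worker exited on its own with a failure code.
        return "Worker (pid:%s) exited with code %s." % (wpid, exitcode)
    if status > 0:
        # Exit code 0 but non-zero status: killed by a signal.
        try:
            sig_name = signal.Signals(status).name
        except ValueError:
            sig_name = "code {}".format(status)
        msg = "Worker (pid:{}) was sent {}!".format(wpid, sig_name)
        # Additional hint for SIGKILL, which the OOM killer uses.
        if status == signal.SIGKILL:
            msg += " Perhaps out of memory?"
        return msg
    return "Worker (pid:%s) exited normally." % wpid
```

Note the `signal.SIGKILL` branch is Unix-only, matching the test matrix comment that Windows is unsupported.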
@@ -592,7 +616,7 @@ class Arbiter(object):
print("%s" % e, file=sys.stderr) print("%s" % e, file=sys.stderr)
sys.stderr.flush() sys.stderr.flush()
sys.exit(self.APP_LOAD_ERROR) sys.exit(self.APP_LOAD_ERROR)
except: except Exception:
self.log.exception("Exception in worker process") self.log.exception("Exception in worker process")
if not worker.booted: if not worker.booted:
sys.exit(self.WORKER_BOOT_ERROR) sys.exit(self.WORKER_BOOT_ERROR)
@@ -602,9 +626,9 @@ class Arbiter(object):
try: try:
worker.tmp.close() worker.tmp.close()
self.cfg.worker_exit(self, worker) self.cfg.worker_exit(self, worker)
except: except Exception:
self.log.warning("Exception during worker exit:\n%s", self.log.warning("Exception during worker exit:\n%s",
traceback.format_exc()) traceback.format_exc())
def spawn_workers(self): def spawn_workers(self):
"""\ """\


@@ -51,6 +51,16 @@ class Config(object):
self.prog = prog or os.path.basename(sys.argv[0]) self.prog = prog or os.path.basename(sys.argv[0])
self.env_orig = os.environ.copy() self.env_orig = os.environ.copy()
def __str__(self):
lines = []
kmax = max(len(k) for k in self.settings)
for k in sorted(self.settings):
v = self.settings[k].value
if callable(v):
v = "<{}()>".format(v.__qualname__)
lines.append("{k:{kmax}} = {v}".format(k=k, v=v, kmax=kmax))
return "\n".join(lines)
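The new `Config.__str__` added here supports the `--print-config` option introduced later in this diff: it right-pads every setting name to the longest key so the values line up in a column. A standalone sketch of the same rendering (the helper name `render_settings` and the plain-dict input are assumptions for illustration):

```python
def render_settings(settings):
    # Pad keys to the widest name so "key = value" lines align,
    # mirroring the new Config.__str__ in this diff.
    kmax = max(len(k) for k in settings)
    lines = []
    for k in sorted(settings):
        v = settings[k]
        if callable(v):
            # Callables (e.g. hook functions) render as <qualname()>.
            v = "<{}()>".format(v.__qualname__)
        lines.append("{k:{kmax}} = {v}".format(k=k, v=v, kmax=kmax))
    return "\n".join(lines)
```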
def __getattr__(self, name): def __getattr__(self, name):
if name not in self.settings: if name not in self.settings:
raise AttributeError("No configuration setting for: %s" % name) raise AttributeError("No configuration setting for: %s" % name)
@@ -59,7 +69,7 @@ class Config(object):
def __setattr__(self, name, value): def __setattr__(self, name, value):
if name != "settings" and name in self.settings: if name != "settings" and name in self.settings:
raise AttributeError("Invalid access!") raise AttributeError("Invalid access!")
super(Config, self).__setattr__(name, value) super().__setattr__(name, value)
def set(self, name, value): def set(self, name, value):
if name not in self.settings: if name not in self.settings:
@@ -78,9 +88,9 @@ class Config(object):
} }
parser = argparse.ArgumentParser(**kwargs) parser = argparse.ArgumentParser(**kwargs)
parser.add_argument("-v", "--version", parser.add_argument("-v", "--version",
action="version", default=argparse.SUPPRESS, action="version", default=argparse.SUPPRESS,
version="%(prog)s (version " + __version__ + ")\n", version="%(prog)s (version " + __version__ + ")\n",
help="show program's version number and exit") help="show program's version number and exit")
parser.add_argument("args", nargs="*", help=argparse.SUPPRESS) parser.add_argument("args", nargs="*", help=argparse.SUPPRESS)
keys = sorted(self.settings, key=self.settings.__getitem__) keys = sorted(self.settings, key=self.settings.__getitem__)
@@ -93,17 +103,17 @@ class Config(object):
def worker_class_str(self): def worker_class_str(self):
uri = self.settings['worker_class'].get() uri = self.settings['worker_class'].get()
## are we using a threaded worker? # are we using a threaded worker?
is_sync = uri.endswith('SyncWorker') or uri == 'sync' is_sync = uri.endswith('SyncWorker') or uri == 'sync'
if is_sync and self.threads > 1: if is_sync and self.threads > 1:
return "threads" return "gthread"
return uri return uri
@property @property
def worker_class(self): def worker_class(self):
uri = self.settings['worker_class'].get() uri = self.settings['worker_class'].get()
## are we using a threaded worker? # are we using a threaded worker?
is_sync = uri.endswith('SyncWorker') or uri == 'sync' is_sync = uri.endswith('SyncWorker') or uri == 'sync'
if is_sync and self.threads > 1: if is_sync and self.threads > 1:
uri = "gunicorn.workers.gthread.ThreadWorker" uri = "gunicorn.workers.gthread.ThreadWorker"
@@ -224,7 +234,7 @@ class Config(object):
class SettingMeta(type): class SettingMeta(type):
def __new__(cls, name, bases, attrs): def __new__(cls, name, bases, attrs):
super_new = super(SettingMeta, cls).__new__ super_new = super().__new__
parents = [b for b in bases if isinstance(b, SettingMeta)] parents = [b for b in bases if isinstance(b, SettingMeta)]
if not parents: if not parents:
return super_new(cls, name, bases, attrs) return super_new(cls, name, bases, attrs)
@@ -308,6 +318,15 @@ class Setting(object):
self.order < other.order) self.order < other.order)
__cmp__ = __lt__ __cmp__ = __lt__
def __repr__(self):
return "<%s.%s object at %x with value %r>" % (
self.__class__.__module__,
self.__class__.__name__,
id(self),
self.value,
)
Setting = SettingMeta('Setting', (Setting,), {}) Setting = SettingMeta('Setting', (Setting,), {})
@@ -345,25 +364,9 @@ def validate_pos_int(val):
def validate_ssl_version(val): def validate_ssl_version(val):
ssl_versions = {} if val != SSLVersion.default:
for protocol in [p for p in dir(ssl) if p.startswith("PROTOCOL_")]: sys.stderr.write("Warning: option `ssl_version` is deprecated and it is ignored. Use ssl_context instead.\n")
ssl_versions[protocol[9:]] = getattr(ssl, protocol) return val
if val in ssl_versions:
# string matching PROTOCOL_...
return ssl_versions[val]
try:
intval = validate_pos_int(val)
if intval in ssl_versions.values():
# positive int matching a protocol int constant
return intval
except (ValueError, TypeError):
# negative integer or not an integer
# drop this in favour of the more descriptive ValueError below
pass
raise ValueError("Invalid ssl_version: %s. Valid options: %s"
% (val, ', '.join(ssl_versions)))
def validate_string(val): def validate_string(val):
@@ -429,7 +432,7 @@ def validate_callable(arity):
raise TypeError(str(e)) raise TypeError(str(e))
except AttributeError: except AttributeError:
raise TypeError("Can not load '%s' from '%s'" raise TypeError("Can not load '%s' from '%s'"
"" % (obj_name, mod_name)) "" % (obj_name, mod_name))
if not callable(val): if not callable(val):
raise TypeError("Value is not callable: %s" % val) raise TypeError("Value is not callable: %s" % val)
if arity != -1 and arity != util.get_arity(val): if arity != -1 and arity != util.get_arity(val):
@@ -495,15 +498,25 @@ def validate_chdir(val):
return path return path
def validate_hostport(val): def validate_statsd_address(val):
val = validate_string(val) val = validate_string(val)
if val is None: if val is None:
return None return None
elements = val.split(":")
if len(elements) == 2: # As of major release 20, util.parse_address would recognize unix:PORT
return (elements[0], int(elements[1])) # as a UDS address, breaking backwards compatibility. We defend against
else: # that regression here (this is also unit-tested).
raise TypeError("Value must consist of: hostname:port") # Feel free to remove in the next major release.
unix_hostname_regression = re.match(r'^unix:(\d+)$', val)
if unix_hostname_regression:
return ('unix', int(unix_hostname_regression.group(1)))
try:
address = util.parse_address(val, default_port='8125')
except RuntimeError:
raise TypeError("Value must be one of ('host:port', 'unix://PATH')")
return address
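The new `validate_statsd_address` above guards a documented regression: since major release 20, `util.parse_address` would treat `unix:PORT` as a unix-domain socket, so the validator special-cases it back to a `('unix', port)` host/port pair. A self-contained sketch of that behaviour follows; it reimplements only the TCP `host:port` fallback inline instead of delegating to `gunicorn.util.parse_address` (so `unix://PATH` sockets are out of scope here), which is an assumption of this sketch:

```python
import re

def validate_statsd_address(val):
    if val is None:
        return None
    # Pre-20 compatibility: "unix:8125" means host "unix", port 8125,
    # not a unix-domain socket. Unit-tested in gunicorn for this reason.
    unix_hostname_regression = re.match(r'^unix:(\d+)$', val)
    if unix_hostname_regression:
        return ('unix', int(unix_hostname_regression.group(1)))
    # Minimal host:port parse with the statsd default port 8125.
    host, _, port = val.partition(':')
    return (host, int(port) if port else 8125)
```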
def validate_reload_engine(val): def validate_reload_engine(val):
@@ -515,7 +528,7 @@ def validate_reload_engine(val):
def get_default_config_file(): def get_default_config_file():
config_path = os.path.join(os.path.abspath(os.getcwd()), config_path = os.path.join(os.path.abspath(os.getcwd()),
'gunicorn.conf.py') 'gunicorn.conf.py')
if os.path.exists(config_path): if os.path.exists(config_path):
return config_path return config_path
return None return None
@@ -527,20 +540,37 @@ class ConfigFile(Setting):
cli = ["-c", "--config"] cli = ["-c", "--config"]
meta = "CONFIG" meta = "CONFIG"
validator = validate_string validator = validate_string
default = None default = "./gunicorn.conf.py"
desc = """\ desc = """\
The Gunicorn config file. :ref:`The Gunicorn config file<configuration_file>`.
A string of the form ``PATH``, ``file:PATH``, or ``python:MODULE_NAME``. A string of the form ``PATH``, ``file:PATH``, or ``python:MODULE_NAME``.
Only has an effect when specified on the command line or as part of an Only has an effect when specified on the command line or as part of an
application specific configuration. application specific configuration.
By default, a file named ``gunicorn.conf.py`` will be read from the same
directory where gunicorn is being run.
.. versionchanged:: 19.4 .. versionchanged:: 19.4
Loading the config from a Python module requires the ``python:`` Loading the config from a Python module requires the ``python:``
prefix. prefix.
""" """
class WSGIApp(Setting):
name = "wsgi_app"
section = "Config File"
meta = "STRING"
validator = validate_string
default = None
desc = """\
A WSGI application path in pattern ``$(MODULE_NAME):$(VARIABLE_NAME)``.
.. versionadded:: 20.1.0
"""
class Bind(Setting): class Bind(Setting):
name = "bind" name = "bind"
action = "append" action = "append"
@@ -569,6 +599,10 @@ class Bind(Setting):
will bind the `test:app` application on localhost both on ipv6 will bind the `test:app` application on localhost both on ipv6
and ipv4 interfaces. and ipv4 interfaces.
If the ``PORT`` environment variable is defined, the default
is ``['0.0.0.0:$PORT']``. If it is not defined, the default
is ``['127.0.0.1:8000']``.
""" """
@@ -607,8 +641,9 @@ class Workers(Setting):
You'll want to vary this a bit to find the best for your particular You'll want to vary this a bit to find the best for your particular
application's work load. application's work load.
By default, the value of the ``WEB_CONCURRENCY`` environment variable. By default, the value of the ``WEB_CONCURRENCY`` environment variable,
If it is not defined, the default is ``1``. which is set by some Platform-as-a-Service providers such as Heroku. If
it is not defined, the default is ``1``.
""" """
@@ -625,32 +660,27 @@ class WorkerClass(Setting):
The default class (``sync``) should handle most "normal" types of The default class (``sync``) should handle most "normal" types of
workloads. You'll want to read :doc:`design` for information on when workloads. You'll want to read :doc:`design` for information on when
you might want to choose one of the other worker classes. Required you might want to choose one of the other worker classes. Required
libraries may be installed using setuptools' ``extra_require`` feature. libraries may be installed using setuptools' ``extras_require`` feature.
A string referring to one of the following bundled classes: A string referring to one of the following bundled classes:
* ``sync`` * ``sync``
* ``eventlet`` - Requires eventlet >= 0.9.7 (or install it via * ``eventlet`` - Requires eventlet >= 0.24.1 (or install it via
``pip install gunicorn[eventlet]``) ``pip install gunicorn[eventlet]``)
* ``gevent`` - Requires gevent >= 0.13 (or install it via * ``gevent`` - Requires gevent >= 1.4 (or install it via
``pip install gunicorn[gevent]``) ``pip install gunicorn[gevent]``)
* ``tornado`` - Requires tornado >= 0.2 (or install it via * ``tornado`` - Requires tornado >= 0.2 (or install it via
``pip install gunicorn[tornado]``) ``pip install gunicorn[tornado]``)
* ``gthread`` - Python 2 requires the futures package to be installed * ``gthread`` - Python 2 requires the futures package to be installed
(or install it via ``pip install gunicorn[gthread]``) (or install it via ``pip install gunicorn[gthread]``)
* ``gaiohttp`` - Deprecated.
Optionally, you can provide your own worker by giving Gunicorn a Optionally, you can provide your own worker by giving Gunicorn a
Python path to a subclass of ``gunicorn.workers.base.Worker``. Python path to a subclass of ``gunicorn.workers.base.Worker``.
This alternative syntax will load the gevent class: This alternative syntax will load the gevent class:
``gunicorn.workers.ggevent.GeventWorker``. ``gunicorn.workers.ggevent.GeventWorker``.
.. deprecated:: 19.8
The ``gaiohttp`` worker is deprecated. Please use
``aiohttp.worker.GunicornWebWorker`` instead. See
:ref:`asyncio-workers` for more information on how to use it.
""" """
class WorkerThreads(Setting): class WorkerThreads(Setting):
name = "threads" name = "threads"
section = "Worker Processes" section = "Worker Processes"
@@ -671,7 +701,7 @@ class WorkerThreads(Setting):
If it is not defined, the default is ``1``. If it is not defined, the default is ``1``.
This setting only affects the Gthread worker type. This setting only affects the Gthread worker type.
.. note:: .. note::
If you try to use the ``sync`` worker type and set the ``threads`` If you try to use the ``sync`` worker type and set the ``threads``
setting to more than 1, the ``gthread`` worker type will be used setting to more than 1, the ``gthread`` worker type will be used
@@ -690,7 +720,7 @@ class WorkerConnections(Setting):
desc = """\ desc = """\
The maximum number of simultaneous clients. The maximum number of simultaneous clients.
This setting only affects the Eventlet and Gevent worker types. This setting only affects the ``gthread``, ``eventlet`` and ``gevent`` worker types.
""" """
@@ -744,10 +774,14 @@ class Timeout(Setting):
desc = """\ desc = """\
Workers silent for more than this many seconds are killed and restarted. Workers silent for more than this many seconds are killed and restarted.
Generally set to thirty seconds. Only set this noticeably higher if Value is a positive number or 0. Setting it to 0 has the effect of
you're sure of the repercussions for sync workers. For the non sync infinite timeouts by disabling timeouts for all workers entirely.
workers it just means that the worker process is still communicating and
is not tied to the length of time required to handle a single request. Generally, the default of thirty seconds should suffice. Only set this
noticeably higher if you're sure of the repercussions for sync workers.
For the non sync workers it just means that the worker process is still
communicating and is not tied to the length of time required to handle a
single request.
""" """
@@ -892,9 +926,9 @@ class ReloadEngine(Setting):
Valid engines are: Valid engines are:
* 'auto' * ``'auto'``
* 'poll' * ``'poll'``
* 'inotify' (requires inotify) * ``'inotify'`` (requires inotify)
.. versionadded:: 19.7 .. versionadded:: 19.7
""" """
@@ -938,7 +972,20 @@ class ConfigCheck(Setting):
action = "store_true" action = "store_true"
default = False default = False
desc = """\ desc = """\
Check the configuration. Check the configuration and exit. The exit status is 0 if the
configuration is correct, and 1 if the configuration is incorrect.
"""
class PrintConfig(Setting):
name = "print_config"
section = "Debugging"
cli = ["--print-config"]
validator = validate_bool
action = "store_true"
default = False
desc = """\
Print the configuration settings as fully resolved. Implies :ref:`check-config`.
""" """
@@ -1003,8 +1050,9 @@ class Chdir(Setting):
cli = ["--chdir"] cli = ["--chdir"]
validator = validate_chdir validator = validate_chdir
default = util.getcwd() default = util.getcwd()
default_doc = "``'.'``"
desc = """\ desc = """\
Chdir to specified directory before apps loading. Change directory to specified directory before loading apps.
""" """
@@ -1022,6 +1070,7 @@ class Daemon(Setting):
background. background.
""" """
class Env(Setting): class Env(Setting):
name = "raw_env" name = "raw_env"
action = "append" action = "append"
@@ -1032,13 +1081,21 @@ class Env(Setting):
default = [] default = []
desc = """\ desc = """\
Set environment variable (key=value). Set environment variables in the execution environment.
Pass variables to the execution environment. Ex.:: Should be a list of strings in the ``key=value`` format.
For example on the command line:
.. code-block:: console
$ gunicorn -b 127.0.0.1:8000 --env FOO=1 test:app $ gunicorn -b 127.0.0.1:8000 --env FOO=1 test:app
and test for the foo variable environment in your application. Or in the configuration file:
.. code-block:: python
raw_env = ["FOO=1"]
""" """
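The rewritten `raw_env` documentation above says each entry is a ``key=value`` string. How such a list is folded into the process environment can be sketched as below; the helper name `apply_raw_env` is an assumption for illustration (gunicorn applies these internally when setting up the worker environment):

```python
def apply_raw_env(raw_env, environ):
    # Split each "KEY=VALUE" entry on the first '=' only, so values may
    # themselves contain '=' signs (e.g. connection strings).
    for entry in raw_env:
        key, _, value = entry.partition('=')
        environ[key] = value
    return environ
```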
@@ -1055,6 +1112,7 @@ class Pidfile(Setting):
If not set, no PID file will be written. If not set, no PID file will be written.
""" """
class WorkerTmpDir(Setting): class WorkerTmpDir(Setting):
name = "worker_tmp_dir" name = "worker_tmp_dir"
section = "Server Mechanics" section = "Server Mechanics"
@@ -1084,6 +1142,7 @@ class User(Setting):
meta = "USER" meta = "USER"
validator = validate_user validator = validate_user
default = os.geteuid() default = os.geteuid()
default_doc = "``os.geteuid()``"
desc = """\ desc = """\
Switch worker processes to run as this user. Switch worker processes to run as this user.
@@ -1100,6 +1159,7 @@ class Group(Setting):
meta = "GROUP" meta = "GROUP"
validator = validate_group validator = validate_group
default = os.getegid() default = os.getegid()
default_doc = "``os.getegid()``"
desc = """\ desc = """\
Switch worker process to run as this group. Switch worker process to run as this group.
@@ -1108,6 +1168,7 @@ class Group(Setting):
change the worker processes group. change the worker processes group.
""" """
class Umask(Setting): class Umask(Setting):
name = "umask" name = "umask"
section = "Server Mechanics" section = "Server Mechanics"
@@ -1174,10 +1235,16 @@ class SecureSchemeHeader(Setting):
desc = """\ desc = """\
A dictionary containing headers and values that the front-end proxy A dictionary containing headers and values that the front-end proxy
uses to indicate HTTPS requests. These tell Gunicorn to set uses to indicate HTTPS requests. If the source IP is permitted by
``forwarded-allow-ips`` (below), *and* at least one request header matches
a key-value pair listed in this dictionary, then Gunicorn will set
``wsgi.url_scheme`` to ``https``, so your application can tell that the ``wsgi.url_scheme`` to ``https``, so your application can tell that the
request is secure. request is secure.
If the other headers listed in this dictionary are not present in the request, they will be ignored,
but if the other headers are present and do not match the provided values, then
the request will fail to parse. See the note below for more detailed examples of this behaviour.
The dictionary should map upper-case header names to exact string The dictionary should map upper-case header names to exact string
values. The value comparisons are case-sensitive, unlike the header values. The value comparisons are case-sensitive, unlike the header
names, so make sure they're exactly what your front-end proxy sends names, so make sure they're exactly what your front-end proxy sends
@@ -1205,6 +1272,71 @@ class ForwardedAllowIPS(Setting):
By default, the value of the ``FORWARDED_ALLOW_IPS`` environment By default, the value of the ``FORWARDED_ALLOW_IPS`` environment
variable. If it is not defined, the default is ``"127.0.0.1"``. variable. If it is not defined, the default is ``"127.0.0.1"``.
.. note::
The interplay between the request headers, the value of ``forwarded_allow_ips``, and the value of
``secure_scheme_headers`` is complex. Various scenarios are documented below to further elaborate.
In each case, we have a request from the remote address 134.213.44.18, and the default value of
``secure_scheme_headers``:
.. code::
secure_scheme_headers = {
'X-FORWARDED-PROTOCOL': 'ssl',
'X-FORWARDED-PROTO': 'https',
'X-FORWARDED-SSL': 'on'
}
.. list-table::
:header-rows: 1
:align: center
:widths: auto
* - ``forwarded-allow-ips``
- Secure Request Headers
- Result
- Explanation
* - .. code::
["127.0.0.1"]
- .. code::
X-Forwarded-Proto: https
- .. code::
wsgi.url_scheme = "http"
- IP address was not allowed
* - .. code::
"*"
- <none>
- .. code::
wsgi.url_scheme = "http"
- IP address allowed, but no secure headers provided
* - .. code::
"*"
- .. code::
X-Forwarded-Proto: https
- .. code::
wsgi.url_scheme = "https"
- IP address allowed, one request header matched
* - .. code::
["134.213.44.18"]
- .. code::
X-Forwarded-Ssl: on
X-Forwarded-Proto: http
- ``InvalidSchemeHeaders()`` raised
- IP address allowed, but the two secure headers disagreed on if HTTPS was used
""" """
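The four scenarios in the table above can be condensed into one decision function. This is a sketch of the documented interplay, not gunicorn's parser: the function name `url_scheme` and its signature are assumptions, and it models only the cases the table covers (a mix of matching and non-matching secure headers raises, a lone non-matching header simply does not vote for https):

```python
SECURE_SCHEME_HEADERS = {
    'X-FORWARDED-PROTOCOL': 'ssl',
    'X-FORWARDED-PROTO': 'https',
    'X-FORWARDED-SSL': 'on',
}

class InvalidSchemeHeaders(Exception):
    pass

def url_scheme(remote_ip, headers, allow_ips):
    # Untrusted peer: ignore scheme headers entirely.
    if allow_ips != '*' and remote_ip not in allow_ips:
        return 'http'
    # Collect one boolean "vote" per scheme header that is present:
    # True when its value matches the configured secure value.
    votes = {SECURE_SCHEME_HEADERS[name.upper()] == value
             for name, value in headers.items()
             if name.upper() in SECURE_SCHEME_HEADERS}
    if votes == {True}:
        return 'https'
    if votes == {True, False}:
        # Headers disagree on whether HTTPS was used.
        raise InvalidSchemeHeaders()
    return 'http'
```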
@@ -1221,6 +1353,7 @@ class AccessLog(Setting):
``'-'`` means log to stdout. ``'-'`` means log to stdout.
""" """
class DisableRedirectAccessToSyslog(Setting): class DisableRedirectAccessToSyslog(Setting):
name = "disable_redirect_access_to_syslog" name = "disable_redirect_access_to_syslog"
section = "Logging" section = "Logging"
@@ -1263,6 +1396,7 @@ class AccessLogFormat(Setting):
f referer f referer
a user agent a user agent
T request time in seconds T request time in seconds
M request time in milliseconds
D request time in microseconds D request time in microseconds
L request time in decimal seconds L request time in decimal seconds
p process ID p process ID
@@ -1308,11 +1442,11 @@ class Loglevel(Setting):
Valid level names are: Valid level names are:
* debug * ``'debug'``
* info * ``'info'``
* warning * ``'warning'``
* error * ``'error'``
* critical * ``'critical'``
""" """
@@ -1340,11 +1474,11 @@ class LoggerClass(Setting):
desc = """\ desc = """\
The logger you want to use to log events in Gunicorn. The logger you want to use to log events in Gunicorn.
The default class (``gunicorn.glogging.Logger``) handle most of The default class (``gunicorn.glogging.Logger``) handles most
normal usages in logging. It provides error and access logging. normal usages in logging. It provides error and access logging.
You can provide your own logger by giving Gunicorn a You can provide your own logger by giving Gunicorn a Python path to a
Python path to a subclass like ``gunicorn.glogging.Logger``. class that quacks like ``gunicorn.glogging.Logger``.
""" """
@@ -1365,21 +1499,40 @@ class LogConfig(Setting):
class LogConfigDict(Setting): class LogConfigDict(Setting):
name = "logconfig_dict" name = "logconfig_dict"
section = "Logging" section = "Logging"
cli = ["--log-config-dict"]
validator = validate_dict validator = validate_dict
default = {} default = {}
desc = """\ desc = """\
The log config dictionary to use, using the standard Python The log config dictionary to use, using the standard Python
logging module's dictionary configuration format. This option logging module's dictionary configuration format. This option
takes precedence over the :ref:`logconfig` option, which uses the takes precedence over the :ref:`logconfig` and :ref:`logConfigJson` options,
older file configuration format. which uses the older file configuration format and JSON
respectively.
Format: https://docs.python.org/3/library/logging.config.html#logging.config.dictConfig Format: https://docs.python.org/3/library/logging.config.html#logging.config.dictConfig
For more context you can look at the default configuration dictionary for logging,
which can be found at ``gunicorn.glogging.CONFIG_DEFAULTS``.
.. versionadded:: 19.8 .. versionadded:: 19.8
""" """
class LogConfigJson(Setting):
name = "logconfig_json"
section = "Logging"
cli = ["--log-config-json"]
meta = "FILE"
validator = validate_string
default = None
desc = """\
The log config to read from a JSON file
Format: https://docs.python.org/3/library/logging.config.html#logging.config.dictConfig
.. versionadded:: 20.0
"""
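For illustration, a ``--log-config-json`` file uses the same schema as ``logging.config.dictConfig``, since the loaded JSON is merged into ``CONFIG_DEFAULTS`` and then applied with ``dictConfig()``. A minimal sketch (the file contents and logger tweak are hypothetical):

```python
import json
import logging
import logging.config
import tempfile

# Hypothetical --log-config-json file contents; the schema is the
# dictConfig dictionary schema, serialized as JSON.
config_json = {
    "version": 1,
    "disable_existing_loggers": False,
    "loggers": {
        "gunicorn.error": {"level": "DEBUG", "handlers": [], "propagate": True},
    },
}

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    json.dump(config_json, fh)
    path = fh.name

# Mirrors what Logger.setup() does for logconfig_json: load, then dictConfig.
with open(path) as fh:
    logging.config.dictConfig(json.load(fh))
```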
class SyslogTo(Setting): class SyslogTo(Setting):
name = "syslog_addr" name = "syslog_addr"
section = "Logging" section = "Logging"
@ -1477,13 +1630,35 @@ class StatsdHost(Setting):
cli = ["--statsd-host"] cli = ["--statsd-host"]
meta = "STATSD_ADDR" meta = "STATSD_ADDR"
default = None default = None
validator = validate_hostport validator = validate_statsd_address
desc = """\ desc = """\
``host:port`` of the statsd server to log to. The address of the StatsD server to log to.
Address is a string of the form:
* ``unix://PATH`` : for a unix domain socket.
* ``HOST:PORT`` : for a network address
.. versionadded:: 19.1 .. versionadded:: 19.1
""" """
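The two accepted address forms can be sketched as follows; ``parse_statsd_addr`` is a hypothetical helper for illustration, not the real ``validate_statsd_address``:

```python
def parse_statsd_addr(addr):
    # unix domain socket: everything after the scheme is the path
    if addr.startswith("unix://"):
        return addr[len("unix://"):]
    # network address: split on the last ":" so the port is separated cleanly
    host, _, port = addr.rpartition(":")
    return (host, int(port))

assert parse_statsd_addr("unix:///var/run/statsd.sock") == "/var/run/statsd.sock"
assert parse_statsd_addr("127.0.0.1:8125") == ("127.0.0.1", 8125)
```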
# Datadog Statsd (dogstatsd) tags. https://docs.datadoghq.com/developers/dogstatsd/
class DogstatsdTags(Setting):
name = "dogstatsd_tags"
section = "Logging"
cli = ["--dogstatsd-tags"]
meta = "DOGSTATSD_TAGS"
default = ""
validator = validate_string
desc = """\
A comma-delimited list of datadog statsd (dogstatsd) tags to append to
statsd metrics.
.. versionadded:: 20
"""
class StatsdPrefix(Setting): class StatsdPrefix(Setting):
name = "statsd_prefix" name = "statsd_prefix"
section = "Logging" section = "Logging"
@ -1659,6 +1834,7 @@ class PostWorkerInit(Setting):
Worker. Worker.
""" """
class WorkerInt(Setting): class WorkerInt(Setting):
name = "worker_int" name = "worker_int"
section = "Server Hooks" section = "Server Hooks"
@ -1720,7 +1896,7 @@ class PreRequest(Setting):
type = callable type = callable
def pre_request(worker, req): def pre_request(worker, req):
worker.log.debug("%s %s" % (req.method, req.path)) worker.log.debug("%s %s", req.method, req.path)
default = staticmethod(pre_request) default = staticmethod(pre_request)
desc = """\ desc = """\
Called just before a worker processes the request. Called just before a worker processes the request.
@ -1802,6 +1978,7 @@ class NumWorkersChanged(Setting):
be ``None``. be ``None``.
""" """
class OnExit(Setting): class OnExit(Setting):
name = "on_exit" name = "on_exit"
section = "Server Hooks" section = "Server Hooks"
@ -1818,6 +1995,41 @@ class OnExit(Setting):
""" """
class NewSSLContext(Setting):
name = "ssl_context"
section = "Server Hooks"
validator = validate_callable(2)
type = callable
def ssl_context(config, default_ssl_context_factory):
return default_ssl_context_factory()
default = staticmethod(ssl_context)
desc = """\
Called when SSLContext is needed.
Allows customizing SSL context.
The callable needs to accept the Config instance and a factory
function that returns the default SSLContext, which is initialized
with certificates, private key, cert_reqs, and ciphers according to
the config and can be further customized by the callable.
The callable needs to return an SSLContext object.
The following example shows a configuration file that sets the minimum TLS version to 1.3:
.. code-block:: python
def ssl_context(conf, default_ssl_context_factory):
import ssl
context = default_ssl_context_factory()
context.minimum_version = ssl.TLSVersion.TLSv1_3
return context
.. versionadded:: 20.2
"""
class ProxyProtocol(Setting): class ProxyProtocol(Setting):
name = "proxy_protocol" name = "proxy_protocol"
section = "Server Mechanics" section = "Server Mechanics"
@ -1882,14 +2094,24 @@ class CertFile(Setting):
SSL certificate file SSL certificate file
""" """
class SSLVersion(Setting): class SSLVersion(Setting):
name = "ssl_version" name = "ssl_version"
section = "SSL" section = "SSL"
cli = ["--ssl-version"] cli = ["--ssl-version"]
validator = validate_ssl_version validator = validate_ssl_version
if hasattr(ssl, "PROTOCOL_TLS"):
default = ssl.PROTOCOL_TLS
else:
default = ssl.PROTOCOL_SSLv23
default = ssl.PROTOCOL_SSLv23 default = ssl.PROTOCOL_SSLv23
desc = """\ desc = """\
SSL version to use. SSL version to use (see stdlib ssl module's).
.. deprecated:: 20.2
The option is deprecated and it is currently ignored. Use :ref:`ssl-context` instead.
============= ============ ============= ============
--ssl-version Description --ssl-version Description
@ -1912,8 +2134,12 @@ class SSLVersion(Setting):
.. versionchanged:: 20.0 .. versionchanged:: 20.0
This setting now accepts string names based on ``ssl.PROTOCOL_`` This setting now accepts string names based on ``ssl.PROTOCOL_``
constants. constants.
.. versionchanged:: 20.0.1
The default value has been changed from ``ssl.PROTOCOL_SSLv23`` to
``ssl.PROTOCOL_TLS`` when Python >= 3.6 .
""" """
class CertReqs(Setting): class CertReqs(Setting):
name = "cert_reqs" name = "cert_reqs"
section = "SSL" section = "SSL"
@ -1922,8 +2148,17 @@ class CertReqs(Setting):
default = ssl.CERT_NONE default = ssl.CERT_NONE
desc = """\ desc = """\
Whether client certificate is required (see stdlib ssl module's) Whether client certificate is required (see stdlib ssl module's)
=========== ===========================
--cert-reqs Description
=========== ===========================
`0` no client verification
`1` ssl.CERT_OPTIONAL
`2` ssl.CERT_REQUIRED
=========== ===========================
""" """
class CACerts(Setting): class CACerts(Setting):
name = "ca_certs" name = "ca_certs"
section = "SSL" section = "SSL"
@ -1935,6 +2170,7 @@ class CACerts(Setting):
CA certificates file CA certificates file
""" """
class SuppressRaggedEOFs(Setting): class SuppressRaggedEOFs(Setting):
name = "suppress_ragged_eofs" name = "suppress_ragged_eofs"
section = "SSL" section = "SSL"
@ -1946,6 +2182,7 @@ class SuppressRaggedEOFs(Setting):
Suppress ragged EOFs (see stdlib ssl module's) Suppress ragged EOFs (see stdlib ssl module's)
""" """
class DoHandshakeOnConnect(Setting): class DoHandshakeOnConnect(Setting):
name = "do_handshake_on_connect" name = "do_handshake_on_connect"
section = "SSL" section = "SSL"
@ -1963,9 +2200,22 @@ class Ciphers(Setting):
section = "SSL" section = "SSL"
cli = ["--ciphers"] cli = ["--ciphers"]
validator = validate_string validator = validate_string
default = 'TLSv1' default = None
desc = """\ desc = """\
Ciphers to use (see stdlib ssl module's) SSL Cipher suite to use, in the format of an OpenSSL cipher list.
By default we use the default cipher list from Python's ``ssl`` module,
which contains ciphers considered strong at the time of each Python
release.
As a recommended alternative, the Open Web App Security Project (OWASP)
offers `a vetted set of strong cipher strings rated A+ to C-
<https://www.owasp.org/index.php/TLS_Cipher_String_Cheat_Sheet>`_.
OWASP provides details on user-agent compatibility at each security level.
See the `OpenSSL Cipher List Format Documentation
<https://www.openssl.org/docs/manmaster/man1/ciphers.html#CIPHER-LIST-FORMAT>`_
for details on the format of an OpenSSL cipher list.
""" """
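Leaving the setting at its default (``None``) keeps Python's own defaults; an explicit cipher list in OpenSSL format can be applied to a context as below. The string ``"ECDHE+AESGCM"`` is only an illustrative cipher-list expression, not a recommendation:

```python
import ssl

# Apply an OpenSSL-format cipher list to a server-side TLS context.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.set_ciphers("ECDHE+AESGCM")

# The context now reports the negotiable suites matching the expression.
enabled = [c["name"] for c in context.get_ciphers()]
```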
@ -1989,3 +2239,146 @@ class PasteGlobalConf(Setting):
.. versionadded:: 19.7 .. versionadded:: 19.7
""" """
class StripHeaderSpaces(Setting):
name = "strip_header_spaces"
section = "Server Mechanics"
cli = ["--strip-header-spaces"]
validator = validate_bool
action = "store_true"
default = False
desc = """\
Strip spaces present between the header name and the ``:``.
This is known to induce vulnerabilities and is not compliant with the HTTP/1.1 standard.
See https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn.
Use with care and only if necessary. May be removed in a future version.
.. versionadded:: 20.0.1
"""
class PermitUnconventionalHTTPMethod(Setting):
name = "permit_unconventional_http_method"
section = "Server Mechanics"
cli = ["--permit-unconventional-http-method"]
validator = validate_bool
action = "store_true"
default = False
desc = """\
Permit HTTP methods not matching conventions, such as IANA registration guidelines.
This permits request methods of length less than 3 or more than 20,
methods with lowercase characters, or methods containing the # character.
HTTP methods are case sensitive by definition, and merely uppercase by convention.
This option is provided to diagnose backwards-incompatible changes.
Use with care and only if necessary. May be removed in a future version.
.. versionadded:: 22.0.0
"""
class PermitUnconventionalHTTPVersion(Setting):
name = "permit_unconventional_http_version"
section = "Server Mechanics"
cli = ["--permit-unconventional-http-version"]
validator = validate_bool
action = "store_true"
default = False
desc = """\
Permit HTTP versions not matching the conventions of 2023
This disables the refusal of likely malformed request lines.
It is unusual to specify HTTP 1 versions other than 1.0 and 1.1.
This option is provided to diagnose backwards-incompatible changes.
Use with care and only if necessary. May be removed in a future version.
.. versionadded:: 22.0.0
"""
class CasefoldHTTPMethod(Setting):
name = "casefold_http_method"
section = "Server Mechanics"
cli = ["--casefold-http-method"]
validator = validate_bool
action = "store_true"
default = False
desc = """\
Transform received HTTP methods to uppercase
HTTP methods are case sensitive by definition, and merely uppercase by convention.
This option is provided because previous versions of gunicorn defaulted to this behaviour.
Use with care and only if necessary. May be removed in a future version.
.. versionadded:: 22.0.0
"""
def validate_header_map_behaviour(val):
# FIXME: refactor all of this subclassing stdlib argparse
if val is None:
return
if not isinstance(val, str):
raise TypeError("Invalid type for casting: %s" % val)
if val.lower().strip() == "drop":
return "drop"
elif val.lower().strip() == "refuse":
return "refuse"
elif val.lower().strip() == "dangerous":
return "dangerous"
else:
raise ValueError("Invalid header map behaviour: %s" % val)
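The validator above normalizes case and surrounding whitespace before matching; a compact standalone copy shows the accepted values:

```python
def validate_header_map_behaviour(val):
    # None means "use the default"; anything else must be one of three tokens
    if val is None:
        return None
    if not isinstance(val, str):
        raise TypeError("Invalid type for casting: %s" % val)
    val = val.lower().strip()
    if val in ("drop", "refuse", "dangerous"):
        return val
    raise ValueError("Invalid header map behaviour: %s" % val)

assert validate_header_map_behaviour("  Drop ") == "drop"
assert validate_header_map_behaviour("REFUSE") == "refuse"
assert validate_header_map_behaviour(None) is None
```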
class HeaderMap(Setting):
name = "header_map"
section = "Server Mechanics"
cli = ["--header-map"]
validator = validate_header_map_behaviour
default = "drop"
desc = """\
Configure how header field names are mapped into environ
Headers containing underscores are permitted by RFC9110,
but gunicorn joining headers of different names into
the same environment variable will dangerously confuse applications as to which is which.
The safe default ``drop`` is to silently drop headers that cannot be unambiguously mapped.
The value ``refuse`` will return an error if a request contains *any* such header.
The value ``dangerous`` matches the previous, not advisable, behaviour of mapping different
header field names into the same environ name.
Use with care, only if necessary, and after considering whether your problem could
instead be solved by specifically renaming or rewriting only the intended headers
on a proxy in front of Gunicorn.
.. versionadded:: 22.0.0
"""
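Why the ambiguous mapping is dangerous: the CGI-style environ transform turns both ``-`` and ``_`` in a field name into ``_``, so two distinct headers collide into one key. A minimal sketch of that transform:

```python
def cgi_key(field_name):
    # CGI/WSGI convention: uppercase, dashes become underscores, HTTP_ prefix
    return "HTTP_" + field_name.upper().replace("-", "_")

assert cgi_key("X-Forwarded-For") == "HTTP_X_FORWARDED_FOR"
assert cgi_key("X_Forwarded_For") == "HTTP_X_FORWARDED_FOR"  # same key!
```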
class TolerateDangerousFraming(Setting):
name = "tolerate_dangerous_framing"
section = "Server Mechanics"
cli = ["--tolerate-dangerous-framing"]
validator = validate_bool
action = "store_true"
default = False
desc = """\
Process requests with both Transfer-Encoding and Content-Length
This is known to induce vulnerabilities, but not strictly forbidden by RFC9112.
Use with care and only if necessary. May be removed in a future version.
.. versionadded:: 22.0.0
"""
@ -28,7 +28,7 @@ class Spew(object):
if '__file__' in frame.f_globals: if '__file__' in frame.f_globals:
filename = frame.f_globals['__file__'] filename = frame.f_globals['__file__']
if (filename.endswith('.pyc') or if (filename.endswith('.pyc') or
filename.endswith('.pyo')): filename.endswith('.pyo')):
filename = filename[:-1] filename = filename[:-1]
name = frame.f_globals['__name__'] name = frame.f_globals['__name__']
line = linecache.getline(filename, lineno) line = linecache.getline(filename, lineno)
@ -5,9 +5,10 @@
import base64 import base64
import binascii import binascii
import json
import time import time
import logging import logging
logging.Logger.manager.emittedNoHandlerWarning = 1 logging.Logger.manager.emittedNoHandlerWarning = 1 # noqa
from logging.config import dictConfig from logging.config import dictConfig
from logging.config import fileConfig from logging.config import fileConfig
import os import os
@ -21,76 +22,74 @@ from gunicorn import util
# syslog facility codes # syslog facility codes
SYSLOG_FACILITIES = { SYSLOG_FACILITIES = {
"auth": 4, "auth": 4,
"authpriv": 10, "authpriv": 10,
"cron": 9, "cron": 9,
"daemon": 3, "daemon": 3,
"ftp": 11, "ftp": 11,
"kern": 0, "kern": 0,
"lpr": 6, "lpr": 6,
"mail": 2, "mail": 2,
"news": 7, "news": 7,
"security": 4, # DEPRECATED "security": 4, # DEPRECATED
"syslog": 5, "syslog": 5,
"user": 1, "user": 1,
"uucp": 8, "uucp": 8,
"local0": 16, "local0": 16,
"local1": 17, "local1": 17,
"local2": 18, "local2": 18,
"local3": 19, "local3": 19,
"local4": 20, "local4": 20,
"local5": 21, "local5": 21,
"local6": 22, "local6": 22,
"local7": 23 "local7": 23
} }
CONFIG_DEFAULTS = {
CONFIG_DEFAULTS = dict( "version": 1,
version=1, "disable_existing_loggers": False,
disable_existing_loggers=False, "root": {"level": "INFO", "handlers": ["console"]},
"loggers": {
root={"level": "INFO", "handlers": ["console"]}, "gunicorn.error": {
loggers={ "level": "INFO",
"gunicorn.error": { "handlers": ["error_console"],
"level": "INFO", "propagate": True,
"handlers": ["error_console"], "qualname": "gunicorn.error"
"propagate": True,
"qualname": "gunicorn.error"
},
"gunicorn.access": {
"level": "INFO",
"handlers": ["console"],
"propagate": True,
"qualname": "gunicorn.access"
}
}, },
handlers={
"console": { "gunicorn.access": {
"class": "logging.StreamHandler", "level": "INFO",
"formatter": "generic", "handlers": ["console"],
"stream": "ext://sys.stdout" "propagate": True,
}, "qualname": "gunicorn.access"
"error_console": {
"class": "logging.StreamHandler",
"formatter": "generic",
"stream": "ext://sys.stderr"
},
},
formatters={
"generic": {
"format": "%(asctime)s [%(process)d] [%(levelname)s] %(message)s",
"datefmt": "[%Y-%m-%d %H:%M:%S %z]",
"class": "logging.Formatter"
}
} }
) },
"handlers": {
"console": {
"class": "logging.StreamHandler",
"formatter": "generic",
"stream": "ext://sys.stdout"
},
"error_console": {
"class": "logging.StreamHandler",
"formatter": "generic",
"stream": "ext://sys.stderr"
},
},
"formatters": {
"generic": {
"format": "%(asctime)s [%(process)d] [%(levelname)s] %(message)s",
"datefmt": "[%Y-%m-%d %H:%M:%S %z]",
"class": "logging.Formatter"
}
}
}
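The reshaped ``CONFIG_DEFAULTS`` above is a plain ``dictConfig`` dictionary; a trimmed copy (handlers and formatters omitted for brevity) can be applied directly:

```python
import logging
from logging.config import dictConfig

# Trimmed-down stand-in for gunicorn.glogging.CONFIG_DEFAULTS.
defaults = {
    "version": 1,
    "disable_existing_loggers": False,
    "root": {"level": "INFO", "handlers": []},
    "loggers": {
        "gunicorn.error": {"level": "INFO", "handlers": [], "propagate": True},
    },
}
dictConfig(defaults)
```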
def loggers(): def loggers():
""" get list of all loggers """ """ get list of all loggers """
root = logging.root root = logging.root
existing = root.manager.loggerDict.keys() existing = list(root.manager.loggerDict.keys())
return [logging.getLogger(name) for name in existing] return [logging.getLogger(name) for name in existing]
@ -108,11 +107,11 @@ class SafeAtoms(dict):
if k.startswith("{"): if k.startswith("{"):
kl = k.lower() kl = k.lower()
if kl in self: if kl in self:
return super(SafeAtoms, self).__getitem__(kl) return super().__getitem__(kl)
else: else:
return "-" return "-"
if k in self: if k in self:
return super(SafeAtoms, self).__getitem__(k) return super().__getitem__(k)
else: else:
return '-' return '-'
@ -213,8 +212,10 @@ class Logger(object):
# set gunicorn.access handler # set gunicorn.access handler
if cfg.accesslog is not None: if cfg.accesslog is not None:
self._set_handler(self.access_log, cfg.accesslog, self._set_handler(
fmt=logging.Formatter(self.access_fmt), stream=sys.stdout) self.access_log, cfg.accesslog,
fmt=logging.Formatter(self.access_fmt), stream=sys.stdout
)
# set syslog handler # set syslog handler
if cfg.syslog: if cfg.syslog:
@ -238,6 +239,21 @@ class Logger(object):
TypeError TypeError
) as exc: ) as exc:
raise RuntimeError(str(exc)) raise RuntimeError(str(exc))
elif cfg.logconfig_json:
config = CONFIG_DEFAULTS.copy()
if os.path.exists(cfg.logconfig_json):
try:
config_json = json.load(open(cfg.logconfig_json))
config.update(config_json)
dictConfig(config)
except (
json.JSONDecodeError,
AttributeError,
ImportError,
ValueError,
TypeError
) as exc:
raise RuntimeError(str(exc))
elif cfg.logconfig: elif cfg.logconfig:
if os.path.exists(cfg.logconfig): if os.path.exists(cfg.logconfig):
defaults = CONFIG_DEFAULTS.copy() defaults = CONFIG_DEFAULTS.copy()
@ -273,7 +289,7 @@ class Logger(object):
self.error_log.log(lvl, msg, *args, **kwargs) self.error_log.log(lvl, msg, *args, **kwargs)
def atoms(self, resp, req, environ, request_time): def atoms(self, resp, req, environ, request_time):
""" Gets atoms for log formating. """ Gets atoms for log formatting.
""" """
status = resp.status status = resp.status
if isinstance(status, str): if isinstance(status, str):
@ -284,7 +300,8 @@ class Logger(object):
'u': self._get_user(environ) or '-', 'u': self._get_user(environ) or '-',
't': self.now(), 't': self.now(),
'r': "%s %s %s" % (environ['REQUEST_METHOD'], 'r': "%s %s %s" % (environ['REQUEST_METHOD'],
environ['RAW_URI'], environ["SERVER_PROTOCOL"]), environ['RAW_URI'],
environ["SERVER_PROTOCOL"]),
's': status, 's': status,
'm': environ.get('REQUEST_METHOD'), 'm': environ.get('REQUEST_METHOD'),
'U': environ.get('PATH_INFO'), 'U': environ.get('PATH_INFO'),
@ -295,7 +312,8 @@ class Logger(object):
'f': environ.get('HTTP_REFERER', '-'), 'f': environ.get('HTTP_REFERER', '-'),
'a': environ.get('HTTP_USER_AGENT', '-'), 'a': environ.get('HTTP_USER_AGENT', '-'),
'T': request_time.seconds, 'T': request_time.seconds,
'D': (request_time.seconds*1000000) + request_time.microseconds, 'D': (request_time.seconds * 1000000) + request_time.microseconds,
'M': (request_time.seconds * 1000) + int(request_time.microseconds / 1000),
'L': "%d.%06d" % (request_time.seconds, request_time.microseconds), 'L': "%d.%06d" % (request_time.seconds, request_time.microseconds),
'p': "<%s>" % os.getpid() 'p': "<%s>" % os.getpid()
} }
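The timing atoms above derive from one ``timedelta``; for a request that took 1.5 seconds the four representations work out as:

```python
import datetime

request_time = datetime.timedelta(seconds=1, microseconds=500000)

T = request_time.seconds                                                 # whole seconds
D = request_time.seconds * 1000000 + request_time.microseconds           # microseconds
M = request_time.seconds * 1000 + int(request_time.microseconds / 1000)  # milliseconds
L = "%d.%06d" % (request_time.seconds, request_time.microseconds)        # decimal seconds

assert (T, D, M, L) == (1, 1500000, 1500, "1.500000")
```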
@ -330,19 +348,20 @@ class Logger(object):
""" """
if not (self.cfg.accesslog or self.cfg.logconfig or if not (self.cfg.accesslog or self.cfg.logconfig or
self.cfg.logconfig_dict or self.cfg.logconfig_dict or self.cfg.logconfig_json or
(self.cfg.syslog and not self.cfg.disable_redirect_access_to_syslog)): (self.cfg.syslog and not self.cfg.disable_redirect_access_to_syslog)):
return return
# wrap atoms: # wrap atoms:
# - make sure atoms will be tested case insensitively # - make sure atoms will be tested case insensitively
# - if atom doesn't exist replace it by '-' # - if atom doesn't exist replace it by '-'
safe_atoms = self.atoms_wrapper_class(self.atoms(resp, req, environ, safe_atoms = self.atoms_wrapper_class(
request_time)) self.atoms(resp, req, environ, request_time)
)
try: try:
self.access_log.info(self.cfg.access_log_format, safe_atoms) self.access_log.info(self.cfg.access_log_format, safe_atoms)
except: except Exception:
self.error(traceback.format_exc()) self.error(traceback.format_exc())
def now(self): def now(self):
@ -361,7 +380,6 @@ class Logger(object):
os.dup2(self.logfile.fileno(), sys.stdout.fileno()) os.dup2(self.logfile.fileno(), sys.stdout.fileno())
os.dup2(self.logfile.fileno(), sys.stderr.fileno()) os.dup2(self.logfile.fileno(), sys.stderr.fileno())
for log in loggers(): for log in loggers():
for handler in log.handlers: for handler in log.handlers:
if isinstance(handler, logging.FileHandler): if isinstance(handler, logging.FileHandler):
@ -399,7 +417,7 @@ class Logger(object):
if output == "-": if output == "-":
h = logging.StreamHandler(stream) h = logging.StreamHandler(stream)
else: else:
util.check_is_writeable(output) util.check_is_writable(output)
h = logging.FileHandler(output) h = logging.FileHandler(output)
# make sure the user can reopen the file # make sure the user can reopen the file
try: try:
@ -415,10 +433,7 @@ class Logger(object):
def _set_syslog_handler(self, log, cfg, fmt, name): def _set_syslog_handler(self, log, cfg, fmt, name):
# setup format # setup format
if not cfg.syslog_prefix: prefix = cfg.syslog_prefix or cfg.proc_name.replace(":", ".")
prefix = cfg.proc_name.replace(":", ".")
else:
prefix = cfg.syslog_prefix
prefix = "gunicorn.%s.%s" % (prefix, name) prefix = "gunicorn.%s.%s" % (prefix, name)
@ -436,7 +451,7 @@ class Logger(object):
# finally setup the syslog handler # finally setup the syslog handler
h = logging.handlers.SysLogHandler(address=addr, h = logging.handlers.SysLogHandler(address=addr,
facility=facility, socktype=socktype) facility=facility, socktype=socktype)
h.setFormatter(fmt) h.setFormatter(fmt)
h._gunicorn = True h._gunicorn = True
@ -445,7 +460,7 @@ class Logger(object):
def _get_user(self, environ): def _get_user(self, environ):
user = None user = None
http_auth = environ.get("HTTP_AUTHORIZATION") http_auth = environ.get("HTTP_AUTHORIZATION")
if http_auth and http_auth.startswith('Basic'): if http_auth and http_auth.lower().startswith('basic'):
auth = http_auth.split(" ", 1) auth = http_auth.split(" ", 1)
if len(auth) == 2: if len(auth) == 2:
try: try:
@ -453,11 +468,7 @@ class Logger(object):
# so we need to convert it to a byte string # so we need to convert it to a byte string
auth = base64.b64decode(auth[1].strip().encode('utf-8')) auth = base64.b64decode(auth[1].strip().encode('utf-8'))
# b64decode returns a byte string # b64decode returns a byte string
auth = auth.decode('utf-8') user = auth.split(b":", 1)[0].decode("UTF-8")
auth = auth.split(":", 1)
except (TypeError, binascii.Error, UnicodeDecodeError) as exc: except (TypeError, binascii.Error, UnicodeDecodeError) as exc:
self.debug("Couldn't get username: %s", exc) self.debug("Couldn't get username: %s", exc)
return user
if len(auth) == 2:
user = auth[0]
return user return user
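The rewritten branch splits the decoded credentials on the first ``b":"`` and only decodes the username, so a password containing ``:`` (or bytes that are not valid UTF-8) no longer breaks username extraction:

```python
import base64

# Credentials as they appear after base64-decoding an Authorization header.
auth = base64.b64decode(base64.b64encode(b"alice:pa:ss"))
user = auth.split(b":", 1)[0].decode("UTF-8")
assert user == "alice"
```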
@ -7,7 +7,7 @@ import io
import sys import sys
from gunicorn.http.errors import (NoMoreData, ChunkMissingTerminator, from gunicorn.http.errors import (NoMoreData, ChunkMissingTerminator,
InvalidChunkSize) InvalidChunkSize)
class ChunkedReader(object): class ChunkedReader(object):
@ -18,7 +18,7 @@ class ChunkedReader(object):
def read(self, size): def read(self, size):
if not isinstance(size, int): if not isinstance(size, int):
raise TypeError("size must be an integral type") raise TypeError("size must be an integer type")
if size < 0: if size < 0:
raise ValueError("Size must be positive.") raise ValueError("Size must be positive.")
if size == 0: if size == 0:
@ -51,7 +51,7 @@ class ChunkedReader(object):
if done: if done:
unreader.unread(buf.getvalue()[2:]) unreader.unread(buf.getvalue()[2:])
return b"" return b""
self.req.trailers = self.req.parse_headers(buf.getvalue()[:idx]) self.req.trailers = self.req.parse_headers(buf.getvalue()[:idx], from_trailer=True)
unreader.unread(buf.getvalue()[idx + 4:]) unreader.unread(buf.getvalue()[idx + 4:])
def parse_chunked(self, unreader): def parse_chunked(self, unreader):
@ -85,11 +85,13 @@ class ChunkedReader(object):
data = buf.getvalue() data = buf.getvalue()
line, rest_chunk = data[:idx], data[idx + 2:] line, rest_chunk = data[:idx], data[idx + 2:]
chunk_size = line.split(b";", 1)[0].strip() # RFC9112 7.1.1: BWS before chunk-ext - but ONLY then
try: chunk_size, *chunk_ext = line.split(b";", 1)
chunk_size = int(chunk_size, 16) if chunk_ext:
except ValueError: chunk_size = chunk_size.rstrip(b" \t")
if any(n not in b"0123456789abcdefABCDEF" for n in chunk_size):
raise InvalidChunkSize(chunk_size) raise InvalidChunkSize(chunk_size)
chunk_size = int(chunk_size, 16)
if chunk_size == 0: if chunk_size == 0:
try: try:
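The new parsing above tolerates whitespace only between the size and a chunk extension and validates the size as strict hexadecimal; a standalone re-implementation of that check (``parse_chunk_size`` is a hypothetical helper name):

```python
def parse_chunk_size(line):
    # RFC9112 7.1.1: bad whitespace is tolerated only before a chunk-ext
    chunk_size, *chunk_ext = line.split(b";", 1)
    if chunk_ext:
        chunk_size = chunk_size.rstrip(b" \t")
    if any(n not in b"0123456789abcdefABCDEF" for n in chunk_size):
        raise ValueError(chunk_size)
    return int(chunk_size, 16)

assert parse_chunk_size(b"1a") == 26
assert parse_chunk_size(b"1a ;ext=1") == 26
try:
    parse_chunk_size(b" 1a")  # whitespace before the size itself is rejected
except ValueError:
    pass
else:
    raise AssertionError("leading whitespace must be rejected")
```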
@ -187,6 +189,7 @@ class Body(object):
if not ret: if not ret:
raise StopIteration() raise StopIteration()
return ret return ret
next = __next__ next = __next__
def getsize(self, size): def getsize(self, size):
@ -22,6 +22,15 @@ class NoMoreData(IOError):
return "No more data after: %r" % self.buf return "No more data after: %r" % self.buf
class ConfigurationProblem(ParseException):
def __init__(self, info):
self.info = info
self.code = 500
def __str__(self):
return "Configuration problem: %s" % self.info
class InvalidRequestLine(ParseException): class InvalidRequestLine(ParseException):
def __init__(self, req): def __init__(self, req):
self.req = req self.req = req
@ -64,6 +73,15 @@ class InvalidHeaderName(ParseException):
return "Invalid HTTP header name: %r" % self.hdr return "Invalid HTTP header name: %r" % self.hdr
class UnsupportedTransferCoding(ParseException):
def __init__(self, hdr):
self.hdr = hdr
self.code = 501
def __str__(self):
return "Unsupported transfer coding: %r" % self.hdr
class InvalidChunkSize(IOError): class InvalidChunkSize(IOError):
def __init__(self, data): def __init__(self, data):
self.data = data self.data = data
@ -6,13 +6,14 @@
import io import io
import re import re
import socket import socket
from errno import ENOTCONN
from gunicorn.http.unreader import SocketUnreader
from gunicorn.http.body import ChunkedReader, LengthReader, EOFReader, Body from gunicorn.http.body import ChunkedReader, LengthReader, EOFReader, Body
from gunicorn.http.errors import (InvalidHeader, InvalidHeaderName, NoMoreData, from gunicorn.http.errors import (
InvalidHeader, InvalidHeaderName, NoMoreData,
InvalidRequestLine, InvalidRequestMethod, InvalidHTTPVersion, InvalidRequestLine, InvalidRequestMethod, InvalidHTTPVersion,
LimitRequestLine, LimitRequestHeaders) LimitRequestLine, LimitRequestHeaders,
UnsupportedTransferCoding,
)
from gunicorn.http.errors import InvalidProxyLine, ForbiddenProxyRequest from gunicorn.http.errors import InvalidProxyLine, ForbiddenProxyRequest
from gunicorn.http.errors import InvalidSchemeHeaders from gunicorn.http.errors import InvalidSchemeHeaders
from gunicorn.util import bytes_to_str, split_request_uri from gunicorn.util import bytes_to_str, split_request_uri
@ -21,25 +22,31 @@ MAX_REQUEST_LINE = 8190
MAX_HEADERS = 32768 MAX_HEADERS = 32768
DEFAULT_MAX_HEADERFIELD_SIZE = 8190 DEFAULT_MAX_HEADERFIELD_SIZE = 8190
HEADER_RE = re.compile(r"[\x00-\x1F\x7F()<>@,;:\[\]={} \t\\\"]") # verbosely on purpose, avoid backslash ambiguity
METH_RE = re.compile(r"[A-Z0-9$-_.]{3,20}") RFC9110_5_6_2_TOKEN_SPECIALS = r"!#$%&'*+-.^_`|~"
VERSION_RE = re.compile(r"HTTP/(\d+)\.(\d+)") TOKEN_RE = re.compile(r"[%s0-9a-zA-Z]+" % (re.escape(RFC9110_5_6_2_TOKEN_SPECIALS)))
METHOD_BADCHAR_RE = re.compile("[a-z#]")
# usually 1.0 or 1.1 - RFC9112 permits restricting to single-digit versions
VERSION_RE = re.compile(r"HTTP/(\d)\.(\d)")
class Message(object): class Message(object):
def __init__(self, cfg, unreader): def __init__(self, cfg, unreader, peer_addr):
self.cfg = cfg self.cfg = cfg
self.unreader = unreader self.unreader = unreader
self.peer_addr = peer_addr
self.remote_addr = peer_addr
self.version = None self.version = None
self.headers = [] self.headers = []
self.trailers = [] self.trailers = []
self.body = None self.body = None
self.scheme = "https" if cfg.is_ssl else "http" self.scheme = "https" if cfg.is_ssl else "http"
self.must_close = False
# set headers limits # set headers limits
self.limit_request_fields = cfg.limit_request_fields self.limit_request_fields = cfg.limit_request_fields
if (self.limit_request_fields <= 0 if (self.limit_request_fields <= 0
or self.limit_request_fields > MAX_HEADERS): or self.limit_request_fields > MAX_HEADERS):
self.limit_request_fields = MAX_HEADERS self.limit_request_fields = MAX_HEADERS
self.limit_request_field_size = cfg.limit_request_field_size self.limit_request_field_size = cfg.limit_request_field_size
if self.limit_request_field_size < 0: if self.limit_request_field_size < 0:
@ -54,29 +61,30 @@ class Message(object):
self.unreader.unread(unused) self.unreader.unread(unused)
self.set_body_reader() self.set_body_reader()
def force_close(self):
self.must_close = True
def parse(self, unreader): def parse(self, unreader):
raise NotImplementedError() raise NotImplementedError()
def parse_headers(self, data): def parse_headers(self, data, from_trailer=False):
cfg = self.cfg cfg = self.cfg
headers = [] headers = []
# Split lines on \r\n keeping the \r\n on each line # Split lines on \r\n
lines = [bytes_to_str(line) + "\r\n" for line in data.split(b"\r\n")] lines = [bytes_to_str(line) for line in data.split(b"\r\n")]
# handle scheme headers # handle scheme headers
scheme_header = False scheme_header = False
secure_scheme_headers = {} secure_scheme_headers = {}
if '*' in cfg.forwarded_allow_ips: if from_trailer:
# nonsense. either a request is https from the beginning
# .. or we are just behind a proxy who does not remove conflicting trailers
pass
elif ('*' in cfg.forwarded_allow_ips or
not isinstance(self.peer_addr, tuple)
or self.peer_addr[0] in cfg.forwarded_allow_ips):
secure_scheme_headers = cfg.secure_scheme_headers secure_scheme_headers = cfg.secure_scheme_headers
elif isinstance(self.unreader, SocketUnreader):
remote_addr = self.unreader.sock.getpeername()
if self.unreader.sock.family in (socket.AF_INET, socket.AF_INET6):
remote_host = remote_addr[0]
if remote_host in cfg.forwarded_allow_ips:
secure_scheme_headers = cfg.secure_scheme_headers
elif self.unreader.sock.family == socket.AF_UNIX:
secure_scheme_headers = cfg.secure_scheme_headers
# Parse headers into key/value pairs paying attention # Parse headers into key/value pairs paying attention
# to continuation lines. # to continuation lines.
@ -84,27 +92,34 @@ class Message(object):
if len(headers) >= self.limit_request_fields: if len(headers) >= self.limit_request_fields:
raise LimitRequestHeaders("limit request headers fields") raise LimitRequestHeaders("limit request headers fields")
# Parse initial header name : value pair. # Parse initial header name: value pair.
curr = lines.pop(0) curr = lines.pop(0)
header_length = len(curr) header_length = len(curr) + len("\r\n")
if curr.find(":") < 0: if curr.find(":") <= 0:
raise InvalidHeader(curr.strip()) raise InvalidHeader(curr)
name, value = curr.split(":", 1) name, value = curr.split(":", 1)
name = name.rstrip(" \t").upper() if self.cfg.strip_header_spaces:
if HEADER_RE.search(name): name = name.rstrip(" \t")
if not TOKEN_RE.fullmatch(name):
raise InvalidHeaderName(name) raise InvalidHeaderName(name)
name, value = name.strip(), [value.lstrip()] # this is still a dangerous place to do this
# but it is more correct than doing it before the pattern match:
# after we entered Unicode wonderland, 8bits could case-shift into ASCII:
# b"\xDF".decode("latin-1").upper().encode("ascii") == b"SS"
name = name.upper()
value = [value.lstrip(" \t")]
# Consume value continuation lines # Consume value continuation lines
while lines and lines[0].startswith((" ", "\t")): while lines and lines[0].startswith((" ", "\t")):
curr = lines.pop(0) curr = lines.pop(0)
header_length += len(curr) header_length += len(curr) + len("\r\n")
if header_length > self.limit_request_field_size > 0: if header_length > self.limit_request_field_size > 0:
raise LimitRequestHeaders("limit request headers " raise LimitRequestHeaders("limit request headers "
+ "fields size") "fields size")
value.append(curr) value.append(curr.strip("\t "))
value = ''.join(value).rstrip() value = " ".join(value)
if header_length > self.limit_request_field_size > 0: if header_length > self.limit_request_field_size > 0:
raise LimitRequestHeaders("limit request headers fields size") raise LimitRequestHeaders("limit request headers fields size")
@ -119,6 +134,23 @@ class Message(object):
                 scheme_header = True
                 self.scheme = scheme

+            # ambiguous mapping allows fooling downstream, e.g. merging non-identical headers:
+            #    X-Forwarded-For: 2001:db8::ha:cc:ed
+            #    X_Forwarded_For: 127.0.0.1,::1
+            #   HTTP_X_FORWARDED_FOR = 2001:db8::ha:cc:ed,127.0.0.1,::1
+            # Only modify after fixing *ALL* header transformations; network to wsgi env
+            if "_" in name:
+                if self.cfg.header_map == "dangerous":
+                    # as if we did not know we cannot safely map this
+                    pass
+                elif self.cfg.header_map == "drop":
+                    # almost as if it never had been there
+                    # but still counts against resource limits
+                    continue
+                else:
+                    # fail-safe fallthrough: refuse
+                    raise InvalidHeaderName(name)
+
             headers.append((name, value))

         return headers
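The collision the comment describes comes from the CGI-style environ mapping, which folds both `-` and `_` into `_`. A short sketch of how two distinct wire headers merge into one environ key (mirroring the `HTTP_` key construction in gunicorn's wsgi module):

```python
def wsgi_key(header_name):
    # the usual CGI/WSGI transformation for request headers
    return "HTTP_" + header_name.upper().replace("-", "_")

environ = {}
for name, value in [("X-Forwarded-For", "2001:db8::ha:cc:ed"),
                    ("X_Forwarded_For", "127.0.0.1,::1")]:
    key = wsgi_key(name)
    # both headers collapse onto the same key, so their values get merged
    environ[key] = environ[key] + "," + value if key in environ else value
```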
@@ -126,19 +158,62 @@ class Message(object):
     def set_body_reader(self):
         chunked = False
         content_length = None

         for (name, value) in self.headers:
             if name == "CONTENT-LENGTH":
+                if content_length is not None:
+                    raise InvalidHeader("CONTENT-LENGTH", req=self)
                 content_length = value
             elif name == "TRANSFER-ENCODING":
-                chunked = value.lower() == "chunked"
-            elif name == "SEC-WEBSOCKET-KEY1":
-                content_length = 8
+                if value.lower() == "chunked":
+                    # DANGER: transer codings stack, and stacked chunking is never intended
+                    if chunked:
+                        raise InvalidHeader("TRANSFER-ENCODING", req=self)
+                    chunked = True
+                elif value.lower() == "identity":
+                    # does not do much, could still plausibly desync from what the proxy does
+                    # safe option: nuke it, its never needed
+                    if chunked:
+                        raise InvalidHeader("TRANSFER-ENCODING", req=self)
+                elif value.lower() == "":
+                    # lacking security review on this case
+                    # offer the option to restore previous behaviour, but refuse by default, for now
+                    self.force_close()
+                    if not self.cfg.tolerate_dangerous_framing:
+                        raise UnsupportedTransferCoding(value)
+                # DANGER: do not change lightly; ref: request smuggling
+                # T-E is a list and we *could* support correctly parsing its elements
+                #  .. but that is only safe after getting all the edge cases right
+                #  .. for which no real-world need exists, so best to NOT open that can of worms
+                else:
+                    self.force_close()
+                    # even if parser is extended, retain this branch:
+                    # the "chunked not last" case remains to be rejected!
+                    raise UnsupportedTransferCoding(value)

         if chunked:
+            # two potentially dangerous cases:
+            #  a) CL + TE (TE overrides CL.. only safe if the recipient sees it that way too)
+            #  b) chunked HTTP/1.0 (always faulty)
+            if self.version < (1, 1):
+                # framing wonky, see RFC 9112 Section 6.1
+                self.force_close()
+                if not self.cfg.tolerate_dangerous_framing:
+                    raise InvalidHeader("TRANSFER-ENCODING", req=self)
+            if content_length is not None:
+                # we cannot be certain the message framing we understood matches proxy intent
+                #  -> whatever happens next, remaining input must not be trusted
+                self.force_close()
+                # either processing or rejecting is permitted in RFC 9112 Section 6.1
+                if not self.cfg.tolerate_dangerous_framing:
+                    raise InvalidHeader("CONTENT-LENGTH", req=self)
             self.body = Body(ChunkedReader(self, self.unreader))
         elif content_length is not None:
             try:
-                content_length = int(content_length)
+                if str(content_length).isnumeric():
+                    content_length = int(content_length)
+                else:
+                    raise InvalidHeader("CONTENT-LENGTH", req=self)
             except ValueError:
                 raise InvalidHeader("CONTENT-LENGTH", req=self)
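The CL + TE hazard these branches guard against can be shown without an HTTP stack: when two hops choose different framings for the same bytes, they disagree about where the next request starts. A contrived sketch (the request bytes are invented for illustration):

```python
raw = (b"POST /a HTTP/1.1\r\n"
       b"Host: example\r\n"
       b"Content-Length: 4\r\n"
       b"Transfer-Encoding: chunked\r\n"
       b"\r\n"
       b"0\r\n\r\n"                     # last-chunk terminator, 5 bytes
       b"GET /admin HTTP/1.1\r\n\r\n")  # smuggled follow-up

body_start = raw.index(b"\r\n\r\n") + 4

# A hop honouring Content-Length consumes 4 body bytes...
cl_rest = raw[body_start + 4:]
# ...while a hop honouring chunked framing consumes the 5-byte terminator.
te_rest = raw[body_start + len(b"0\r\n\r\n"):]
```

The two hops now see different "next requests" in the same byte stream, which is exactly the desync the code above refuses to tolerate.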
@@ -150,9 +225,11 @@ class Message(object):
         self.body = Body(EOFReader(self.unreader))

     def should_close(self):
+        if self.must_close:
+            return True
         for (h, v) in self.headers:
             if h == "CONNECTION":
-                v = v.lower().strip()
+                v = v.lower().strip(" \t")
                 if v == "close":
                     return True
                 elif v == "keep-alive":
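Related to the Content-Length change above: Python's `int()` is far more permissive than the digit-string grammar HTTP allows, which is what the `isnumeric()` gate narrows down (the retained `except ValueError` still matters, since `isnumeric()` accepts some Unicode numerics that `int()` rejects). For example:

```python
results = {}
for cand in ["100", "+100", "1_0", "\u00b2"]:  # '²' is "numeric" but not a digit string
    if not cand.isnumeric():
        results[cand] = "rejected by isnumeric()"
    else:
        try:
            results[cand] = int(cand)
        except ValueError:
            results[cand] = "rejected by int()"
```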
@@ -162,7 +239,7 @@ class Message(object):

 class Request(Message):

-    def __init__(self, cfg, unreader, req_number=1):
+    def __init__(self, cfg, unreader, peer_addr, req_number=1):
         self.method = None
         self.uri = None
         self.path = None
@@ -172,12 +249,12 @@ class Request(Message):
         # get max request line size
         self.limit_request_line = cfg.limit_request_line
         if (self.limit_request_line < 0
                 or self.limit_request_line >= MAX_REQUEST_LINE):
             self.limit_request_line = MAX_REQUEST_LINE

         self.req_number = req_number
         self.proxy_protocol_info = None
-        super(Request, self).__init__(cfg, unreader)
+        super().__init__(cfg, unreader, peer_addr)

     def get_data(self, unreader, buf, stop=False):
         data = unreader.read()
@@ -226,7 +303,7 @@ class Request(Message):
             self.unreader.unread(data[2:])
             return b""

-        self.headers = self.parse_headers(data[:idx])
+        self.headers = self.parse_headers(data[:idx], from_trailer=False)

         ret = data[idx + 4:]
         buf = None
@@ -242,7 +319,7 @@ class Request(Message):
                 if idx > limit > 0:
                     raise LimitRequestLine(idx, limit)
                 break
-            elif len(data) - 2 > limit > 0:
+            if len(data) - 2 > limit > 0:
                 raise LimitRequestLine(len(data), limit)
             self.get_data(unreader, buf)
             data = buf.getvalue()
@@ -273,19 +350,13 @@ class Request(Message):
     def proxy_protocol_access_check(self):
         # check in allow list
-        if isinstance(self.unreader, SocketUnreader):
-            try:
-                remote_host = self.unreader.sock.getpeername()[0]
-            except socket.error as e:
-                if e.args[0] == ENOTCONN:
-                    raise ForbiddenProxyRequest("UNKNOW")
-                raise
-            if ("*" not in self.cfg.proxy_allow_ips and
-                    remote_host not in self.cfg.proxy_allow_ips):
-                raise ForbiddenProxyRequest(remote_host)
+        if ("*" not in self.cfg.proxy_allow_ips and
+                isinstance(self.peer_addr, tuple) and
+                self.peer_addr[0] not in self.cfg.proxy_allow_ips):
+            raise ForbiddenProxyRequest(self.peer_addr[0])

     def parse_proxy_protocol(self, line):
-        bits = line.split()
+        bits = line.split(" ")
         if len(bits) != 6:
             raise InvalidProxyLine(line)
@@ -330,14 +401,27 @@ class Request(Message):
         }

     def parse_request_line(self, line_bytes):
-        bits = [bytes_to_str(bit) for bit in line_bytes.split(None, 2)]
+        bits = [bytes_to_str(bit) for bit in line_bytes.split(b" ", 2)]
         if len(bits) != 3:
             raise InvalidRequestLine(bytes_to_str(line_bytes))

-        # Method
-        if not METH_RE.match(bits[0]):
-            raise InvalidRequestMethod(bits[0])
-        self.method = bits[0].upper()
+        # Method: RFC9110 Section 9
+        self.method = bits[0]
+
+        # nonstandard restriction, suitable for all IANA registered methods
+        # partially enforced in previous gunicorn versions
+        if not self.cfg.permit_unconventional_http_method:
+            if METHOD_BADCHAR_RE.search(self.method):
+                raise InvalidRequestMethod(self.method)
+            if not 3 <= len(bits[0]) <= 20:
+                raise InvalidRequestMethod(self.method)
+        # standard restriction: RFC9110 token
+        if not TOKEN_RE.fullmatch(self.method):
+            raise InvalidRequestMethod(self.method)
+        # nonstandard and dangerous
+        # methods are merely uppercase by convention, no case-insensitive treatment is intended
+        if self.cfg.casefold_http_method:
+            self.method = self.method.upper()

         # URI
         self.uri = bits[1]
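RFC 9110 defines a method as a `token`. Gunicorn's actual `TOKEN_RE` is not visible in this hunk, so the pattern below is a sketch built from the grammar (tchar is `!#$%&'*+-.^_`, backtick, `|~`, digits and letters); it may differ in detail from the real regex:

```python
import re

# Sketch of an RFC 9110 token matcher (assumption: gunicorn's TOKEN_RE is equivalent)
TOKEN = re.compile(r"[!#$%&'*+\-.^_`|~0-9A-Za-z]+")

valid = [m for m in ("GET", "PURGE", "MKCALENDAR") if TOKEN.fullmatch(m)]
invalid = [m for m in ("GE T", "GET/", "G\u00dfT") if not TOKEN.fullmatch(m)]
```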
@@ -351,12 +435,16 @@ class Request(Message):
         self.fragment = parts.fragment or ""

         # Version
-        match = VERSION_RE.match(bits[2])
+        match = VERSION_RE.fullmatch(bits[2])
         if match is None:
             raise InvalidHTTPVersion(bits[2])
         self.version = (int(match.group(1)), int(match.group(2)))
+        if not (1, 0) <= self.version < (2, 0):
+            # if ever relaxing this, carefully review Content-Encoding processing
+            if not self.cfg.permit_unconventional_http_version:
+                raise InvalidHTTPVersion(self.version)

     def set_body_reader(self):
-        super(Request, self).set_body_reader()
+        super().set_body_reader()
         if isinstance(self.body.reader, EOFReader):
             self.body = Body(LengthReader(self.unreader, 0))
gunicorn/http/parser.py
@@ -11,13 +11,14 @@ class Parser(object):

     mesg_class = None

-    def __init__(self, cfg, source):
+    def __init__(self, cfg, source, source_addr):
         self.cfg = cfg
         if hasattr(source, "recv"):
             self.unreader = SocketUnreader(source)
         else:
             self.unreader = IterUnreader(source)
         self.mesg = None
+        self.source_addr = source_addr

         # request counter (for keepalive connetions)
         self.req_count = 0
@@ -38,7 +39,7 @@ class Parser(object):

         # Parse the next request
         self.req_count += 1
-        self.mesg = self.mesg_class(self.cfg, self.unreader, self.req_count)
+        self.mesg = self.mesg_class(self.cfg, self.unreader, self.source_addr, self.req_count)
         if not self.mesg:
             raise StopIteration()
         return self.mesg
gunicorn/http/unreader.py
@@ -56,7 +56,7 @@ class Unreader(object):

 class SocketUnreader(Unreader):
     def __init__(self, sock, max_chunk=8192):
-        super(SocketUnreader, self).__init__()
+        super().__init__()
         self.sock = sock
         self.mxchunk = max_chunk
@@ -66,7 +66,7 @@ class SocketUnreader(Unreader):

 class IterUnreader(Unreader):
     def __init__(self, iterable):
-        super(IterUnreader, self).__init__()
+        super().__init__()
         self.iter = iter(iterable)

     def chunk(self):
gunicorn/http/wsgi.py
@@ -9,16 +9,18 @@ import os
 import re
 import sys

-from gunicorn.http.message import HEADER_RE
-from gunicorn.http.errors import InvalidHeader, InvalidHeaderName
-from gunicorn import SERVER_SOFTWARE
-import gunicorn.util as util
+from gunicorn.http.message import TOKEN_RE
+from gunicorn.http.errors import ConfigurationProblem, InvalidHeader, InvalidHeaderName
+from gunicorn import SERVER_SOFTWARE, SERVER
+from gunicorn import util

 # Send files in at most 1GB blocks as some operating systems can have problems
 # with sending files in blocks over 2GB.
 BLKSIZE = 0x3FFFFFFF

-HEADER_VALUE_RE = re.compile(r'[\x00-\x1F\x7F]')
+# RFC9110 5.5: field-vchar = VCHAR / obs-text
+# RFC4234 B.1: VCHAR = 0x21-x07E = printable ASCII
+HEADER_VALUE_RE = re.compile(r'[ \t\x21-\x7e\x80-\xff]*')

 log = logging.getLogger(__name__)
@@ -73,6 +75,7 @@ def base_environ(cfg):
         "wsgi.multiprocess": (cfg.workers > 1),
         "wsgi.run_once": False,
         "wsgi.file_wrapper": FileWrapper,
+        "wsgi.input_terminated": True,
         "SERVER_SOFTWARE": SERVER_SOFTWARE,
     }
@@ -132,6 +135,8 @@ def create(req, sock, client, server, cfg):
             environ['CONTENT_LENGTH'] = hdr_value
             continue

+        # do not change lightly, this is a common source of security problems
+        # RFC9110 Section 17.10 discourages ambiguous or incomplete mappings
         key = 'HTTP_' + hdr_name.replace('-', '_')
         if key in environ:
             hdr_value = "%s,%s" % (environ[key], hdr_value)
@@ -179,7 +184,11 @@ def create(req, sock, client, server, cfg):
     # set the path and script name
     path_info = req.path
     if script_name:
-        path_info = path_info.split(script_name, 1)[1]
+        if not path_info.startswith(script_name):
+            raise ConfigurationProblem(
+                "Request path %r does not start with SCRIPT_NAME %r" %
+                (path_info, script_name))
+        path_info = path_info[len(script_name):]
     environ['PATH_INFO'] = util.unquote_to_wsgi_str(path_info)
     environ['SCRIPT_NAME'] = script_name
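The old `split(script_name, 1)[1]` silently produced a wrong PATH_INFO whenever SCRIPT_NAME matched somewhere other than the start of the path; the prefix check plus slicing turns that into an explicit error. A small illustration of the old behaviour:

```python
script_name = "/app"
path = "/myapp/app/x"   # does NOT start with SCRIPT_NAME: a misconfiguration

# old approach: splits on the first occurrence anywhere in the path
old_path_info = path.split(script_name, 1)[1]

# new approach: detect the mismatch instead of guessing
starts_ok = path.startswith(script_name)
```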
@@ -194,7 +203,7 @@ class Response(object):
     def __init__(self, req, sock, cfg):
         self.req = req
         self.sock = sock
-        self.version = SERVER_SOFTWARE
+        self.version = SERVER
         self.status = None
         self.chunked = False
         self.must_close = False
@@ -248,28 +257,32 @@ class Response(object):
             if not isinstance(name, str):
                 raise TypeError('%r is not a string' % name)

-            if HEADER_RE.search(name):
+            if not TOKEN_RE.fullmatch(name):
                 raise InvalidHeaderName('%r' % name)

-            if HEADER_VALUE_RE.search(value):
+            if not isinstance(value, str):
+                raise TypeError('%r is not a string' % value)
+
+            if not HEADER_VALUE_RE.fullmatch(value):
                 raise InvalidHeader('%r' % value)

-            value = str(value).strip()
-            lname = name.lower().strip()
+            # RFC9110 5.5
+            value = value.strip(" \t")
+            lname = name.lower()
             if lname == "content-length":
                 self.response_length = int(value)
             elif util.is_hoppish(name):
                 if lname == "connection":
                     # handle websocket
-                    if value.lower().strip() == "upgrade":
+                    if value.lower() == "upgrade":
                         self.upgrade = True
                 elif lname == "upgrade":
-                    if value.lower().strip() == "websocket":
-                        self.headers.append((name.strip(), value))
+                    if value.lower() == "websocket":
+                        self.headers.append((name, value))

                 # ignore hopbyhop headers
                 continue
-            self.headers.append((name.strip(), value))
+            self.headers.append((name, value))

     def is_chunked(self):
         # Only use chunked responses when the client is
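Switching from `search` on a deny-list of characters to `fullmatch` on an allow-list flips the default from "allow unless known-bad" to "reject unless known-good", which is what blocks CR/LF header injection. With the new pattern from this diff:

```python
import re

# the allow-list pattern introduced above
HEADER_VALUE_RE = re.compile(r'[ \t\x21-\x7e\x80-\xff]*')

ok = HEADER_VALUE_RE.fullmatch("text/html; charset=utf-8")
injected = HEADER_VALUE_RE.fullmatch("x\r\nSet-Cookie: pwned=1")  # CR/LF not allowed
```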
@@ -299,7 +312,7 @@ class Response(object):
         headers = [
             "HTTP/%s.%s %s\r\n" % (self.req.version[0],
                                    self.req.version[1], self.status),
             "Server: %s\r\n" % self.version,
             "Date: %s\r\n" % util.http_date(),
             "Connection: %s\r\n" % connection
@@ -315,7 +328,7 @@ class Response(object):
         tosend.extend(["%s: %s\r\n" % (k, v) for k, v in self.headers])

         header_str = "%s\r\n" % "".join(tosend)
-        util.write(self.sock, util.to_bytestring(header_str, "ascii"))
+        util.write(self.sock, util.to_bytestring(header_str, "latin-1"))
         self.headers_sent = True

     def write(self, arg):
@@ -356,12 +369,6 @@ class Response(object):
         offset = os.lseek(fileno, 0, os.SEEK_CUR)
         if self.response_length is None:
             filesize = os.fstat(fileno).st_size
-
-            # The file may be special and sendfile will fail.
-            # It may also be zero-length, but that is okay.
-            if filesize == 0:
-                return False
-
             nbytes = filesize - offset
         else:
             nbytes = self.response_length
@@ -373,13 +380,8 @@ class Response(object):
         if self.is_chunked():
             chunk_size = "%X\r\n" % nbytes
             self.sock.sendall(chunk_size.encode('utf-8'))
-
-        sockno = self.sock.fileno()
-        sent = 0
-
-        while sent != nbytes:
-            count = min(nbytes - sent, BLKSIZE)
-            sent += os.sendfile(sockno, fileno, offset + sent, count)
+        if nbytes > 0:
+            self.sock.sendfile(respiter.filelike, offset=offset, count=nbytes)

         if self.is_chunked():
             self.sock.sendall(b"\r\n")
gunicorn/instrument/statsd.py
@@ -19,21 +19,27 @@ GAUGE_TYPE = "gauge"
 COUNTER_TYPE = "counter"
 HISTOGRAM_TYPE = "histogram"


 class Statsd(Logger):
     """statsD-based instrumentation, that passes as a logger
     """

     def __init__(self, cfg):
-        """host, port: statsD server
-        """
         Logger.__init__(self, cfg)
         self.prefix = sub(r"^(.+[^.]+)\.*$", "\\g<1>.", cfg.statsd_prefix)
+
+        if isinstance(cfg.statsd_host, str):
+            address_family = socket.AF_UNIX
+        else:
+            address_family = socket.AF_INET
+
         try:
-            host, port = cfg.statsd_host
-            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
-            self.sock.connect((host, int(port)))
+            self.sock = socket.socket(address_family, socket.SOCK_DGRAM)
+            self.sock.connect(cfg.statsd_host)
         except Exception:
             self.sock = None

+        self.dogstatsd_tags = cfg.dogstatsd_tags
+
     # Log errors and warnings
     def critical(self, msg, *args, **kwargs):
         Logger.critical(self, msg, *args, **kwargs)
@@ -51,7 +57,7 @@ class Statsd(Logger):
         Logger.exception(self, msg, *args, **kwargs)
         self.increment("gunicorn.log.exception", 1)

-    # Special treatement for info, the most common log level
+    # Special treatment for info, the most common log level
     def info(self, msg, *args, **kwargs):
         self.log(logging.INFO, msg, *args, **kwargs)
@@ -116,6 +122,11 @@ class Statsd(Logger):
         try:
             if isinstance(msg, str):
                 msg = msg.encode("ascii")
+
+            # http://docs.datadoghq.com/guides/dogstatsd/#datagram-format
+            if self.dogstatsd_tags:
+                msg = msg + b"|#" + self.dogstatsd_tags.encode('ascii')
+
             if self.sock:
                 self.sock.send(msg)
         except Exception:
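The DogStatsD tag support is a plain suffix on the StatsD datagram: `|#` followed by comma-separated `key:value` tags. A sketch of the resulting wire format (metric name and tags invented for the example):

```python
def dogstatsd_datagram(msg: bytes, tags: str = "") -> bytes:
    # mirrors the append performed in Statsd.send()
    if tags:
        msg = msg + b"|#" + tags.encode("ascii")
    return msg

dgram = dogstatsd_datagram(b"gunicorn.requests:1|c", "env:prod,region:eu")
```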
gunicorn/pidfile.py
@@ -57,7 +57,7 @@ class Pidfile(object):

             if pid1 == self.pid:
                 os.unlink(self.fname)
-        except:
+        except Exception:
             pass

     def validate(self):
gunicorn/reloader.py
@@ -2,6 +2,7 @@
 #
 # This file is part of gunicorn released under the MIT license.
 # See the NOTICE for more information.
+# pylint: disable=no-else-continue

 import os
 import os.path
@@ -15,16 +16,14 @@ COMPILED_EXT_RE = re.compile(r'py[co]$')

 class Reloader(threading.Thread):
     def __init__(self, extra_files=None, interval=1, callback=None):
-        super(Reloader, self).__init__()
-        self.setDaemon(True)
+        super().__init__()
+        self.daemon = True
         self._extra_files = set(extra_files or ())
-        self._extra_files_lock = threading.RLock()
         self._interval = interval
         self._callback = callback

     def add_extra_file(self, filename):
-        with self._extra_files_lock:
-            self._extra_files.add(filename)
+        self._extra_files.add(filename)

     def get_files(self):
         fnames = [
@@ -33,8 +32,7 @@ class Reloader(threading.Thread):
             if getattr(module, '__file__', None)
         ]

-        with self._extra_files_lock:
-            fnames.extend(self._extra_files)
+        fnames.extend(self._extra_files)

         return fnames
@@ -55,6 +53,7 @@ class Reloader(threading.Thread):
                     self._callback(filename)
             time.sleep(self._interval)

+
 has_inotify = False
 if sys.platform.startswith('linux'):
     try:
@@ -74,8 +73,8 @@ if has_inotify:
                       | inotify.constants.IN_MOVED_TO)

         def __init__(self, extra_files=None, callback=None):
-            super(InotifyReloader, self).__init__()
-            self.setDaemon(True)
+            super().__init__()
+            self.daemon = True
             self._callback = callback
             self._dirs = set()
             self._watcher = Inotify()
@@ -94,7 +93,7 @@ if has_inotify:
         def get_dirs(self):
             fnames = [
-                os.path.dirname(COMPILED_EXT_RE.sub('py', module.__file__))
+                os.path.dirname(os.path.abspath(COMPILED_EXT_RE.sub('py', module.__file__)))
                 for module in tuple(sys.modules.values())
                 if getattr(module, '__file__', None)
             ]
@@ -105,7 +104,8 @@ if has_inotify:
             self._dirs = self.get_dirs()

             for dirname in self._dirs:
-                self._watcher.add_watch(dirname, mask=self.event_mask)
+                if os.path.isdir(dirname):
+                    self._watcher.add_watch(dirname, mask=self.event_mask)

             for event in self._watcher.event_gen():
                 if event is None:
@@ -118,7 +118,7 @@ if has_inotify:
 else:

     class InotifyReloader(object):
-        def __init__(self, callback=None):
+        def __init__(self, extra_files=None, callback=None):
             raise ImportError('You must have the inotify module installed to '
                               'use the inotify reloader')
gunicorn/sock.py
@@ -6,12 +6,12 @@
 import errno
 import os
 import socket
+import ssl
 import stat
 import sys
 import time

 from gunicorn import util
-from gunicorn.socketfromfd import fromfd


 class BaseSocket(object):
@@ -40,7 +40,7 @@ class BaseSocket(object):
     def set_options(self, sock, bound=False):
         sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
         if (self.conf.reuse_port
                 and hasattr(socket, 'SO_REUSEPORT')):  # pragma: no cover
             try:
                 sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
             except socket.error as err:
@@ -87,7 +87,7 @@ class TCPSocket(BaseSocket):
     def set_options(self, sock, bound=False):
         sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
-        return super(TCPSocket, self).set_options(sock, bound=bound)
+        return super().set_options(sock, bound=bound)


 class TCP6Socket(TCPSocket):
@@ -115,7 +115,7 @@ class UnixSocket(BaseSocket):
             os.remove(addr)
         else:
             raise ValueError("%r is not a socket" % addr)
-        super(UnixSocket, self).__init__(addr, conf, log, fd=fd)
+        super().__init__(addr, conf, log, fd=fd)

     def __str__(self):
         return "unix:%s" % self.cfg_addr
@@ -168,7 +168,7 @@ def create_sockets(conf, log, fds=None):
     # sockets are already bound
     if fdaddr:
         for fd in fdaddr:
-            sock = fromfd(fd)
+            sock = socket.fromfd(fd, socket.AF_UNIX, socket.SOCK_STREAM)
             sock_name = sock.getsockname()
             sock_type = _sock_type(sock_name)
             listener = sock_type(sock_name, conf, log, fd=fd)
@@ -211,3 +211,22 @@ def close_sockets(listeners, unlink=True):
         sock.close()
         if unlink and _sock_type(sock_name) is UnixSocket:
             os.unlink(sock_name)
+
+
+def ssl_context(conf):
+    def default_ssl_context_factory():
+        context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH, cafile=conf.ca_certs)
+        context.load_cert_chain(certfile=conf.certfile, keyfile=conf.keyfile)
+        context.verify_mode = conf.cert_reqs
+        if conf.ciphers:
+            context.set_ciphers(conf.ciphers)
+        return context
+
+    return conf.ssl_context(conf, default_ssl_context_factory)
+
+
+def ssl_wrap_socket(sock, conf):
+    return ssl_context(conf).wrap_socket(sock,
+                                         server_side=True,
+                                         suppress_ragged_eofs=conf.suppress_ragged_eofs,
+                                         do_handshake_on_connect=conf.do_handshake_on_connect)
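The `conf.ssl_context(conf, default_ssl_context_factory)` call means a config file can wrap or replace the default context. A hedged sketch of what such an override might look like in a user's `gunicorn.conf.py` (the TLS-version pin is an example policy, not something this diff prescribes):

```python
import ssl

def ssl_context(config, default_ssl_context_factory):
    # start from gunicorn's default server-side context...
    context = default_ssl_context_factory()
    # ...then layer site policy on top
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context
```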
gunicorn/socketfromfd.py
@@ -1,96 +0,0 @@
# Copyright (C) 2016 Christian Heimes
"""socketfromfd -- socket.fromfd() with auto-discovery
ATTENTION: Do not remove this backport till the minimum required version is
Python 3.7. See https://bugs.python.org/issue28134 for details.
"""
from __future__ import print_function
import ctypes
import os
import socket
import sys
from ctypes.util import find_library
__all__ = ('fromfd',)
SO_DOMAIN = getattr(socket, 'SO_DOMAIN', 39)
SO_TYPE = getattr(socket, 'SO_TYPE', 3)
SO_PROTOCOL = getattr(socket, 'SO_PROTOCOL', 38)
_libc_name = find_library('c')
if _libc_name is not None:
libc = ctypes.CDLL(_libc_name, use_errno=True)
else:
raise OSError('libc not found')
def _errcheck_errno(result, func, arguments):
"""Raise OSError by errno for -1
"""
if result == -1:
errno = ctypes.get_errno()
raise OSError(errno, os.strerror(errno))
return arguments
_libc_getsockopt = libc.getsockopt
_libc_getsockopt.argtypes = [
ctypes.c_int, # int sockfd
ctypes.c_int, # int level
ctypes.c_int, # int optname
ctypes.c_void_p, # void *optval
ctypes.POINTER(ctypes.c_uint32) # socklen_t *optlen
]
_libc_getsockopt.restype = ctypes.c_int # 0: ok, -1: err
_libc_getsockopt.errcheck = _errcheck_errno
def _raw_getsockopt(fd, level, optname):
"""Make raw getsockopt() call for int32 optval
:param fd: socket fd
:param level: SOL_*
:param optname: SO_*
:return: value as int
"""
optval = ctypes.c_int(0)
optlen = ctypes.c_uint32(4)
_libc_getsockopt(fd, level, optname,
ctypes.byref(optval), ctypes.byref(optlen))
return optval.value
def fromfd(fd, keep_fd=True):
"""Create a socket from a file descriptor
socket domain (family), type and protocol are auto-detected. By default
the socket uses a dup()ed fd. The original fd can be closed.
The parameter `keep_fd` influences fd duplication. Under Python 2 the
fd is still duplicated but the input fd is closed. Under Python 3 and
with `keep_fd=True`, the new socket object uses the same fd.
:param fd: socket fd
:type fd: int
:param keep_fd: keep input fd
:type keep_fd: bool
:return: socket.socket instance
:raises OSError: for invalid socket fd
"""
family = _raw_getsockopt(fd, socket.SOL_SOCKET, SO_DOMAIN)
typ = _raw_getsockopt(fd, socket.SOL_SOCKET, SO_TYPE)
proto = _raw_getsockopt(fd, socket.SOL_SOCKET, SO_PROTOCOL)
if sys.version_info.major == 2:
# Python 2 has no fileno argument and always duplicates the fd
sockobj = socket.fromfd(fd, family, typ, proto)
sock = socket.socket(None, None, None, _sock=sockobj)
if not keep_fd:
os.close(fd)
return sock
else:
if keep_fd:
return socket.fromfd(fd, family, typ, proto)
else:
return socket.socket(family, typ, proto, fileno=fd)
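This ctypes backport predates Python 3.7; since bpo-28134 was fixed (as the removed docstring notes), the stdlib can rebuild a socket from a bare file descriptor and auto-detect family, type and protocol on its own, which is why the module could be dropped. A minimal sketch of the modern replacement:

```python
import os
import socket

# a connected pair, purely for demonstration
a, b = socket.socketpair()

# Python >= 3.7: fileno= auto-detects family, type and protocol
rebuilt = socket.socket(fileno=os.dup(a.fileno()))
fam_match = (rebuilt.family == a.family)

rebuilt.close()
a.close()
b.close()
```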
gunicorn/systemd.py
@@ -58,7 +58,6 @@ def sd_notify(state, logger, unset_environment=False):
     child processes.
     """
     addr = os.environ.get('NOTIFY_SOCKET')
-
     if addr is None:
         # not run in a service, just a noop
@@ -69,7 +68,7 @@ def sd_notify(state, logger, unset_environment=False):
             addr = '\0' + addr[1:]
         sock.connect(addr)
         sock.sendall(state.encode('utf-8'))
-    except:
+    except Exception:
         logger.debug("Exception while invoking sd_notify()", exc_info=True)
     finally:
         if unset_environment:
gunicorn/util.py
@ -2,11 +2,12 @@
# #
# This file is part of gunicorn released under the MIT license. # This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information. # See the NOTICE for more information.
import ast
import email.utils import email.utils
import errno import errno
import fcntl import fcntl
import html import html
import importlib
import inspect import inspect
import io import io
import logging import logging
@ -21,7 +22,10 @@ import time
import traceback import traceback
import warnings import warnings
import pkg_resources try:
import importlib.metadata as importlib_metadata
except (ModuleNotFoundError, ImportError):
import importlib_metadata
from gunicorn.errors import AppImportError from gunicorn.errors import AppImportError
from gunicorn.workers import SUPPORTED_WORKERS from gunicorn.workers import SUPPORTED_WORKERS
@ -53,45 +57,17 @@ except ImportError:
pass pass
try: def load_entry_point(distribution, group, name):
from importlib import import_module dist_obj = importlib_metadata.distribution(distribution)
except ImportError: eps = [ep for ep in dist_obj.entry_points
def _resolve_name(name, package, level): if ep.group == group and ep.name == name]
"""Return the absolute name of the module to be imported.""" if not eps:
if not hasattr(package, 'rindex'): raise ImportError("Entry point %r not found" % ((group, name),))
raise ValueError("'package' not set to a string") return eps[0].load()
dot = len(package)
for _ in range(level, 1, -1):
try:
dot = package.rindex('.', 0, dot)
except ValueError:
msg = "attempted relative import beyond top-level package"
raise ValueError(msg)
return "%s.%s" % (package[:dot], name)
def import_module(name, package=None):
"""Import a module.
The 'package' argument is required when performing a relative import. It
specifies the package to use as the anchor point from which to resolve the
relative import to an absolute import.
"""
if name.startswith('.'):
if not package:
raise TypeError("relative imports require the 'package' argument")
level = 0
for character in name:
if character != '.':
break
level += 1
name = _resolve_name(name[level:], package, level)
__import__(name)
return sys.modules[name]
def load_class(uri, default="gunicorn.workers.sync.SyncWorker", def load_class(uri, default="gunicorn.workers.sync.SyncWorker",
section="gunicorn.workers"): section="gunicorn.workers"):
if inspect.isclass(uri): if inspect.isclass(uri):
return uri return uri
if uri.startswith("egg:"): if uri.startswith("egg:"):
@ -104,8 +80,8 @@ def load_class(uri, default="gunicorn.workers.sync.SyncWorker",
name = default name = default
try: try:
return pkg_resources.load_entry_point(dist, section, name) return load_entry_point(dist, section, name)
except: except Exception:
exc = traceback.format_exc() exc = traceback.format_exc()
msg = "class uri %r invalid or not found: \n\n[%s]" msg = "class uri %r invalid or not found: \n\n[%s]"
raise RuntimeError(msg % (uri, exc)) raise RuntimeError(msg % (uri, exc))
@@ -121,9 +97,10 @@ def load_class(uri, default="gunicorn.workers.sync.SyncWorker",
                 break
 
         try:
-            return pkg_resources.load_entry_point("gunicorn",
-                                                  section, uri)
-        except:
+            return load_entry_point(
+                "gunicorn", section, uri
+            )
+        except Exception:
             exc = traceback.format_exc()
             msg = "class uri %r invalid or not found: \n\n[%s]"
             raise RuntimeError(msg % (uri, exc))
@@ -131,8 +108,8 @@ def load_class(uri, default="gunicorn.workers.sync.SyncWorker",
     klass = components.pop(-1)
     try:
-        mod = import_module('.'.join(components))
-    except:
+        mod = importlib.import_module('.'.join(components))
+    except Exception:
         exc = traceback.format_exc()
         msg = "class uri %r invalid or not found: \n\n[%s]"
         raise RuntimeError(msg % (uri, exc))
@@ -180,7 +157,7 @@ def set_owner_process(uid, gid, initgroups=False):
     elif gid != os.getgid():
         os.setgid(gid)
 
-    if uid:
+    if uid and uid != os.getuid():
         os.setuid(uid)
@@ -190,7 +167,7 @@ def chown(path, uid, gid):
 
 if sys.platform.startswith("win"):
     def _waitfor(func, pathname, waitall=False):
-        # Peform the operation
+        # Perform the operation
         func(pathname)
         # Now setup the wait loop
         if waitall:
@@ -247,7 +224,7 @@ def is_ipv6(addr):
     return True
 
 
-def parse_address(netloc, default_port=8000):
+def parse_address(netloc, default_port='8000'):
     if re.match(r'unix:(//)?', netloc):
         return re.split(r'unix:(//)?', netloc)[-1]
@@ -260,27 +237,22 @@ def parse_address(netloc, default_port=8000):
     if netloc.startswith("tcp://"):
         netloc = netloc.split("tcp://")[1]
+    host, port = netloc, default_port
 
-    # get host
     if '[' in netloc and ']' in netloc:
-        host = netloc.split(']')[0][1:].lower()
+        host = netloc.split(']')[0][1:]
+        port = (netloc.split(']:') + [default_port])[1]
     elif ':' in netloc:
-        host = netloc.split(':')[0].lower()
+        host, port = (netloc.split(':') + [default_port])[:2]
     elif netloc == "":
-        host = "0.0.0.0"
-    else:
-        host = netloc.lower()
+        host, port = "0.0.0.0", default_port
 
-    #get port
-    netloc = netloc.split(']')[-1]
-    if ":" in netloc:
-        port = netloc.split(':', 1)[1]
-        if not port.isdigit():
-            raise RuntimeError("%r is not a valid port number." % port)
-        port = int(port)
-    else:
-        port = default_port
+    try:
+        port = int(port)
+    except ValueError:
+        raise RuntimeError("%r is not a valid port number." % port)
 
-    return (host, port)
+    return host.lower(), port
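The revised `parse_address` logic above can be read as a standalone function. This is a hedged sketch assembled from the hunk, not the complete upstream function (any branches outside the visible hunks, such as file-descriptor addresses, are omitted):

```python
import re


def parse_address(netloc, default_port='8000'):
    # Unix sockets pass straight through; IPv6 brackets are honored;
    # the port is validated with int() instead of str.isdigit().
    if re.match(r'unix:(//)?', netloc):
        return re.split(r'unix:(//)?', netloc)[-1]
    if netloc.startswith("tcp://"):
        netloc = netloc.split("tcp://")[1]
    host, port = netloc, default_port
    if '[' in netloc and ']' in netloc:
        host = netloc.split(']')[0][1:]
        port = (netloc.split(']:') + [default_port])[1]
    elif ':' in netloc:
        host, port = (netloc.split(':') + [default_port])[:2]
    elif netloc == "":
        host, port = "0.0.0.0", default_port
    try:
        port = int(port)
    except ValueError:
        raise RuntimeError("%r is not a valid port number." % port)
    return host.lower(), port
```

Note the default port is now a string, so the bare-host case flows through the same `int()` validation as an explicit port.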
 def close_on_exec(fd):
@@ -300,6 +272,7 @@ def close(sock):
     except socket.error:
         pass
 
 try:
     from os import closerange
 except ImportError:
@@ -361,31 +334,106 @@ def write_error(sock, status_int, reason, mesg):
     write_nonblock(sock, http.encode('latin1'))
+def _called_with_wrong_args(f):
+    """Check whether calling a function raised a ``TypeError`` because
+    the call failed or because something in the function raised the
+    error.
+
+    :param f: The function that was called.
+    :return: ``True`` if the call failed.
+    """
+    tb = sys.exc_info()[2]
+
+    try:
+        while tb is not None:
+            if tb.tb_frame.f_code is f.__code__:
+                # In the function, it was called successfully.
+                return False
+
+            tb = tb.tb_next
+
+        # Didn't reach the function.
+        return True
+    finally:
+        # Delete tb to break a circular reference in Python 2.
+        # https://docs.python.org/2/library/sys.html#sys.exc_info
+        del tb
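A small usage sketch of `_called_with_wrong_args`: it walks the active traceback to decide whether a `TypeError` came from the call itself (bad arguments) or from inside the called function. The `factory` and `exploding` names below are illustrative only:

```python
import sys


def _called_with_wrong_args(f):
    # Same logic as the helper above: if any traceback frame belongs to
    # f's code object, f was actually entered.
    tb = sys.exc_info()[2]
    try:
        while tb is not None:
            if tb.tb_frame.f_code is f.__code__:
                return False  # error was raised from inside f
            tb = tb.tb_next
        return True  # f was never entered; the call itself failed
    finally:
        del tb


def factory(name):
    return {"app": name}


def exploding(name):
    raise TypeError("boom inside")


try:
    factory()  # missing required argument
except TypeError:
    bad_call = _called_with_wrong_args(factory)  # True

try:
    exploding("x")
except TypeError:
    raised_inside = not _called_with_wrong_args(exploding)  # True
```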
 def import_app(module):
     parts = module.split(":", 1)
     if len(parts) == 1:
-        module, obj = module, "application"
+        obj = "application"
     else:
         module, obj = parts[0], parts[1]
 
     try:
-        __import__(module)
+        mod = importlib.import_module(module)
     except ImportError:
         if module.endswith(".py") and os.path.exists(module):
             msg = "Failed to find application, did you mean '%s:%s'?"
             raise ImportError(msg % (module.rsplit(".", 1)[0], obj))
-        else:
-            raise
+        raise
 
-    mod = sys.modules[module]
+    # Parse obj as a single expression to determine if it's a valid
+    # attribute name or function call.
+    try:
+        expression = ast.parse(obj, mode="eval").body
+    except SyntaxError:
+        raise AppImportError(
+            "Failed to parse %r as an attribute name or function call." % obj
+        )
+
+    if isinstance(expression, ast.Name):
+        name = expression.id
+        args = kwargs = None
+    elif isinstance(expression, ast.Call):
+        # Ensure the function name is an attribute name only.
+        if not isinstance(expression.func, ast.Name):
+            raise AppImportError("Function reference must be a simple name: %r" % obj)
+
+        name = expression.func.id
+
+        # Parse the positional and keyword arguments as literals.
+        try:
+            args = [ast.literal_eval(arg) for arg in expression.args]
+            kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in expression.keywords}
+        except ValueError:
+            # literal_eval gives cryptic error messages, show a generic
+            # message with the full expression instead.
+            raise AppImportError(
+                "Failed to parse arguments as literal values: %r" % obj
+            )
+    else:
+        raise AppImportError(
+            "Failed to parse %r as an attribute name or function call." % obj
+        )
 
     is_debug = logging.root.level == logging.DEBUG
     try:
-        app = eval(obj, vars(mod))
-    except NameError:
+        app = getattr(mod, name)
+    except AttributeError:
         if is_debug:
             traceback.print_exception(*sys.exc_info())
-        raise AppImportError("Failed to find application object %r in %r" % (obj, module))
+        raise AppImportError("Failed to find attribute %r in %r." % (name, module))
 
+    # If the expression was a function call, call the retrieved object
+    # to get the real application.
+    if args is not None:
+        try:
+            app = app(*args, **kwargs)
+        except TypeError as e:
+            # If the TypeError was due to bad arguments to the factory
+            # function, show Python's nice error message without a
+            # traceback.
+            if _called_with_wrong_args(app):
+                raise AppImportError(
+                    "".join(traceback.format_exception_only(TypeError, e)).strip()
+                )
+
+            # Otherwise it was raised from within the function, show the
+            # full traceback.
+            raise
 
     if app is None:
         raise AppImportError("Failed to find application object: %r" % obj)
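The attribute-name-or-call parsing that `import_app` now performs can be isolated into a helper. `parse_app_spec` is a hypothetical name used only for illustration; it mirrors the `ast` logic above, which accepts either a bare name (`"application"`) or a simple call with literal arguments (`"create_app('prod')"`):

```python
import ast


def parse_app_spec(obj):
    # Returns (name, args, kwargs); args/kwargs are None for a bare name.
    expression = ast.parse(obj, mode="eval").body
    if isinstance(expression, ast.Name):
        return expression.id, None, None
    if isinstance(expression, ast.Call):
        # Only a simple name may be called, not e.g. "pkg.make_app()".
        if not isinstance(expression.func, ast.Name):
            raise ValueError("Function reference must be a simple name: %r" % obj)
        args = [ast.literal_eval(arg) for arg in expression.args]
        kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in expression.keywords}
        return expression.func.id, args, kwargs
    raise ValueError(
        "Failed to parse %r as an attribute name or function call." % obj
    )
```

Because arguments go through `ast.literal_eval`, only literals (strings, numbers, tuples, lists, dicts, booleans, `None`) are accepted; arbitrary expressions in the app spec are rejected rather than evaluated.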
@@ -404,7 +452,7 @@ def getcwd():
         cwd = os.environ['PWD']
     else:
         cwd = os.getcwd()
-    except:
+    except Exception:
         cwd = os.getcwd()
     return cwd
@@ -424,7 +472,7 @@ def is_hoppish(header):
 
 def daemonize(enable_stdio_inheritance=False):
     """\
     Standard daemonization of a process.
-    http://www.svbug.com/documentation/comp.unix.programmer-FAQ/faq_2.html#SEC16
+    http://www.faqs.org/faqs/unix-faq/programmer/faq/ section 1.7
     """
     if 'GUNICORN_FD' not in os.environ:
         if os.fork():
@@ -450,7 +498,10 @@ def daemonize(enable_stdio_inheritance=False):
             closerange(0, 3)
 
         fd_null = os.open(REDIRECT_TO, os.O_RDWR)
+        # PEP 446, make fd for /dev/null inheritable
+        os.set_inheritable(fd_null, True)
 
+        # expect fd_null to be always 0 here, but in-case not ...
         if fd_null != 0:
             os.dup2(fd_null, 0)
@@ -511,12 +562,12 @@ def seed():
     random.seed('%s.%s' % (time.time(), os.getpid()))
 
 
-def check_is_writeable(path):
+def check_is_writable(path):
     try:
-        f = open(path, 'a')
+        with open(path, 'a') as f:
+            f.close()
     except IOError as e:
         raise RuntimeError("Error: '%s' isn't writable [%r]" % (path, e))
-    f.close()
 
 
 def to_bytestring(value, encoding="utf8"):
@@ -528,6 +579,7 @@ def to_bytestring(value, encoding="utf8"):
     return value.encode(encoding)
 
 
 def has_fileno(obj):
     if not hasattr(obj, "fileno"):
         return False
@@ -7,7 +7,6 @@
 SUPPORTED_WORKERS = {
     "sync": "gunicorn.workers.sync.SyncWorker",
     "eventlet": "gunicorn.workers.geventlet.EventletWorker",
-    "gaiohttp": "gunicorn.workers.gaiohttp.AiohttpWorker",
     "gevent": "gunicorn.workers.ggevent.GeventWorker",
     "gevent_wsgi": "gunicorn.workers.ggevent.GeventPyWSGIWorker",
     "gevent_pywsgi": "gunicorn.workers.ggevent.GeventPyWSGIWorker",
@@ -1,168 +0,0 @@
-# -*- coding: utf-8 -
-#
-# This file is part of gunicorn released under the MIT license.
-# See the NOTICE for more information.
-
-import asyncio
-import datetime
-import functools
-import logging
-import os
-
-try:
-    import ssl
-except ImportError:
-    ssl = None
-
-import gunicorn.workers.base as base
-
-from aiohttp.wsgi import WSGIServerHttpProtocol as OldWSGIServerHttpProtocol
-
-
-class WSGIServerHttpProtocol(OldWSGIServerHttpProtocol):
-
-    def log_access(self, request, environ, response, time):
-        self.logger.access(response, request, environ, datetime.timedelta(0, 0, time))
-
-
-class AiohttpWorker(base.Worker):
-
-    def __init__(self, *args, **kw):  # pragma: no cover
-        super().__init__(*args, **kw)
-        cfg = self.cfg
-        if cfg.is_ssl:
-            self.ssl_context = self._create_ssl_context(cfg)
-        else:
-            self.ssl_context = None
-        self.servers = []
-        self.connections = {}
-
-    def init_process(self):
-        # create new event_loop after fork
-        asyncio.get_event_loop().close()
-        self.loop = asyncio.new_event_loop()
-        asyncio.set_event_loop(self.loop)
-        super().init_process()
-
-    def run(self):
-        self._runner = asyncio.ensure_future(self._run(), loop=self.loop)
-        try:
-            self.loop.run_until_complete(self._runner)
-        finally:
-            self.loop.close()
-
-    def wrap_protocol(self, proto):
-        proto.connection_made = _wrp(
-            proto, proto.connection_made, self.connections)
-        proto.connection_lost = _wrp(
-            proto, proto.connection_lost, self.connections, False)
-        return proto
-
-    def factory(self, wsgi, addr):
-        # are we in debug level
-        is_debug = self.log.loglevel == logging.DEBUG
-        proto = WSGIServerHttpProtocol(
-            wsgi, readpayload=True,
-            loop=self.loop,
-            log=self.log,
-            debug=is_debug,
-            keep_alive=self.cfg.keepalive,
-            access_log=self.log.access_log,
-            access_log_format=self.cfg.access_log_format)
-        return self.wrap_protocol(proto)
-
-    def get_factory(self, sock, addr):
-        return functools.partial(self.factory, self.wsgi, addr)
-
-    @asyncio.coroutine
-    def close(self):
-        try:
-            if hasattr(self.wsgi, 'close'):
-                yield from self.wsgi.close()
-        except:
-            self.log.exception('Process shutdown exception')
-
-    @asyncio.coroutine
-    def _run(self):
-        for sock in self.sockets:
-            factory = self.get_factory(sock.sock, sock.cfg_addr)
-            self.servers.append(
-                (yield from self._create_server(factory, sock)))
-
-        # If our parent changed then we shut down.
-        pid = os.getpid()
-        try:
-            while self.alive or self.connections:
-                self.notify()
-
-                if (self.alive and
-                        pid == os.getpid() and self.ppid != os.getppid()):
-                    self.log.info("Parent changed, shutting down: %s", self)
-                    self.alive = False
-
-                # stop accepting requests
-                if not self.alive:
-                    if self.servers:
-                        self.log.info(
-                            "Stopping server: %s, connections: %s",
-                            pid, len(self.connections))
-                        for server in self.servers:
-                            server.close()
-                        self.servers.clear()
-
-                    # prepare connections for closing
-                    for conn in self.connections.values():
-                        if hasattr(conn, 'closing'):
-                            conn.closing()
-
-                yield from asyncio.sleep(1.0, loop=self.loop)
-        except KeyboardInterrupt:
-            pass
-
-        if self.servers:
-            for server in self.servers:
-                server.close()
-
-        yield from self.close()
-
-    @asyncio.coroutine
-    def _create_server(self, factory, sock):
-        return self.loop.create_server(factory, sock=sock.sock,
-                                       ssl=self.ssl_context)
-
-    @staticmethod
-    def _create_ssl_context(cfg):
-        """ Creates SSLContext instance for usage in asyncio.create_server.
-
-        See ssl.SSLSocket.__init__ for more details.
-        """
-        ctx = ssl.SSLContext(cfg.ssl_version)
-        ctx.load_cert_chain(cfg.certfile, cfg.keyfile)
-        ctx.verify_mode = cfg.cert_reqs
-        if cfg.ca_certs:
-            ctx.load_verify_locations(cfg.ca_certs)
-        if cfg.ciphers:
-            ctx.set_ciphers(cfg.ciphers)
-        return ctx
-
-
-class _wrp:
-
-    def __init__(self, proto, meth, tracking, add=True):
-        self._proto = proto
-        self._id = id(proto)
-        self._meth = meth
-        self._tracking = tracking
-        self._add = add
-
-    def __call__(self, *args):
-        if self._add:
-            self._tracking[self._id] = self._proto
-        elif self._id in self._tracking:
-            del self._tracking[self._id]
-
-        conn = self._meth(*args)
-        return conn
@@ -28,8 +28,9 @@ from gunicorn.workers.workertmp import WorkerTmp
 class Worker(object):
 
-    SIGNALS = [getattr(signal, "SIG%s" % x)
-               for x in "ABRT HUP QUIT INT TERM USR1 USR2 WINCH CHLD".split()]
+    SIGNALS = [getattr(signal, "SIG%s" % x) for x in (
+        "ABRT HUP QUIT INT TERM USR1 USR2 WINCH CHLD".split()
+    )]
 
     PIPE = []
@@ -51,8 +52,13 @@ class Worker(object):
         self.reloader = None
         self.nr = 0
 
-        jitter = randint(0, cfg.max_requests_jitter)
-        self.max_requests = cfg.max_requests + jitter or sys.maxsize
+        if cfg.max_requests > 0:
+            jitter = randint(0, cfg.max_requests_jitter)
+            self.max_requests = cfg.max_requests + jitter
+        else:
+            self.max_requests = sys.maxsize
+
         self.alive = True
         self.log = log
         self.tmp = WorkerTmp(cfg)
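The new jitter logic can be read in isolation. `effective_max_requests` is a hypothetical helper name restating the constructor change: jitter is only applied when auto-restart is enabled, fixing the old `+ jitter or sys.maxsize` expression, which misbehaved when jitter alone was nonzero:

```python
import sys
from random import randint


def effective_max_requests(max_requests, max_requests_jitter):
    # max_requests <= 0 means "never auto-restart", so the cap becomes
    # effectively unlimited instead of jitter-dependent.
    if max_requests > 0:
        return max_requests + randint(0, max_requests_jitter)
    return sys.maxsize
```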
@@ -80,8 +86,7 @@ class Worker(object):
         """\
         If you override this method in a subclass, the last statement
         in the function should be to call this method with
-        super(MyWorkerClass, self).init_process() so that the ``run()``
-        loop is initiated.
+        super().init_process() so that the ``run()`` loop is initiated.
         """
 
         # set environment' variables
@@ -117,6 +122,7 @@ class Worker(object):
             def changed(fname):
                 self.log.info("Worker reloading: %s modified", fname)
                 self.alive = False
+                os.write(self.PIPE[1], b"1")
                 self.cfg.worker_int(self)
                 time.sleep(0.1)
                 sys.exit(0)
@@ -124,9 +130,11 @@ class Worker(object):
             reloader_cls = reloader_engines[self.cfg.reload_engine]
             self.reloader = reloader_cls(extra_files=self.cfg.reload_extra_files,
                                          callback=changed)
-            self.reloader.start()
 
         self.load_wsgi()
+        if self.reloader:
+            self.reloader.start()
 
         self.cfg.post_worker_init(self)
 
         # Enter main run loop
@@ -197,12 +205,14 @@ class Worker(object):
     def handle_error(self, req, client, addr, exc):
         request_start = datetime.now()
         addr = addr or ('', -1)  # unix socket case
-        if isinstance(exc, (InvalidRequestLine, InvalidRequestMethod,
-                InvalidHTTPVersion, InvalidHeader, InvalidHeaderName,
-                LimitRequestLine, LimitRequestHeaders,
-                InvalidProxyLine, ForbiddenProxyRequest,
-                InvalidSchemeHeaders,
-                SSLError)):
+        if isinstance(exc, (
+            InvalidRequestLine, InvalidRequestMethod,
+            InvalidHTTPVersion, InvalidHeader, InvalidHeaderName,
+            LimitRequestLine, LimitRequestHeaders,
+            InvalidProxyLine, ForbiddenProxyRequest,
+            InvalidSchemeHeaders,
+            SSLError,
+        )):
 
             status_int = 400
             reason = "Bad Request"
@@ -220,7 +230,9 @@ class Worker(object):
             elif isinstance(exc, LimitRequestLine):
                 mesg = "%s" % str(exc)
             elif isinstance(exc, LimitRequestHeaders):
+                reason = "Request Header Fields Too Large"
                 mesg = "Error parsing headers: '%s'" % str(exc)
+                status_int = 431
             elif isinstance(exc, InvalidProxyLine):
                 mesg = "'%s'" % str(exc)
             elif isinstance(exc, ForbiddenProxyRequest):
@@ -235,10 +247,12 @@ class Worker(object):
                 status_int = 403
 
             msg = "Invalid request from ip={ip}: {error}"
-            self.log.debug(msg.format(ip=addr[0], error=str(exc)))
+            self.log.warning(msg.format(ip=addr[0], error=str(exc)))
         else:
             if hasattr(req, "uri"):
                 self.log.exception("Error handling request %s", req.uri)
+            else:
+                self.log.exception("Error handling request (no URI read)")
             status_int = 500
             reason = "Internal Server Error"
             mesg = ""
@@ -255,7 +269,7 @@ class Worker(object):
 
         try:
             util.write_error(client, status_int, reason, mesg)
-        except:
+        except Exception:
             self.log.debug("Failed to send error message.")
 
     def handle_winch(self, sig, fname):
@@ -9,10 +9,10 @@ import socket
 import ssl
 import sys
 
-import gunicorn.http as http
-import gunicorn.http.wsgi as wsgi
-import gunicorn.util as util
-import gunicorn.workers.base as base
+from gunicorn import http
+from gunicorn.http import wsgi
+from gunicorn import util
+from gunicorn.workers import base
 
 ALREADY_HANDLED = object()
@@ -20,7 +20,7 @@ ALREADY_HANDLED = object()
 class AsyncWorker(base.Worker):
 
     def __init__(self, *args, **kwargs):
-        super(AsyncWorker, self).__init__(*args, **kwargs)
+        super().__init__(*args, **kwargs)
         self.worker_connections = self.cfg.worker_connections
 
     def timeout_ctx(self):
@@ -33,7 +33,7 @@ class AsyncWorker(base.Worker):
     def handle(self, listener, client, addr):
         req = None
         try:
-            parser = http.RequestParser(self.cfg, client)
+            parser = http.RequestParser(self.cfg, client, addr)
             try:
                 listener_name = listener.getsockname()
                 if not self.cfg.keepalive:
@@ -73,14 +73,16 @@ class AsyncWorker(base.Worker):
             self.log.debug("Error processing SSL request.")
             self.handle_error(req, client, addr, e)
         except EnvironmentError as e:
-            if e.errno not in (errno.EPIPE, errno.ECONNRESET):
+            if e.errno not in (errno.EPIPE, errno.ECONNRESET, errno.ENOTCONN):
                 self.log.exception("Socket error processing request.")
             else:
                 if e.errno == errno.ECONNRESET:
                     self.log.debug("Ignoring connection reset")
+                elif e.errno == errno.ENOTCONN:
+                    self.log.debug("Ignoring socket not connected")
                 else:
                     self.log.debug("Ignoring EPIPE")
-        except Exception as e:
+        except BaseException as e:
             self.handle_error(req, client, addr, e)
         finally:
             util.close(client)
@@ -92,15 +94,15 @@ class AsyncWorker(base.Worker):
         try:
             self.cfg.pre_request(self, req)
-            resp, environ = wsgi.create(req, sock, addr,
-                    listener_name, self.cfg)
+            resp, environ = wsgi.create(req, sock, addr,
+                                        listener_name, self.cfg)
             environ["wsgi.multithread"] = True
             self.nr += 1
-            if self.alive and self.nr >= self.max_requests:
-                self.log.info("Autorestarting worker after current request.")
-                resp.force_close()
-                self.alive = False
+            if self.nr >= self.max_requests:
+                if self.alive:
+                    self.log.info("Autorestarting worker after current request.")
+                self.alive = False
 
-            if not self.cfg.keepalive:
+            if not self.alive or not self.cfg.keepalive:
                 resp.force_close()
 
             respiter = self.wsgi(environ, resp.start_response)
@@ -113,9 +115,9 @@ class AsyncWorker(base.Worker):
                 for item in respiter:
                     resp.write(item)
                 resp.close()
-        finally:
-            request_time = datetime.now() - request_start
-            self.log.access(resp, req, environ, request_time)
+                request_time = datetime.now() - request_start
+                self.log.access(resp, req, environ, request_time)
+        finally:
             if hasattr(respiter, "close"):
                 respiter.close()
             if resp.should_close():
@@ -1,22 +0,0 @@
-# -*- coding: utf-8 -
-#
-# This file is part of gunicorn released under the MIT license.
-# See the NOTICE for more information.
-
-from gunicorn import util
-
-try:
-    import aiohttp  # pylint: disable=unused-import
-except ImportError:
-    raise RuntimeError("You need aiohttp installed to use this worker.")
-else:
-    try:
-        from aiohttp.worker import GunicornWebWorker as AiohttpWorker
-    except ImportError:
-        from gunicorn.workers._gaiohttp import AiohttpWorker
-
-        util.warn(
-            "The 'gaiohttp' worker is deprecated. See --worker-class "
-            "documentation for more information."
-        )
-
-__all__ = ['AiohttpWorker']
@@ -4,37 +4,67 @@
 # See the NOTICE for more information.
 
 from functools import partial
-import errno
-import os
 import sys
 
 try:
     import eventlet
 except ImportError:
-    raise RuntimeError("You need eventlet installed to use this worker.")
-else:
-    # validate the eventlet version
-    if eventlet.version_info < (0, 9, 7):
-        raise RuntimeError("You need eventlet >= 0.9.7")
+    raise RuntimeError("eventlet worker requires eventlet 0.24.1 or higher")
+
+from packaging.version import parse as parse_version
+if parse_version(eventlet.__version__) < parse_version('0.24.1'):
+    raise RuntimeError("eventlet worker requires eventlet 0.24.1 or higher")
 
 from eventlet import hubs, greenthread
 from eventlet.greenio import GreenSocket
-from eventlet.hubs import trampoline
-from eventlet.wsgi import ALREADY_HANDLED as EVENTLET_ALREADY_HANDLED
+import eventlet.wsgi
 import greenlet
 
 from gunicorn.workers.base_async import AsyncWorker
+from gunicorn.sock import ssl_wrap_socket
 
-def _eventlet_sendfile(fdout, fdin, offset, nbytes):
-    while True:
-        try:
-            return os.sendfile(fdout, fdin, offset, nbytes)
-        except OSError as e:
-            if e.args[0] == errno.EAGAIN:
-                trampoline(fdout, write=True)
-            else:
-                raise
+# ALREADY_HANDLED is removed in 0.30.3+ now it's `WSGI_LOCAL.already_handled: bool`
+# https://github.com/eventlet/eventlet/pull/544
+EVENTLET_WSGI_LOCAL = getattr(eventlet.wsgi, "WSGI_LOCAL", None)
+EVENTLET_ALREADY_HANDLED = getattr(eventlet.wsgi, "ALREADY_HANDLED", None)
+
+
+def _eventlet_socket_sendfile(self, file, offset=0, count=None):
+    # Based on the implementation in gevent which in turn is slightly
+    # modified from the standard library implementation.
+    if self.gettimeout() == 0:
+        raise ValueError("non-blocking sockets are not supported")
+    if offset:
+        file.seek(offset)
+    blocksize = min(count, 8192) if count else 8192
+    total_sent = 0
+    # localize variable access to minimize overhead
+    file_read = file.read
+    sock_send = self.send
+    try:
+        while True:
+            if count:
+                blocksize = min(count - total_sent, blocksize)
+                if blocksize <= 0:
+                    break
+            data = memoryview(file_read(blocksize))
+            if not data:
+                break  # EOF
+            while True:
+                try:
+                    sent = sock_send(data)
+                except BlockingIOError:
+                    continue
+                else:
+                    total_sent += sent
+                    if sent < len(data):
+                        data = data[sent:]
+                    else:
+                        break
+        return total_sent
+    finally:
+        if total_sent > 0 and hasattr(file, 'seek'):
+            file.seek(offset + total_sent)
 def _eventlet_serve(sock, handle, concurrency):
@@ -79,41 +109,52 @@ def _eventlet_stop(client, server, conn):
 
 def patch_sendfile():
-    setattr(os, "sendfile", _eventlet_sendfile)
+    # As of eventlet 0.25.1, GreenSocket.sendfile doesn't exist,
+    # meaning the native implementations of socket.sendfile will be used.
+    # If os.sendfile exists, it will attempt to use that, failing explicitly
+    # if the socket is in non-blocking mode, which the underlying
+    # socket object /is/. Even the regular _sendfile_use_send will
+    # fail in that way; plus, it would use the underlying socket.send which isn't
+    # properly cooperative. So we have to monkey-patch a working socket.sendfile()
+    # into GreenSocket; in this method, `self.send` will be the GreenSocket's
+    # send method which is properly cooperative.
+    if not hasattr(GreenSocket, 'sendfile'):
+        GreenSocket.sendfile = _eventlet_socket_sendfile
 
 
 class EventletWorker(AsyncWorker):
 
     def patch(self):
         hubs.use_hub()
-        eventlet.monkey_patch(os=False)
+        eventlet.monkey_patch()
         patch_sendfile()
 
     def is_already_handled(self, respiter):
+        # eventlet >= 0.30.3
+        if getattr(EVENTLET_WSGI_LOCAL, "already_handled", None):
+            raise StopIteration()
+        # eventlet < 0.30.3
         if respiter == EVENTLET_ALREADY_HANDLED:
             raise StopIteration()
-        else:
-            return super(EventletWorker, self).is_already_handled(respiter)
+        return super().is_already_handled(respiter)
 
     def init_process(self):
-        super(EventletWorker, self).init_process()
         self.patch()
+        super().init_process()
 
     def handle_quit(self, sig, frame):
-        eventlet.spawn(super(EventletWorker, self).handle_quit, sig, frame)
+        eventlet.spawn(super().handle_quit, sig, frame)
 
     def handle_usr1(self, sig, frame):
-        eventlet.spawn(super(EventletWorker, self).handle_usr1, sig, frame)
+        eventlet.spawn(super().handle_usr1, sig, frame)
 
     def timeout_ctx(self):
         return eventlet.Timeout(self.cfg.keepalive or None, False)
 
     def handle(self, listener, client, addr):
         if self.cfg.is_ssl:
-            client = eventlet.wrap_ssl(client, server_side=True,
-                                       **self.cfg.ssl_options)
-        super(EventletWorker, self).handle(listener, client, addr)
+            client = ssl_wrap_socket(client, self.cfg)
+        super().handle(listener, client, addr)
 
     def run(self):
         acceptors = []
@@ -132,6 +173,7 @@ class EventletWorker(AsyncWorker):
             eventlet.sleep(1.0)
             self.notify()
 
+        t = None
        try:
             with eventlet.Timeout(self.cfg.graceful_timeout) as t:
                 for a in acceptors:
@@ -3,47 +3,32 @@
 # This file is part of gunicorn released under the MIT license.
 # See the NOTICE for more information.
 
-import errno
 import os
 import sys
 from datetime import datetime
 from functools import partial
 import time
 
-_socket = __import__("socket")
-
-# workaround on osx, disable kqueue
-if sys.platform == "darwin":
-    os.environ['EVENT_NOKQUEUE'] = "1"
-
 try:
     import gevent
 except ImportError:
-    raise RuntimeError("You need gevent installed to use this worker.")
-else:
+    raise RuntimeError("gevent worker requires gevent 1.4 or higher")
+
+from packaging.version import parse as parse_version
+if parse_version(gevent.__version__) < parse_version('1.4'):
+    raise RuntimeError("gevent worker requires gevent 1.4 or higher")
 
 from gevent.pool import Pool
 from gevent.server import StreamServer
-from gevent.socket import wait_write, socket
-from gevent import pywsgi
+from gevent import hub, monkey, socket, pywsgi
 
 import gunicorn
 from gunicorn.http.wsgi import base_environ
+from gunicorn.sock import ssl_context
 from gunicorn.workers.base_async import AsyncWorker
 
 VERSION = "gevent/%s gunicorn/%s" % (gevent.__version__, gunicorn.__version__)
 
-
-def _gevent_sendfile(fdout, fdin, offset, nbytes):
-    while True:
-        try:
-            return os.sendfile(fdout, fdin, offset, nbytes)
-        except OSError as e:
-            if e.args[0] == errno.EAGAIN:
-                wait_write(fdout)
-            else:
-                raise
-
-
-def patch_sendfile():
-    setattr(os, "sendfile", _gevent_sendfile)
 
 
 class GeventWorker(AsyncWorker):
@@ -51,27 +36,17 @@ class GeventWorker(AsyncWorker):
     wsgi_handler = None
 
     def patch(self):
-        from gevent import monkey
-        monkey.noisy = False
-
-        # if the new version is used make sure to patch subprocess
-        if gevent.version_info[0] == 0:
-            monkey.patch_all()
-        else:
-            monkey.patch_all(subprocess=True)
-
-        # monkey patch sendfile to make it none blocking
-        patch_sendfile()
+        monkey.patch_all()
 
         # patch sockets
         sockets = []
         for s in self.sockets:
-            sockets.append(socket(s.FAMILY, _socket.SOCK_STREAM,
-                fileno=s.sock.fileno()))
+            sockets.append(socket.socket(s.FAMILY, socket.SOCK_STREAM,
+                                         fileno=s.sock.fileno()))
         self.sockets = sockets
 
     def notify(self):
-        super(GeventWorker, self).notify()
+        super().notify()
         if self.ppid != os.getppid():
             self.log.info("Parent changed, shutting down: %s", self)
             sys.exit(0)
@@ -84,7 +59,7 @@ class GeventWorker(AsyncWorker):
 
         ssl_args = {}
         if self.cfg.is_ssl:
-            ssl_args = dict(server_side=True, **self.cfg.ssl_options)
+            ssl_args = {"ssl_context": ssl_context(self.cfg)}
 
         for s in self.sockets:
             s.setblocking(1)
@@ -102,6 +77,8 @@ class GeventWorker(AsyncWorker):
             else:
                 hfun = partial(self.handle, s)
                 server = StreamServer(s, handle=hfun, spawn=pool, **ssl_args)
+                if self.cfg.workers > 1:
+                    server.max_accept = 1
 
             server.start()
             servers.append(server)
@@ -134,22 +111,21 @@ class GeventWorker(AsyncWorker):
                 gevent.sleep(1.0)
 
             # Force kill all active the handlers
-            self.log.warning("Worker graceful timeout (pid:%s)" % self.pid)
+            self.log.warning("Worker graceful timeout (pid:%s)", self.pid)
             for server in servers:
                 server.stop(timeout=1)
-        except:
+        except Exception:
             pass
 
     def handle(self, listener, client, addr):
         # Connected socket timeout defaults to socket.getdefaulttimeout().
         # This forces to blocking mode.
         client.setblocking(1)
-        super(GeventWorker, self).handle(listener, client, addr)
+        super().handle(listener, client, addr)
 
     def handle_request(self, listener_name, req, sock, addr):
         try:
-            super(GeventWorker, self).handle_request(listener_name, req, sock,
-                                                     addr)
+            super().handle_request(listener_name, req, sock, addr)
         except gevent.GreenletExit:
             pass
         except SystemExit:
@ -158,41 +134,17 @@ class GeventWorker(AsyncWorker):
def handle_quit(self, sig, frame): def handle_quit(self, sig, frame):
# Move this out of the signal handler so we can use # Move this out of the signal handler so we can use
# blocking calls. See #1126 # blocking calls. See #1126
gevent.spawn(super(GeventWorker, self).handle_quit, sig, frame) gevent.spawn(super().handle_quit, sig, frame)
def handle_usr1(self, sig, frame): def handle_usr1(self, sig, frame):
# Make the gevent workers handle the usr1 signal # Make the gevent workers handle the usr1 signal
# by deferring to a new greenlet. See #1645 # by deferring to a new greenlet. See #1645
gevent.spawn(super(GeventWorker, self).handle_usr1, sig, frame) gevent.spawn(super().handle_usr1, sig, frame)
if gevent.version_info[0] == 0: def init_process(self):
self.patch()
def init_process(self): hub.reinit()
# monkey patch here super().init_process()
self.patch()
# reinit the hub
import gevent.core
gevent.core.reinit()
#gevent 0.13 and older doesn't reinitialize dns for us after forking
#here's the workaround
gevent.core.dns_shutdown(fail_requests=1)
gevent.core.dns_init()
super(GeventWorker, self).init_process()
else:
def init_process(self):
# monkey patch here
self.patch()
# reinit the hub
from gevent import hub
hub.reinit()
# then initialize the process
super(GeventWorker, self).init_process()
class GeventResponse(object): class GeventResponse(object):
@ -222,7 +174,7 @@ class PyWSGIHandler(pywsgi.WSGIHandler):
self.server.log.access(resp, req_headers, self.environ, response_time) self.server.log.access(resp, req_headers, self.environ, response_time)
def get_environ(self): def get_environ(self):
env = super(PyWSGIHandler, self).get_environ() env = super().get_environ()
env['gunicorn.sock'] = self.socket env['gunicorn.sock'] = self.socket
env['RAW_URI'] = self.path env['RAW_URI'] = self.path
return env return env


@@ -9,7 +9,9 @@
 # Keepalive connections are put back in the loop waiting for an event.
 # If no event happen after the keep alive timeout, the connection is
 # closed.
+# pylint: disable=no-else-break

+from concurrent import futures
 import errno
 import os
 import selectors
@@ -25,15 +27,9 @@ from threading import RLock
 from . import base
 from .. import http
 from .. import util
+from .. import sock
 from ..http import wsgi

-try:
-    import concurrent.futures as futures
-except ImportError:
-    raise RuntimeError("""
-    You need to install the 'futures' package to use this worker with this
-    Python version.
-    """)

 class TConn(object):
@@ -45,20 +41,22 @@ class TConn(object):
         self.timeout = None
         self.parser = None
+        self.initialized = False

         # set the socket to non blocking
         self.sock.setblocking(False)

     def init(self):
+        self.initialized = True
         self.sock.setblocking(True)

         if self.parser is None:
             # wrap the socket if needed
             if self.cfg.is_ssl:
-                self.sock = ssl.wrap_socket(self.sock, server_side=True,
-                                            **self.cfg.ssl_options)
+                self.sock = sock.ssl_wrap_socket(self.sock, self.cfg)

             # initialize the parser
-            self.parser = http.RequestParser(self.cfg, self.sock)
+            self.parser = http.RequestParser(self.cfg, self.sock, self.client)

     def set_timeout(self):
         # set the timeout
@@ -71,7 +69,7 @@ class TConn(object):
 class ThreadWorker(base.Worker):

     def __init__(self, *args, **kwargs):
-        super(ThreadWorker, self).__init__(*args, **kwargs)
+        super().__init__(*args, **kwargs)
         self.worker_connections = self.cfg.worker_connections
         self.max_keepalived = self.cfg.worker_connections - self.cfg.threads
         # initialise the pool
@@ -88,13 +86,17 @@ class ThreadWorker(base.Worker):
         if max_keepalived <= 0 and cfg.keepalive:
             log.warning("No keepalived connections can be handled. " +
                         "Check the number of worker connections and threads.")

     def init_process(self):
-        self.tpool = futures.ThreadPoolExecutor(max_workers=self.cfg.threads)
+        self.tpool = self.get_thread_pool()
         self.poller = selectors.DefaultSelector()
         self._lock = RLock()
-        super(ThreadWorker, self).init_process()
+        super().init_process()

+    def get_thread_pool(self):
+        """Override this method to customize how the thread pool is created"""
+        return futures.ThreadPoolExecutor(max_workers=self.cfg.threads)

     def handle_quit(self, sig, frame):
         self.alive = False
@@ -120,24 +122,29 @@ class ThreadWorker(base.Worker):
             sock, client = listener.accept()
             # initialize the connection object
             conn = TConn(self.cfg, sock, client, server)

             self.nr_conns += 1
-            # enqueue the job
-            self.enqueue_req(conn)
+            # wait until socket is readable
+            with self._lock:
+                self.poller.register(conn.sock, selectors.EVENT_READ,
+                                     partial(self.on_client_socket_readable, conn))
         except EnvironmentError as e:
-            if e.errno not in (errno.EAGAIN,
-                    errno.ECONNABORTED, errno.EWOULDBLOCK):
+            if e.errno not in (errno.EAGAIN, errno.ECONNABORTED,
+                               errno.EWOULDBLOCK):
                 raise

-    def reuse_connection(self, conn, client):
+    def on_client_socket_readable(self, conn, client):
         with self._lock:
             # unregister the client from the poller
             self.poller.unregister(client)

-            # remove the connection from keepalive
-            try:
-                self._keep.remove(conn)
-            except ValueError:
-                # race condition
-                return
+            if conn.initialized:
+                # remove the connection from keepalive
+                try:
+                    self._keep.remove(conn)
+                except ValueError:
+                    # race condition
+                    return

         # submit the connection to a worker
         self.enqueue_req(conn)
@@ -170,6 +177,9 @@ class ThreadWorker(base.Worker):
             except KeyError:
                 # already removed by the system, continue
                 pass
+            except ValueError:
+                # already removed by the system continue
+                pass

             # close the socket
             conn.close()
@@ -205,11 +215,11 @@ class ThreadWorker(base.Worker):
                 # check (but do not wait) for finished requests
                 result = futures.wait(self.futures, timeout=0,
                                       return_when=futures.FIRST_COMPLETED)
             else:
                 # wait for a request to finish
                 result = futures.wait(self.futures, timeout=1.0,
                                       return_when=futures.FIRST_COMPLETED)

             # clean up finished requests
             for fut in result.done:
@@ -218,7 +228,7 @@ class ThreadWorker(base.Worker):
             if not self.is_parent_alive():
                 break

-            # hanle keepalive timeouts
+            # handle keepalive timeouts
             self.murder_keepalived()

         self.tpool.shutdown(False)
@@ -239,7 +249,7 @@ class ThreadWorker(base.Worker):
             (keepalive, conn) = fs.result()
             # if the connection should be kept alived add it
             # to the eventloop and record it
-            if keepalive:
+            if keepalive and self.alive:
                 # flag the socket as non blocked
                 conn.sock.setblocking(False)
@@ -250,11 +260,11 @@ class ThreadWorker(base.Worker):
                 # add the socket to the event loop
                 self.poller.register(conn.sock, selectors.EVENT_READ,
-                                     partial(self.reuse_connection, conn))
+                                     partial(self.on_client_socket_readable, conn))
             else:
                 self.nr_conns -= 1
                 conn.close()
-        except:
+        except Exception:
             # an exception happened, make sure to close the
             # socket.
             self.nr_conns -= 1
@@ -286,11 +296,13 @@ class ThreadWorker(base.Worker):
             self.handle_error(req, conn.sock, conn.client, e)
         except EnvironmentError as e:
-            if e.errno not in (errno.EPIPE, errno.ECONNRESET):
+            if e.errno not in (errno.EPIPE, errno.ECONNRESET, errno.ENOTCONN):
                 self.log.exception("Socket error processing request.")
             else:
                 if e.errno == errno.ECONNRESET:
                     self.log.debug("Ignoring connection reset")
+                elif e.errno == errno.ENOTCONN:
+                    self.log.debug("Ignoring socket not connected")
                 else:
                     self.log.debug("Ignoring connection epipe")
         except Exception as e:
@@ -305,15 +317,16 @@ class ThreadWorker(base.Worker):
             self.cfg.pre_request(self, req)
             request_start = datetime.now()
             resp, environ = wsgi.create(req, conn.sock, conn.client,
                                         conn.server, self.cfg)
             environ["wsgi.multithread"] = True
             self.nr += 1

-            if self.alive and self.nr >= self.max_requests:
-                self.log.info("Autorestarting worker after current request.")
+            if self.nr >= self.max_requests:
+                if self.alive:
+                    self.log.info("Autorestarting worker after current request.")
+                    self.alive = False
                 resp.force_close()
-                self.alive = False

-            if not self.cfg.keepalive:
+            if not self.alive or not self.cfg.keepalive:
                 resp.force_close()
             elif len(self._keep) >= self.max_keepalived:
                 resp.force_close()
@@ -327,9 +340,9 @@ class ThreadWorker(base.Worker):
                     resp.write(item)

                 resp.close()
-            finally:
                 request_time = datetime.now() - request_start
                 self.log.access(resp, req, environ, request_time)
+            finally:
                 if hasattr(respiter, "close"):
                     respiter.close()
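The gthread hunk above introduces a `get_thread_pool()` hook so subclasses can supply their own executor instead of the default `ThreadPoolExecutor`. A minimal sketch of the same hook shape — the `NamedPoolWorker` class and its `threads` attribute are illustrative stand-ins, not gunicorn API:

```python
from concurrent import futures


class NamedPoolWorker:
    # Hypothetical stand-in for a ThreadWorker subclass; only the hook matters.
    threads = 4

    def get_thread_pool(self):
        # Same contract as the new hook: return an Executor-compatible pool.
        return futures.ThreadPoolExecutor(
            max_workers=self.threads,
            thread_name_prefix="gthread",  # easier to identify in thread dumps
        )


worker = NamedPoolWorker()
pool = worker.get_thread_pool()
result = pool.submit(lambda: 21 * 2).result()  # → 42
```

Anything with the `Executor` interface (`submit`, `shutdown`) works here, which is the point of making the pool construction overridable.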


@@ -3,7 +3,6 @@
 # This file is part of gunicorn released under the MIT license.
 # See the NOTICE for more information.

-import copy
 import os
 import sys
@@ -17,6 +16,7 @@ from tornado.ioloop import IOLoop, PeriodicCallback
 from tornado.wsgi import WSGIContainer
 from gunicorn.workers.base import Worker
 from gunicorn import __version__ as gversion
+from gunicorn.sock import ssl_context

 # Tornado 5.0 updated its IOLoop, and the `io_loop` arguments to many
@@ -44,7 +44,7 @@ class TornadoWorker(Worker):
     def handle_exit(self, sig, frame):
         if self.alive:
-            super(TornadoWorker, self).handle_exit(sig, frame)
+            super().handle_exit(sig, frame)

     def handle_request(self):
         self.nr += 1
@@ -84,7 +84,7 @@ class TornadoWorker(Worker):
         # should create its own IOLoop. We should clear current IOLoop
         # if exists before os.fork.
         IOLoop.clear_current()
-        super(TornadoWorker, self).init_process()
+        super().init_process()

     def run(self):
         self.ioloop = IOLoop.instance()
@@ -105,8 +105,13 @@ class TornadoWorker(Worker):
         # instance of tornado.web.Application or is an
         # instance of tornado.wsgi.WSGIApplication
         app = self.wsgi
-        if not isinstance(app, tornado.web.Application) or \
-           isinstance(app, tornado.wsgi.WSGIApplication):
+
+        if tornado.version_info[0] < 6:
+            if not isinstance(app, tornado.web.Application) or \
+               isinstance(app, tornado.wsgi.WSGIApplication):
+                app = WSGIContainer(app)
+        elif not isinstance(app, WSGIContainer) and \
+                not isinstance(app, tornado.web.Application):
             app = WSGIContainer(app)

         # Monkey-patching HTTPConnection.finish to count the
@@ -135,16 +140,11 @@ class TornadoWorker(Worker):
             server_class = _HTTPServer

         if self.cfg.is_ssl:
-            _ssl_opt = copy.deepcopy(self.cfg.ssl_options)
-            # tornado refuses initialization if ssl_options contains following
-            # options
-            del _ssl_opt["do_handshake_on_connect"]
-            del _ssl_opt["suppress_ragged_eofs"]
             if TORNADO5:
-                server = server_class(app, ssl_options=_ssl_opt)
+                server = server_class(app, ssl_options=ssl_context(self.cfg))
             else:
                 server = server_class(app, io_loop=self.ioloop,
-                                      ssl_options=_ssl_opt)
+                                      ssl_options=ssl_context(self.cfg))
         else:
             if TORNADO5:
                 server = server_class(app)

View File

@@ -12,13 +12,16 @@ import socket
 import ssl
 import sys

-import gunicorn.http as http
-import gunicorn.http.wsgi as wsgi
-import gunicorn.util as util
-import gunicorn.workers.base as base
+from gunicorn import http
+from gunicorn.http import wsgi
+from gunicorn import sock
+from gunicorn import util
+from gunicorn.workers import base

 class StopWaiting(Exception):
-    """ exception raised to stop waiting for a connnection """
+    """ exception raised to stop waiting for a connection """

 class SyncWorker(base.Worker):
@@ -72,7 +75,7 @@ class SyncWorker(base.Worker):
             except EnvironmentError as e:
                 if e.errno not in (errno.EAGAIN, errno.ECONNABORTED,
                                    errno.EWOULDBLOCK):
                     raise

             if not self.is_parent_alive():
@@ -101,7 +104,7 @@ class SyncWorker(base.Worker):
                     self.accept(listener)
             except EnvironmentError as e:
                 if e.errno not in (errno.EAGAIN, errno.ECONNABORTED,
                                    errno.EWOULDBLOCK):
                     raise

             if not self.is_parent_alive():
@@ -126,10 +129,8 @@ class SyncWorker(base.Worker):
         req = None
         try:
             if self.cfg.is_ssl:
-                client = ssl.wrap_socket(client, server_side=True,
-                                         **self.cfg.ssl_options)
+                client = sock.ssl_wrap_socket(client, self.cfg)

-            parser = http.RequestParser(self.cfg, client)
+            parser = http.RequestParser(self.cfg, client, addr)
             req = next(parser)
             self.handle_request(listener, req, client, addr)
         except http.errors.NoMoreData as e:
@@ -144,14 +145,16 @@ class SyncWorker(base.Worker):
             self.log.debug("Error processing SSL request.")
             self.handle_error(req, client, addr, e)
         except EnvironmentError as e:
-            if e.errno not in (errno.EPIPE, errno.ECONNRESET):
+            if e.errno not in (errno.EPIPE, errno.ECONNRESET, errno.ENOTCONN):
                 self.log.exception("Socket error processing request.")
             else:
                 if e.errno == errno.ECONNRESET:
                     self.log.debug("Ignoring connection reset")
+                elif e.errno == errno.ENOTCONN:
+                    self.log.debug("Ignoring socket not connected")
                 else:
                     self.log.debug("Ignoring EPIPE")
-        except Exception as e:
+        except BaseException as e:
             self.handle_error(req, client, addr, e)
         finally:
             util.close(client)
@@ -163,7 +166,7 @@ class SyncWorker(base.Worker):
             self.cfg.pre_request(self, req)
             request_start = datetime.now()
             resp, environ = wsgi.create(req, client, addr,
                                         listener.getsockname(), self.cfg)
             # Force the connection closed until someone shows
             # a buffering proxy that supports Keep-Alive to
             # the backend.
@@ -180,9 +183,9 @@ class SyncWorker(base.Worker):
                 for item in respiter:
                     resp.write(item)
                 resp.close()
-            finally:
                 request_time = datetime.now() - request_start
                 self.log.access(resp, req, environ, request_time)
+            finally:
                 if hasattr(respiter, "close"):
                     respiter.close()
         except EnvironmentError:
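Both the sync and gthread hunks now treat `ENOTCONN` alongside `EPIPE` and `ECONNRESET` as a benign peer disconnect rather than an error worth a full traceback. A small self-contained sketch of that errno triage — the `classify` helper is illustrative, not gunicorn API:

```python
import errno

# Errno values the workers consider a benign peer disconnect.
BENIGN = (errno.EPIPE, errno.ECONNRESET, errno.ENOTCONN)


def classify(exc):
    # Mirrors the workers' branching: unexpected socket errors get a full
    # exception log; the benign family is only logged at debug level.
    if exc.errno not in BENIGN:
        return "log-exception"
    if exc.errno == errno.ECONNRESET:
        return "ignore-connection-reset"
    if exc.errno == errno.ENOTCONN:
        return "ignore-socket-not-connected"
    return "ignore-epipe"


print(classify(OSError(errno.ENOTCONN, "Socket is not connected")))
```

The new `elif` branch in each worker is exactly the `ENOTCONN` case here; everything else in the benign tuple was already handled before this change.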


@@ -22,17 +22,21 @@ class WorkerTmp(object):
         if fdir and not os.path.isdir(fdir):
             raise RuntimeError("%s doesn't exist. Can't create workertmp." % fdir)
         fd, name = tempfile.mkstemp(prefix="wgunicorn-", dir=fdir)
-        # allows the process to write to the file
-        util.chown(name, cfg.uid, cfg.gid)
         os.umask(old_umask)

-        # unlink the file so we don't leak tempory files
+        # change the owner and group of the file if the worker will run as
+        # a different user or group, so that the worker can modify the file
+        if cfg.uid != os.geteuid() or cfg.gid != os.getegid():
+            util.chown(name, cfg.uid, cfg.gid)
+
+        # unlink the file so we don't leak temporary files
         try:
             if not IS_CYGWIN:
                 util.unlink(name)
-            self._tmp = os.fdopen(fd, 'w+b', 1)
-        except:
+            # In Python 3.8, open() emits RuntimeWarning if buffering=1 for binary mode.
+            # Because we never write to this file, pass 0 to switch buffering off.
+            self._tmp = os.fdopen(fd, 'w+b', 0)
+        except Exception:
             os.close(fd)
             raise
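The workertmp hunk swaps `os.fdopen(fd, 'w+b', 1)` for `buffering=0` because CPython 3.8+ warns that line buffering is unsupported in binary mode and silently falls back to a full buffer. A quick self-contained check of that behaviour, assuming Python 3.8 or newer (the temp file here is a throwaway, not the worker heartbeat file):

```python
import os
import tempfile
import warnings

fd, name = tempfile.mkstemp(prefix="demo-")
try:
    # buffering=1 (line buffering) in binary mode triggers a RuntimeWarning
    # on Python 3.8+; the interpreter uses the default buffer size instead.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        with open(os.dup(fd), "w+b", buffering=1):
            pass
    warned = any(issubclass(w.category, RuntimeWarning) for w in caught)

    # buffering=0 (unbuffered) is accepted silently in binary mode, which
    # is why the heartbeat file now opens with it.
    with os.fdopen(fd, "w+b", 0):
        pass
finally:
    os.unlink(name)

print(warned)
```

The `os.dup(fd)` is only there so the demo can open the descriptor twice; the worker opens its descriptor once.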

pyproject.toml Normal file

@ -0,0 +1,80 @@
[build-system]
requires = ["setuptools>=61.2"]
build-backend = "setuptools.build_meta"
[project]
name = "gunicorn"
authors = [{name = "Benoit Chesneau", email = "benoitc@gunicorn.org"}]
license = {text = "MIT"}
description = "WSGI HTTP Server for UNIX"
readme = "README.rst"
classifiers = [
"Development Status :: 5 - Production/Stable",
"Environment :: Other Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Internet",
"Topic :: Utilities",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: WSGI",
"Topic :: Internet :: WWW/HTTP :: WSGI :: Server",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
]
requires-python = ">=3.5"
dependencies = [
'importlib_metadata; python_version<"3.8"',
"packaging",
]
dynamic = ["version"]
[project.urls]
Homepage = "https://gunicorn.org"
Documentation = "https://docs.gunicorn.org"
"Issue tracker" = "https://github.com/benoitc/gunicorn/issues"
"Source code" = "https://github.com/benoitc/gunicorn"
[project.optional-dependencies]
gevent = ["gevent>=1.4.0"]
eventlet = ["eventlet>=0.24.1"]
tornado = ["tornado>=0.2"]
gthread = []
setproctitle = ["setproctitle"]
testing = [
"gevent",
"eventlet",
"cryptography",
"coverage",
"pytest",
"pytest-cov",
]
[tool.pytest.ini_options]
norecursedirs = ["examples", "lib", "local", "src"]
testpaths = ["tests/"]
addopts = "--assert=plain --cov=gunicorn --cov-report=xml"
[tool.setuptools]
zip-safe = false
include-package-data = true
license-files = ["LICENSE"]
[tool.setuptools.packages]
find = {namespaces = false}
[tool.setuptools.dynamic]
version = {attr = "gunicorn.__version__"}


@ -1,4 +1,6 @@
-aiohttp
-coverage>=4.0,<4.4 # TODO: https://github.com/benoitc/gunicorn/issues/1548
+gevent
+eventlet
+cryptography
+coverage
 pytest
-pytest-cov==2.5.1
+pytest-cov


@ -1,16 +0,0 @@
%{__python} setup.py install --skip-build --root=$RPM_BUILD_ROOT
# Build the HTML documentation using the default theme.
%{__python} setup.py build_sphinx
%if ! (0%{?fedora} > 12 || 0%{?rhel} > 5)
%{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}
%{!?python_sitearch: %global python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib(1))")}
%endif
INSTALLED_FILES="\
%{python_sitelib}/*
%{_bindir}/*
%doc LICENSE NOTICE README.rst THANKS build/sphinx/html examples/example_config.py
"
echo "$INSTALLED_FILES" > INSTALLED_FILES


@ -1,16 +1,4 @@
-[bdist_rpm]
-build-requires = python2-devel python-setuptools python-sphinx
-requires = python-setuptools >= 0.6c6 python-ctypes
-install_script = rpm/install
-group = System Environment/Daemons

 [tool:pytest]
 norecursedirs = examples lib local src
 testpaths = tests/
 addopts = --assert=plain --cov=gunicorn --cov-report=xml

-[wheel]
-universal = 1

-[metadata]
-license_file = LICENSE

setup.py

@ -1,113 +0,0 @@
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
import os
import sys
from setuptools import setup, find_packages
from setuptools.command.test import test as TestCommand
from gunicorn import __version__
CLASSIFIERS = [
'Development Status :: 4 - Beta',
'Environment :: Other Environment',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: MacOS :: MacOS X',
'Operating System :: POSIX',
'Programming Language :: Python',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3 :: Only',
'Topic :: Internet',
'Topic :: Utilities',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Internet :: WWW/HTTP',
'Topic :: Internet :: WWW/HTTP :: WSGI',
'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content']
# read long description
with open(os.path.join(os.path.dirname(__file__), 'README.rst')) as f:
long_description = f.read()
# read dev requirements
fname = os.path.join(os.path.dirname(__file__), 'requirements_test.txt')
with open(fname) as f:
tests_require = [l.strip() for l in f.readlines()]
class PyTestCommand(TestCommand):
user_options = [
("cov", None, "measure coverage")
]
def initialize_options(self):
TestCommand.initialize_options(self)
self.cov = None
def finalize_options(self):
TestCommand.finalize_options(self)
self.test_args = ['tests']
if self.cov:
self.test_args += ['--cov', 'gunicorn']
self.test_suite = True
def run_tests(self):
import pytest
errno = pytest.main(self.test_args)
sys.exit(errno)
install_requires = [
# We depend on functioning pkg_resources.working_set.add_entry() and
# pkg_resources.load_entry_point(). These both work as of 3.0 which
# is the first version to support Python 3.4 which we require as a
# floor.
'setuptools>=3.0',
]
extra_require = {
'gevent': ['gevent>=0.13'],
'eventlet': ['eventlet>=0.9.7'],
'tornado': ['tornado>=0.2'],
'gthread': [],
}
setup(
name='gunicorn',
version=__version__,
description='WSGI HTTP Server for UNIX',
long_description=long_description,
author='Benoit Chesneau',
author_email='benoitc@e-engura.com',
license='MIT',
url='http://gunicorn.org',
python_requires='>=3.4',
install_requires=install_requires,
classifiers=CLASSIFIERS,
zip_safe=False,
packages=find_packages(exclude=['examples', 'tests']),
include_package_data=True,
tests_require=tests_require,
cmdclass={'test': PyTestCommand},
entry_points="""
[console_scripts]
gunicorn=gunicorn.app.wsgiapp:run
gunicorn_paster=gunicorn.app.pasterapp:run
[paste.server_runner]
main=gunicorn.app.pasterapp:paste_server
""",
extras_require=extra_require,
)


@ -0,0 +1 @@
wsgi_app = "app1:app1"


@ -1,2 +1,2 @@
--blargh /foo HTTP/1.1\r\n
+GET\n/\nHTTP/1.1\r\n
 \r\n


@ -1,2 +1,2 @@
-from gunicorn.http.errors import InvalidRequestMethod
-request = InvalidRequestMethod
+from gunicorn.http.errors import InvalidRequestLine
+request = InvalidRequestLine


@ -0,0 +1,2 @@
bla:rgh /foo HTTP/1.1\r\n
\r\n
