urllib3 Documentation


Highlights

  • Re-use the same socket connection for multiple requests, with optional client-side certificate verification. See: HTTPConnectionPool and HTTPSConnectionPool
  • File posting with multipart encoding. See: encode_multipart_formdata()
  • Built-in redirection and retries (optional).
  • Supports gzip and deflate decoding. See: decode_gzip() and decode_deflate()
  • Thread-safe and sanity-safe.
  • Proxy over HTTP or SOCKS.
  • Tested on Python 2.6+ and Python 3.2+, 100% unit test coverage.
  • Works with AppEngine, gevent, eventlet, and the standard library io module.
  • Small and easy to understand codebase perfect for extending and building upon. For a simplified abstraction, have a look at Requests which is also powered by urllib3.

Getting Started


pip install urllib3 or fetch the latest source from github.com/shazow/urllib3.


>>> import urllib3
>>> http = urllib3.PoolManager()
>>> r = http.request('GET', 'http://example.com/')
>>> r.status
200
>>> r.headers['server']
'ECS (iad/182A)'
>>> b'data: ' + r.data
b'data: ...'

By default, urllib3 does not verify your HTTPS requests. You’ll need to supply a root certificate bundle, or use certifi:

>>> import urllib3, certifi
>>> http = urllib3.PoolManager(cert_reqs='CERT_REQUIRED', ca_certs=certifi.where())
>>> r = http.request('GET', 'https://insecure.com/')
Traceback (most recent call last):
SSLError: hostname 'insecure.com' doesn't match 'svn.nmap.org'

For more on making secure SSL/TLS HTTPS requests, read the Security section.

urllib3’s responses respect the io framework from Python’s standard library, allowing use of these standard objects for purposes like buffering:

>>> import io, urllib3
>>> http = urllib3.PoolManager()
>>> r = http.urlopen('GET','http://example.com/', preload_content=False)
>>> b = io.BufferedReader(r, 2048)
>>> firstpart = b.read(100)
>>> # ... your internet connection fails momentarily ...
>>> secondpart = b.read()

Upgrading & Versioning

urllib3 uses a compatibility-based versioning scheme (let’s call it compatver). For the user, each version component indicates the kind of decision required when upgrading.

Given a version A.B.C:

C. Strictly backwards-compatible, usually a bug-fix. Always upgrade.

B. Possibly partially incompatible, usually a new feature or a minor API improvement. Read the changelog and upgrade when ready.

A. Major rewrite and possibly breaks everything. Not really an upgrade, basically a new library under the same namespace, decide if you want to switch.

For example, when going from urllib3 v1.2.3 to v1.2.4, you should always upgrade without hesitation. When going from v1.2 to v1.3, you should read the changes to make sure they’re not going to affect you.
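The decision table above can be sketched in plain Python. The helper name and the (A, B, C) tuple representation are illustrative only, not part of urllib3:

```python
# Hypothetical helper illustrating the compatver rules described above.
# Versions are modeled as (A, B, C) tuples; this is not a urllib3 API.
def upgrade_advice(current, candidate):
    (a1, b1, _c1), (a2, b2, _c2) = current, candidate
    if a1 != a2:
        # Major version changed: effectively a new library.
        return "major rewrite: treat as a new library and decide if you want to switch"
    if b1 != b2:
        # Minor version changed: possibly partially incompatible.
        return "read the changelog and upgrade when ready"
    # Only the patch version changed: strictly backwards-compatible.
    return "always upgrade"

print(upgrade_advice((1, 2, 3), (1, 2, 4)))  # always upgrade
print(upgrade_advice((1, 2, 0), (1, 3, 0)))  # read the changelog and upgrade when ready
```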


Components

urllib3 tries to strike a fine balance between power, extendability, and sanity. To achieve this, the codebase is a collection of small reusable utilities and abstractions composed together in a few helpful layers.


The highest level is the PoolManager(...).

The PoolManager will take care of reusing connections for you whenever you request the same host. This should cover most scenarios without significant loss of efficiency, but you can always drop down to a lower level component for more granular control.

>>> import urllib3
>>> http = urllib3.PoolManager(10)
>>> r1 = http.request('GET', 'http://example.com/')
>>> r2 = http.request('GET', 'http://httpbin.org/')
>>> r3 = http.request('GET', 'http://httpbin.org/get')
>>> len(http.pools)
2

A PoolManager is a proxy for a collection of ConnectionPool objects. They both inherit from RequestMethods to make sure that their API is similar, so that instances of either can be passed around interchangeably.


HTTP Proxy

The ProxyManager is an HTTP proxy-aware subclass of PoolManager. It produces a single HTTPConnectionPool instance for all HTTP connections and individual per-server:port HTTPSConnectionPool instances for tunnelled HTTPS connections:

>>> proxy = urllib3.ProxyManager('http://localhost:3128/')
>>> r1 = proxy.request('GET', 'http://google.com/')
>>> r2 = proxy.request('GET', 'http://httpbin.org/')
>>> len(proxy.pools)
1
>>> r3 = proxy.request('GET', 'https://httpbin.org/')
>>> r4 = proxy.request('GET', 'https://twitter.com/')
>>> len(proxy.pools)
3


The contrib module includes support for a SOCKSProxyManager.


The next layer is the ConnectionPool(...).

The HTTPConnectionPool and HTTPSConnectionPool classes allow you to define a pool of connections to a single host and make requests against this pool with automatic connection reuse and thread safety.

When the ssl module is available, then HTTPSConnectionPool objects can be configured to check SSL certificates against specific provided certificate authorities.
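For instance, a verifying HTTPS pool can be constructed directly. This sketch assumes certifi is installed (as in the Getting Started example); constructing the pool opens no connections:

```python
import certifi
import urllib3

# Build an HTTPS pool that requires certificate verification against
# certifi's CA bundle. No sockets are opened until a request is made.
pool = urllib3.HTTPSConnectionPool(
    'example.com',
    port=443,
    cert_reqs='CERT_REQUIRED',
    ca_certs=certifi.where(),
)
print(pool.host, pool.port)  # example.com 443
```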

>>> import urllib3
>>> conn = urllib3.connection_from_url('http://httpbin.org/')
>>> r1 = conn.request('GET', 'http://httpbin.org/')
>>> r2 = conn.request('GET', '/user-agent')
>>> r3 = conn.request('GET', 'http://example.com')
Traceback (most recent call last):
urllib3.exceptions.HostChangedError: HTTPConnectionPool(host='httpbin.org', port=None): Tried to open a foreign host with url: http://example.com

Again, a ConnectionPool is a pool of connections to a specific host. Trying to access a different host through the same pool will raise a HostChangedError exception unless you specify assert_same_host=False. Do this at your own risk as the outcome is completely dependent on the behaviour of the host server.

If you need to access multiple hosts and don’t want to manage your own collection of ConnectionPool objects, then you should use a PoolManager.

A ConnectionPool is composed of a collection of httplib.HTTPConnection objects.
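A pool can also be built directly rather than via connection_from_url. In this offline sketch (no requests are made), maxsize caps how many connections the pool keeps around for reuse:

```python
import urllib3

# One pool per host; maxsize limits how many connections are retained
# for reuse across requests. No sockets are opened at construction time.
pool = urllib3.HTTPConnectionPool('httpbin.org', port=80, maxsize=2)
print(pool.host, pool.port)  # httpbin.org 80
```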


Timeout

A timeout can be set to abort socket operations on individual connections after the specified duration. The timeout can be defined as a float or an instance of Timeout which gives more granular configuration over how much time is allowed for different stages of the request. This can be set for the entire pool or per-request.

>>> from urllib3 import PoolManager, Timeout

>>> # Manager with 3 seconds combined timeout.
>>> http = PoolManager(timeout=3.0)
>>> r = http.request('GET', 'http://httpbin.org/delay/1')

>>> # Manager with 2 second timeout for the read phase, no limit for the rest.
>>> http = PoolManager(timeout=Timeout(read=2.0))
>>> r = http.request('GET', 'http://httpbin.org/delay/1')

>>> # Manager with no timeout but a request with a timeout of 1 second for
>>> # the connect phase and 2 seconds for the read phase.
>>> http = PoolManager()
>>> r = http.request('GET', 'http://httpbin.org/delay/1', timeout=Timeout(connect=1.0, read=2.0))

>>> # Same Manager but request with a 5 second total timeout.
>>> r = http.request('GET', 'http://httpbin.org/delay/1', timeout=Timeout(total=5.0))

See the Timeout definition for more details.


Retry

Retries can be configured by passing an instance of Retry, or disabled by passing False, to the retries parameter.

Redirects are also considered to be a subset of retries but can be configured or disabled individually.

>>> from urllib3 import PoolManager, Retry

>>> # Allow 3 retries total for all requests in this pool. These are the same:
>>> http = PoolManager(retries=3)
>>> http = PoolManager(retries=Retry(3))
>>> http = PoolManager(retries=Retry(total=3))

>>> r = http.request('GET', 'http://httpbin.org/redirect/2')
>>> # r.status -> 200

>>> # Disable redirects for this request.
>>> r = http.request('GET', 'http://httpbin.org/redirect/2', retries=Retry(3, redirect=False))
>>> # r.status -> 302

>>> # No total limit, but only do 5 connect retries, for this request.
>>> r = http.request('GET', 'http://httpbin.org/', retries=Retry(connect=5))

See the Retry definition for more details.


Stream

You may also stream your response and consume the data as it arrives (e.g. when using transfer-encoding: chunked). In this case, the stream() method will return a generator.

>>> import urllib3
>>> http = urllib3.PoolManager()

>>> r = http.request("GET", "http://httpbin.org/stream/3")
>>> r.getheader("transfer-encoding")
'chunked'

>>> for chunk in r.stream():
...     print(chunk)
{"url": "http://httpbin.org/stream/3", ..., "id": 0, ...}
{"url": "http://httpbin.org/stream/3", ..., "id": 1, ...}
{"url": "http://httpbin.org/stream/3", ..., "id": 2, ...}
>>> r.closed
True

Completely consuming the stream will auto-close the response and release the connection back to the pool. If you’re only partially consuming a stream, make sure to manually call r.close() on the response.


Helpers

At the very core, just like its predecessors, urllib3 is built on top of httplib – the lowest level HTTP library included in the Python standard library.

To aid the limited functionality of the httplib module, urllib3 provides various helper methods which are used with the higher level components but can also be used independently.
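One such helper is parse_url() from urllib3.util, which splits a URL into its components without touching the network:

```python
from urllib3.util import parse_url

# Decompose a URL into its parts; no request is made.
u = parse_url('http://httpbin.org:8080/get?x=1')
print(u.scheme)  # http
print(u.host)    # httpbin.org
print(u.port)    # 8080
print(u.path)    # /get
print(u.query)   # x=1
```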

Contrib Modules

These modules implement various extra features that may not be ready for prime time or that require optional third-party dependencies.


Contributing

  1. Check for open issues or open a fresh issue to start a discussion around a feature idea or a bug. There is a Contributor Friendly tag for issues that should be ideal for people who are not very familiar with the codebase yet.
  2. Fork the urllib3 repository on GitHub to start making your changes.
  3. Write a test which shows that the bug was fixed or that the feature works as expected.
  4. Send a pull request and bug the maintainer until it gets merged and published. :) Make sure to add yourself to CONTRIBUTORS.txt.


Sponsorship

Please consider sponsoring urllib3 development, especially if your company benefits from this library.

We welcome your patronage on Bountysource:

Your contribution will go towards adding new features to urllib3 and making sure all functionality continues to meet our high quality standards.

Project Grant

A grant for contiguous full-time development has the biggest impact for progress. Periods of 3 to 10 days allow a contributor to tackle substantial complex issues which are otherwise left to linger until somebody can’t afford to not fix them.

Contact @shazow to arrange a grant for a core contributor.

Huge thanks to all the companies and individuals who financially contributed to the development of urllib3. Please send a PR if you’ve donated and would like to be listed.