>>> import urllib3
>>> http = urllib3.PoolManager()
>>> r = http.request('GET', 'http://example.com/')
>>> r.status
200
>>> r.headers['server']
'ECS (iad/182A)'
>>> 'data: ' + r.data
'data: ...'
By default, urllib3 does not verify your HTTPS requests. You’ll need to supply a root certificate bundle, or use certifi:
>>> import urllib3, certifi
>>> http = urllib3.PoolManager(cert_reqs='CERT_REQUIRED', ca_certs=certifi.where())
>>> r = http.request('GET', 'https://insecure.com/')
Traceback (most recent call last):
  ...
SSLError: hostname 'insecure.com' doesn't match 'svn.nmap.org'
For more on making secure SSL/TLS HTTPS requests, read the Security section.
urllib3’s responses respect the io framework from Python’s standard library, allowing use of these standard objects for purposes like buffering:
>>> import io
>>> import urllib3
>>> http = urllib3.PoolManager()
>>> r = http.urlopen('GET', 'http://example.com/', preload_content=False)
>>> b = io.BufferedReader(r, 2048)
>>> firstpart = b.read(100)
>>> # ... your internet connection fails momentarily ...
>>> secondpart = b.read()
urllib3 tries to strike a fine balance between power, extendability, and sanity. To achieve this, the codebase is a collection of small reusable utilities and abstractions composed together in a few helpful layers.
The highest level is the PoolManager(...).
The PoolManager will take care of reusing connections for you whenever you request the same host. This should cover most scenarios without significant loss of efficiency, but you can always drop down to a lower level component for more granular control.
>>> import urllib3
>>> http = urllib3.PoolManager(10)
>>> r1 = http.request('GET', 'http://example.com/')
>>> r2 = http.request('GET', 'http://httpbin.org/')
>>> r3 = http.request('GET', 'http://httpbin.org/get')
>>> len(http.pools)
2
A PoolManager is a proxy for a collection of ConnectionPool objects. They both inherit from RequestMethods to make sure that their API is similar, so that instances of either can be passed around interchangeably.
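As a sketch of that interchangeability, a helper written against the shared RequestMethods API accepts either kind of object (fetch_status below is a hypothetical example, not part of urllib3):

```python
import urllib3

# Both PoolManager and HTTPConnectionPool inherit from RequestMethods,
# so a helper written against that shared API works with either one:
def fetch_status(client, url):
    """Issue a GET via any object exposing the RequestMethods API."""
    return client.request('GET', url).status

manager = urllib3.PoolManager()
pool = urllib3.HTTPConnectionPool('httpbin.org')

# Both expose the same request() method inherited from RequestMethods:
print(hasattr(manager, 'request'), hasattr(pool, 'request'))   # True True
```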
The ProxyManager is an HTTP proxy-aware subclass of PoolManager. It produces a single HTTPConnectionPool instance for all HTTP connections and individual per-server:port HTTPSConnectionPool instances for tunnelled HTTPS connections:
>>> proxy = urllib3.ProxyManager('http://localhost:3128/')
>>> r1 = proxy.request('GET', 'http://google.com/')
>>> r2 = proxy.request('GET', 'http://httpbin.org/')
>>> len(proxy.pools)
1
>>> r3 = proxy.request('GET', 'https://httpbin.org/')
>>> r4 = proxy.request('GET', 'https://twitter.com/')
>>> len(proxy.pools)
3
The next layer is the ConnectionPool(...).
The HTTPConnectionPool and HTTPSConnectionPool classes allow you to define a pool of connections to a single host and make requests against this pool with automatic connection reusing and thread safety.
>>> import urllib3
>>> conn = urllib3.connection_from_url('http://httpbin.org/')
>>> r1 = conn.request('GET', 'http://httpbin.org/')
>>> r2 = conn.request('GET', '/user-agent')
>>> r3 = conn.request('GET', 'http://example.com')
Traceback (most recent call last):
  ...
urllib3.exceptions.HostChangedError: HTTPConnectionPool(host='httpbin.org', port=None): Tried to open a foreign host with url: http://example.com
Again, a ConnectionPool is a pool of connections to a specific host. Trying to access a different host through the same pool will raise a HostChangedError exception unless you specify assert_same_host=False. Do this at your own risk as the outcome is completely dependent on the behaviour of the host server.
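A small sketch of that behaviour: the same-host check fires before any connection is opened, so the error can be observed without network traffic.

```python
import urllib3
from urllib3.exceptions import HostChangedError

pool = urllib3.HTTPConnectionPool('httpbin.org')

# The same-host check runs before any socket is opened, so this raises
# HostChangedError without touching the network:
try:
    pool.urlopen('GET', 'http://example.com/')
except HostChangedError as e:
    print('refused foreign host:', e.url)

# Passing assert_same_host=False would skip this check, e.g.:
#   pool.urlopen('GET', 'http://example.com/', assert_same_host=False)
# (left as a comment here; the request would still be sent to httpbin.org's
# pool, which is exactly why this is at-your-own-risk territory).
```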
A timeout can be set to abort socket operations on individual connections after the specified duration. The timeout can be defined as a float or an instance of Timeout which gives more granular configuration over how much time is allowed for different stages of the request. This can be set for the entire pool or per-request.
>>> from urllib3 import PoolManager, Timeout

>>> # Manager with 3 seconds combined timeout.
>>> http = PoolManager(timeout=3.0)
>>> r = http.request('GET', 'http://httpbin.org/delay/1')

>>> # Manager with 2 second timeout for the read phase, no limit for the rest.
>>> http = PoolManager(timeout=Timeout(read=2.0))
>>> r = http.request('GET', 'http://httpbin.org/delay/1')

>>> # Manager with no timeout but a request with a timeout of 1 second for
>>> # the connect phase and 2 seconds for the read phase.
>>> http = PoolManager()
>>> r = http.request('GET', 'http://httpbin.org/delay/1', timeout=Timeout(connect=1.0, read=2.0))

>>> # Same Manager but request with a 5 second total timeout.
>>> r = http.request('GET', 'http://httpbin.org/delay/1', timeout=Timeout(total=5.0))
See the Timeout definition for more details.
Retries can be configured by passing an instance of Retry, or disabled by passing False, to the retries parameter.
Redirects are also considered to be a subset of retries but can be configured or disabled individually.
>>> from urllib3 import PoolManager, Retry

>>> # Allow 3 retries total for all requests in this pool. These are the same:
>>> http = PoolManager(retries=3)
>>> http = PoolManager(retries=Retry(3))
>>> http = PoolManager(retries=Retry(total=3))
>>> r = http.request('GET', 'http://httpbin.org/redirect/2')
>>> # r.status -> 200

>>> # Disable redirects for this request.
>>> r = http.request('GET', 'http://httpbin.org/redirect/2', retries=Retry(3, redirect=False))
>>> # r.status -> 302

>>> # No total limit, but only do 5 connect retries, for this request.
>>> r = http.request('GET', 'http://httpbin.org/', retries=Retry(connect=5))
See the Retry definition for more details.
At the very core, just like its predecessors, urllib3 is built on top of httplib – the lowest level HTTP library included in the Python standard library.
To aid the limited functionality of the httplib module, urllib3 provides various helper methods which are used with the higher level components but can also be used independently.
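To give a concrete flavour, two of those helpers, parse_url and make_headers from urllib3.util, can be used entirely on their own (the credentials below are illustrative only):

```python
from urllib3.util import make_headers, parse_url

# parse_url splits a URL into its components without any network activity:
url = parse_url('http://httpbin.org:8080/get')
print(url.host, url.port, url.path)   # httpbin.org 8080 /get

# make_headers assembles common request headers, here HTTP basic auth plus
# a User-Agent string:
headers = make_headers(basic_auth='user:password', user_agent='example-agent/1.0')
print(sorted(headers))
```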
Please consider sponsoring urllib3 development, especially if your company benefits from this library.
Project Grant: A grant for contiguous full-time development has the biggest impact for progress. Periods of 3 to 10 days allow a contributor to tackle substantial, complex issues which are otherwise left to linger until somebody can no longer afford not to fix them.
Contact @shazow to arrange a grant for a core contributor.
One-off: Development will continue regardless of funding, but donations help move things along more quickly, as the maintainer can allocate more time off to work on urllib3 specifically.
Recurring: You’re welcome to support the maintainer on Gittip.