>>> import urllib3
>>> http = urllib3.PoolManager()
>>> r = http.request('GET', 'http://google.com/')
>>> r.status
200
>>> r.headers['server']
'gws'
>>> r.data
...
urllib3 tries to strike a fine balance between power, extendability, and sanity. To achieve this, the codebase is a collection of small reusable utilities and abstractions composed together in a few helpful layers.
The highest level is the PoolManager(...).
The PoolManager will take care of reusing connections for you whenever you request the same host. This should cover most scenarios without significant loss of efficiency, but you can always drop down to a lower level component for more granular control.
>>> http = urllib3.PoolManager(10)
>>> r1 = http.request('GET', 'http://google.com/')
>>> r2 = http.request('GET', 'http://google.com/mail')
>>> r3 = http.request('GET', 'http://yahoo.com/')
>>> len(http.pools)
2
A PoolManager is a proxy for a collection of ConnectionPool objects. They both inherit from RequestMethods to make sure that their API is similar, so that instances of either can be passed around interchangeably.
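Because both expose the same RequestMethods API, code can be written against that shared surface without caring which kind of object it receives. A minimal sketch of checking this (the example.com host is arbitrary; constructing these objects opens no connections):

```python
from urllib3 import HTTPConnectionPool, PoolManager

# Neither constructor opens a socket; connections are created lazily.
manager = PoolManager()
pool = HTTPConnectionPool('example.com')

# Both classes descend from the same RequestMethods base, which is what
# gives them their common request() interface.
shared = set(type(manager).__mro__) & set(type(pool).__mro__)
print(sorted(cls.__name__ for cls in shared))
```

Any helper that only calls `.request(...)` can therefore accept a PoolManager, an HTTPConnectionPool, or an HTTPSConnectionPool interchangeably.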
The ProxyManager is an HTTP proxy-aware subclass of PoolManager. It produces a single HTTPConnectionPool instance for all HTTP connections and individual per-server:port HTTPSConnectionPool instances for tunnelled HTTPS connections:
>>> proxy = urllib3.ProxyManager('http://localhost:3128/')
>>> r1 = proxy.request('GET', 'http://google.com/')
>>> r2 = proxy.request('GET', 'http://httpbin.org/')
>>> len(proxy.pools)
1
>>> r3 = proxy.request('GET', 'https://httpbin.org/')
>>> r4 = proxy.request('GET', 'https://twitter.com/')
>>> len(proxy.pools)
3
The next layer is the ConnectionPool(...).
The HTTPConnectionPool and HTTPSConnectionPool classes allow you to define a pool of connections to a single host and make requests against this pool with automatic connection reusing and thread safety.
>>> conn = urllib3.connection_from_url('http://www.google.com')
>>> r1 = conn.request('GET', 'http://www.google.com/')
>>> r2 = conn.request('GET', '/search')
>>> r3 = conn.request('GET', 'http://www.yahoo.com/')
Traceback (most recent call last):
  ...
HostChangedError: Connection pool with host 'http://www.google.com' tried to
open a foreign host: http://www.yahoo.com/
Again, a ConnectionPool is a pool of connections to a specific host. Trying to access a different host through the same pool will raise a HostChangedError exception unless you specify assert_same_host=False. Do this at your own risk as the outcome is completely dependent on the behaviour of the host server.
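The same-host check runs before any socket is opened, so it can be exercised without network access. A small sketch (the hosts are illustrative):

```python
import urllib3
from urllib3.exceptions import HostChangedError

pool = urllib3.connection_from_url('http://www.google.com')

# The pool validates the target host before opening any connection,
# so this raises without generating any network traffic.
try:
    pool.request('GET', 'http://www.yahoo.com/')
except HostChangedError as exc:
    print('refused:', exc.url)

# pool.request('GET', 'http://www.yahoo.com/', assert_same_host=False)
# would skip the check and actually send the request over this pool.
```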
A timeout can be set to abort socket operations on individual connections after the specified duration. The timeout can be defined as a float or an instance of Timeout which gives more granular configuration over how much time is allowed for different stages of the request. This can be set for the entire pool or per-request.
>>> from urllib3 import PoolManager, Timeout
>>> # Manager with 3 seconds combined timeout.
>>> http = PoolManager(timeout=3.0)
>>> r = http.request('GET', 'http://httpbin.org/delay/1')
>>> # Manager with 2 second timeout for the read phase, no limit for the rest.
>>> http = PoolManager(timeout=Timeout(read=2.0))
>>> r = http.request('GET', 'http://httpbin.org/delay/1')
>>> # Manager with no timeout but a request with a timeout of 1 second for
>>> # the connect phase and 2 seconds for the read phase.
>>> http = PoolManager()
>>> r = http.request('GET', 'http://httpbin.org/delay/1',
...                  timeout=Timeout(connect=1.0, read=2.0))
>>> # Same Manager but request with a 5 second total timeout.
>>> r = http.request('GET', 'http://httpbin.org/delay/1',
...                  timeout=Timeout(total=5.0))
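A Timeout object can also be inspected on its own, without making any requests; its connect_timeout and read_timeout properties expose the per-phase budgets the pool will apply. A brief sketch:

```python
from urllib3 import Timeout

# Distinct budgets for each phase of the request.
t = Timeout(connect=1.0, read=2.0)
print(t.connect_timeout)  # 1.0
print(t.read_timeout)     # 2.0

# A single total budget caps the combined phases instead.
t = Timeout(total=5.0)
print(t.total)  # 5.0
```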
At the very core, just like its predecessors, urllib3 is built on top of httplib – the lowest level HTTP library included in the Python standard library.
To supplement the limited functionality of the httplib module, urllib3 provides various helper methods that are used by the higher-level components but can also be used independently.
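For instance, the urllib3.util module exposes make_headers, which builds common header dictionaries, and parse_url, which splits a URL into its components; a brief sketch:

```python
from urllib3.util import make_headers, parse_url

# Build a headers dict for keep-alive plus HTTP basic auth.
headers = make_headers(keep_alive=True, basic_auth='user:passwd')
print(headers)

# Split a URL into scheme, host, port, path, and so on.
url = parse_url('http://google.com:80/mail/')
print(url.scheme, url.host, url.port, url.path)
```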