urllib3
Low-level HTTP client library for Python. Serves as the foundation for many HTTP clients, including Requests. Provides connection pooling, SSL/TLS, compression, streaming, and fine-grained control over requests. Highly customizable, with over 150 million monthly downloads.
Overview
urllib3 is a low-level HTTP client library for Python. It serves as the foundation for many high-level libraries (such as requests) and provides connection pooling, SSL/TLS verification, and thread safety. In 2025, it continues to play a crucial role wherever fine-grained control over HTTP communication is required.
Details
urllib3 was developed to overcome the limitations of the standard library's urllib modules. It was adopted as the foundation of the requests library and underpins the backbone of the Python HTTP ecosystem. It also continues to track modern web technologies, with experimental WebAssembly support and HTTP/2 support in progress.
Key Features
- Connection Pooling: Efficient connection reuse
- Thread Safety: Safe usage in multi-threaded environments
- SSL/TLS Verification: Comprehensive security features
- Retry Functionality: Automatic connection retry
- Proxy Support: HTTP/HTTPS proxies, plus SOCKS via the optional urllib3[socks] extra (see the sketch after this list)
- Compression Support: Automatic gzip, deflate, and br decoding (br requires the optional Brotli package)
- Custom Headers: Detailed request control
- Upload Support: Multipart form data
- Timeout Control: Detailed timeout configuration
- WebAssembly Support: Experimental execution in browser-based Python environments
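The proxy support listed above is exposed through dedicated manager classes rather than PoolManager options. A minimal sketch (the proxy addresses are placeholders; SOCKS requires the optional urllib3[socks] extra):
import urllib3
# Route requests through an HTTP(S) proxy (placeholder address)
proxy = urllib3.ProxyManager('http://proxy.example.com:8080')
response = proxy.request('GET', 'https://httpbin.org/ip')
# SOCKS proxies need the optional extra: pip install urllib3[socks]
from urllib3.contrib.socks import SOCKSProxyManager
socks = SOCKSProxyManager('socks5h://localhost:1080')  # socks5h resolves DNS via the proxy
response = socks.request('GET', 'https://httpbin.org/ip')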
Pros and Cons
Pros
- High performance and efficient resource utilization
- Proven track record as foundation for many libraries including requests
- Detailed HTTP control and customization capabilities
- Thread-safe for multi-threaded environments
- Rich security features and SSL/TLS support
Cons
- Higher learning curve compared to high-level libraries
- Requires verbose code writing
- No async/await support
- Session and cookie management require manual implementation (built-in JSON helpers arrived only in urllib3 2.x)
- Low-level and complex error handling
Code Examples
Basic Setup
import urllib3
# Create HTTP connection pool
http = urllib3.PoolManager()
# Disable SSL certificate verification (development only)
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
http_insecure = urllib3.PoolManager(cert_reqs='CERT_NONE')
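For one-off calls, urllib3 2.x also ships a module-level convenience that reuses a shared global pool; a minimal sketch assuming urllib3 2.x:
import urllib3
# urllib3 2.x convenience: backed by a shared, module-level PoolManager
response = urllib3.request('GET', 'https://httpbin.org/get')
print(response.status)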
Basic Requests
import urllib3
import json
http = urllib3.PoolManager()
# GET request
response = http.request('GET', 'https://httpbin.org/get')
print(f"Status: {response.status}")
print(f"Data: {response.data.decode('utf-8')}")
# POST request (JSON)
data = {"name": "John Doe", "age": 30}
encoded_data = json.dumps(data).encode('utf-8')
response = http.request(
    'POST',
    'https://httpbin.org/post',
    body=encoded_data,
    headers={'Content-Type': 'application/json'}
)
# Request with headers
headers = {
    'User-Agent': 'MyApp/1.0',
    'Accept': 'application/json',
    'Authorization': 'Bearer your-token-here'
}
response = http.request(
    'GET',
    'https://api.example.com/data',
    headers=headers
)
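Assuming urllib3 2.x, the JSON round-trip above can be shortened with built-in helpers, and the fields parameter covers query-string and multipart encoding; a minimal sketch using httpbin.org as a placeholder endpoint:
import urllib3
http = urllib3.PoolManager()
# urllib3 2.x: serializes the body and sets Content-Type in one step
response = http.request('POST', 'https://httpbin.org/post', json={"name": "John Doe", "age": 30})
print(response.json())  # parse the JSON response body
# fields= encodes query parameters on GET...
response = http.request('GET', 'https://httpbin.org/get', fields={'q': 'urllib3'})
# ...and multipart/form-data on POST (filename, content, MIME type)
response = http.request(
    'POST',
    'https://httpbin.org/post',
    fields={'file': ('report.txt', b'file contents', 'text/plain')}
)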
Session Management (Connection Pooling)
import urllib3
# Customized pool manager
http = urllib3.PoolManager(
    num_pools=50,   # number of pools
    maxsize=100,    # maximum connections per pool
    block=True,     # block when connection limit reached
    timeout=urllib3.Timeout(connect=2.0, read=10.0)
)
# Connection pool for a specific host
host_pool = urllib3.HTTPSConnectionPool(
    'api.example.com',
    port=443,
    maxsize=20,
    timeout=10.0
)
# Request using connection pool
response = host_pool.request('GET', '/api/v1/users')
print(f"Connection pool info: {host_pool.pool}")
Authentication and SSL Configuration
import urllib3
from urllib3.util import make_headers
# Basic authentication
auth_headers = make_headers(basic_auth='username:password')
http = urllib3.PoolManager()
response = http.request(
    'GET',
    'https://httpbin.org/basic-auth/username/password',
    headers=auth_headers
)
# SSL authentication with a client certificate
http_with_cert = urllib3.PoolManager(
    cert_file='/path/to/client.crt',
    key_file='/path/to/client.key',
    ca_certs='/path/to/ca.crt'
)
response = http_with_cert.request('GET', 'https://secure-api.example.com')
# Custom SSL configuration (verification disabled; never use in production)
import ssl
ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
http_custom_ssl = urllib3.PoolManager(
    ssl_context=ssl_context
)
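To verify against a pinned CA bundle instead of disabling checks, the certifi package can supply one; a minimal sketch assuming certifi is installed:
import certifi
import urllib3
# Verify server certificates against certifi's CA bundle
http = urllib3.PoolManager(
    cert_reqs='CERT_REQUIRED',
    ca_certs=certifi.where()
)
response = http.request('GET', 'https://httpbin.org/get')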
Error Handling
import urllib3
from urllib3.exceptions import (
    HTTPError,
    ConnectTimeoutError,
    ReadTimeoutError,
    SSLError,
    MaxRetryError
)
http = urllib3.PoolManager()
try:
    response = http.request(
        'GET',
        'https://example.com/api/data',
        timeout=urllib3.Timeout(connect=5, read=10),
        retries=urllib3.Retry(
            total=3,
            backoff_factor=0.3,
            status_forcelist=[500, 502, 503, 504]
        )
    )
    if response.status == 200:
        data = response.data.decode('utf-8')
        print(f"Success: {data}")
    else:
        print(f"HTTP Error: {response.status}")
except ConnectTimeoutError:
    print("Connection timeout error")
except ReadTimeoutError:
    print("Read timeout error")
except SSLError as e:
    print(f"SSL/TLS error: {e}")
except MaxRetryError as e:
    print(f"Maximum retry attempts reached: {e}")
except HTTPError as e:
    print(f"HTTP error: {e}")
Performance Optimization
import urllib3
from concurrent.futures import ThreadPoolExecutor
# Streaming download
http = urllib3.PoolManager()
response = http.request(
    'GET',
    'https://example.com/large-file.zip',
    preload_content=False  # streaming mode
)
# Read in chunks
with open('downloaded_file.zip', 'wb') as f:
    for chunk in response.stream(1024):  # read 1 KB at a time
        f.write(chunk)
response.release_conn()  # explicitly return the connection to the pool
# Compression support (br decoding requires the optional Brotli package)
response = http.request(
    'GET',
    'https://api.example.com/data',
    headers={'Accept-Encoding': 'gzip, deflate, br'}
)
# Concurrent request processing
def fetch_url(url):
    try:
        response = http.request('GET', url)
        return {'url': url, 'status': response.status, 'length': len(response.data)}
    except Exception as e:
        return {'url': url, 'error': str(e)}
urls = [
    'https://httpbin.org/delay/1',
    'https://httpbin.org/delay/2',
    'https://httpbin.org/delay/3'
]
# Concurrent processing with ThreadPoolExecutor
with ThreadPoolExecutor(max_workers=5) as executor:
    results = list(executor.map(fetch_url, urls))
for result in results:
    print(result)