Imagine this: you’ve written a sophisticated Python script to scrape market data or interact with a critical API. You trigger the script, go to grab a coffee, and come back an hour later only to find it has made zero progress. It’s not crashed — it’s just sitting there. This “hanging script” nightmare is almost always caused by a missing Python requests timeout.
In the world of networking, silence isn’t always golden; sometimes, it means your application is trapped in digital limbo. Understanding and implementing timeouts is the difference between a professional, resilient application and one that fails silently under the slightest network pressure.
What is the Python Requests Library?
The Requests library is the de facto standard for making HTTP requests in Python. Designed to be “HTTP for Humans,” it replaced the older, more complex urllib2 with a clean, intuitive API. Whether you are sending GET requests to fetch webpage content or POST requests to submit form data, the library handles query strings, form data, multipart files, and much more with ease.
However, its greatest strength — simplicity — can also be a trap. Because it is so easy to use, many developers overlook the underlying socket configurations, leading to serious problems when the network or target server doesn’t behave as expected. Its default behavior regarding time limits is dangerous: if you don’t specify a timeout, Requests will wait indefinitely for a response.
What is a Timeout in Python Requests?
A Python requests timeout is a limit on how long your script will wait for a specific network action to complete. If the action takes longer than the specified duration, the library stops waiting and raises an exception, allowing your code to handle the failure gracefully instead of hanging forever.
Connect Timeout vs. Read Timeout
When you set a timeout, you aren’t just setting a single timer for the entire request. Behind the scenes, the process is split into two distinct phases:
- Connect Timeout: The time allowed for your client to establish a TCP connection to the remote server. Think of it as the time you’re willing to wait for the server to pick up the phone.
- Read Timeout: Once the connection is established, this is the maximum time allowed between two consecutive chunks of data received from the server. If the server “picks up the phone” but stays silent for too long between data packets, a read timeout occurs. Note that this is not a cap on the total download time — it measures inactivity between data chunks.
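Because the read timeout only bounds inactivity between chunks, a server that streams slowly but steadily can keep a request alive far longer than you intend. If you need a hard cap on total time, you have to enforce it yourself. Here is a minimal sketch of a manual wall-clock deadline layered on top of any chunk iterator (with Requests, that would be `response.iter_content()` on a request made with `stream=True`):

```python
import time

def read_with_deadline(chunk_iter, max_seconds):
    """Collect chunks from an iterator, enforcing a total wall-clock cap.

    A read timeout only bounds the gap BETWEEN chunks, so a slow-but-steady
    server can stream indefinitely. This helper adds the total-time cap
    that the read timeout does not provide. Illustrative sketch only.
    """
    deadline = time.monotonic() + max_seconds
    body = bytearray()
    for chunk in chunk_iter:
        if time.monotonic() > deadline:
            raise TimeoutError(f"download exceeded {max_seconds}s total")
        body.extend(chunk)
    return bytes(body)
```

In real code you would pass `response.iter_content(chunk_size=8192)` as `chunk_iter`; the generator here just stands in for a streaming response body.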
Why Setting a Reasonable Timeout Value is Critical
Setting a timeout is fundamentally about resource management. Every HTTP request your script makes occupies a thread or a process. If you have 100 requests waiting on a dead server with no timeout, those 100 threads are essentially “zombified”: they aren’t doing useful work, but they’re not free either. At scale, this exhausts memory and connection pools and can bring the whole application down. A reasonable timeout ensures your application stays responsive even when external services are having a bad day.
How to Set Python Requests Timeout
The Requests library provides three primary ways to specify time limits. Depending on whether you need a quick fix or granular control, you can use a single value, a tuple, or a session-level configuration.
Method 1: Using a Single Timeout Value
A single integer or float applies the same time limit to both the connect and read phases. If either phase exceeds this limit, a requests.exceptions.Timeout exception is raised.
```python
import requests

try:
    # The same 5-second limit applies to each phase (connect and read)
    response = requests.get('https://api.example.com/data', timeout=5)
    response.raise_for_status()  # Raise an error for 4xx/5xx responses
    print(f"Status: {response.status_code}")
except requests.exceptions.Timeout:
    print("The request timed out.")
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")
```
Method 2: Setting Connect and Read Timeouts Separately (Recommended)
For production applications, pass a tuple (connect_timeout, read_timeout) to set each phase independently. This is the recommended approach because it lets you diagnose failures more precisely.
```python
import requests

try:
    # 3.05 seconds to connect, 10 seconds between received data chunks
    response = requests.get('https://api.example.com/data', timeout=(3.05, 10))
    print(f"Status: {response.status_code}")
except requests.exceptions.ConnectTimeout:
    print("Failed to connect: server is unreachable or overloaded.")
except requests.exceptions.ReadTimeout:
    print("Connected, but the server took too long to send data.")
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")
```
Pro tip: A common industry convention is to set the connect timeout slightly above a multiple of 3 (e.g., 3.05 seconds). The TCP packet retransmission window is typically 3 seconds, so using 3.05 allows for one full retransmission attempt before giving up.
Method 3: Session-Level Default Timeouts
If you’re making many requests to the same API, repeating timeout=(3.05, 10) on every call is error-prone. The requests.Session object does not natively support a timeout attribute, but you can enforce a default by subclassing it and overriding the request method. Here’s the correct pattern:
```python
import requests

class TimeoutSession(requests.Session):
    """A Session subclass that enforces a default timeout on every request."""

    def __init__(self, timeout=(3.05, 10)):
        super().__init__()
        self.default_timeout = timeout  # Use a distinct attribute name

    def request(self, method, url, **kwargs):
        # Only apply the default if the caller hasn't specified a timeout
        kwargs.setdefault("timeout", self.default_timeout)
        return super().request(method, url, **kwargs)

# All requests in this session default to (3.05, 10) unless overridden
session = TimeoutSession()
session.get('https://api.example.com/users')
session.post('https://api.example.com/items', json={"name": "widget"})

# Override the default for a specific slow endpoint
session.get('https://api.example.com/heavy-report', timeout=(3.05, 60))
```
The kwargs.setdefault() call is the key detail here. It applies the default timeout only when the caller hasn’t already specified one, making it safe to override on a per-request basis.
Recommended Timeout Values by Use Case
| Use Case | Connect Timeout | Read Timeout | Notes |
|---|---|---|---|
| REST API calls | 3–5s | 10–30s | General-purpose starting point |
| Web scraping | 5–10s | 15–30s | Some sites load slowly; leave room for variance |
| Scraping via proxies | 10–15s | 30–60s | Extra hops add latency; see the proxy section below |
| Automation scripts | 2–4s | 10s | Short connect, slightly longer read |
| Large file downloads / heavy reports | 5s | 60–120s | Server may be slow to generate the response body |
These are starting points, not rigid rules. The best practice is to measure your application’s P95 latency (the threshold below which 95% of requests succeed) and set your timeout slightly above that value to absorb normal network variance.
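To make this concrete, here is a small illustrative helper that turns a sample of observed latencies into a suggested timeout. It uses a nearest-rank percentile and an arbitrary 1.5x headroom factor; both the function name and the defaults are assumptions for the sketch, not a standard formula:

```python
import math

def suggest_timeout(latencies, percentile=95, headroom=1.5):
    """Suggest a timeout (seconds) from a sample of observed latencies.

    Takes the given percentile of the sample (nearest-rank method) and
    multiplies it by a headroom factor to absorb normal network variance.
    The 1.5x headroom is an illustrative default, not a hard rule.
    """
    if not latencies:
        raise ValueError("need at least one latency sample")
    ordered = sorted(latencies)
    # Nearest-rank percentile: smallest value >= percentile% of samples
    rank = max(1, math.ceil(percentile / 100 * len(ordered)))
    return ordered[rank - 1] * headroom
```

For example, a service where 95 of 100 sampled requests finish in about 1 second would get a suggested timeout of roughly 1.5 seconds, regardless of a few 10-second outliers in the tail.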
Timeout Exception Handling
Setting a timeout value is only half the job. You must also catch the resulting exceptions to prevent your program from crashing and to implement appropriate retry or fallback logic. The Requests library provides a clear exception hierarchy for this purpose.
| Exception | What It Means | Typical Response |
|---|---|---|
| `requests.exceptions.ConnectTimeout` | Server failed to accept a connection in time | Server may be down; retry with backoff or alert |
| `requests.exceptions.ReadTimeout` | Server connected but stopped sending data | Server may be overloaded; retry or increase read timeout |
| `requests.exceptions.Timeout` | Base class — catches both of the above | Use when you don’t need to distinguish the phase |
| `requests.exceptions.ConnectionError` | Network-level failure (DNS, refused connection) | Check connectivity or DNS resolution |
| `requests.exceptions.RequestException` | Ultimate parent of all Requests exceptions | Catch-all for any request failure |
Handling specific exceptions lets you build differentiated logic. A ConnectTimeout might indicate the server is offline (trigger an alert), while a ReadTimeout might indicate it’s just overloaded (retry after a short delay). Here is a complete, production-ready example:
```python
import requests
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def fetch_data(url: str, timeout: tuple = (3.05, 10)) -> dict | None:
    """
    Fetch JSON data from a URL with proper timeout and error handling.
    Returns parsed JSON on success, None on failure.
    """
    try:
        response = requests.get(url, timeout=timeout)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.ConnectTimeout:
        logger.error("ConnectTimeout: Could not reach %s — server may be down.", url)
    except requests.exceptions.ReadTimeout:
        logger.error("ReadTimeout: %s connected but stopped responding.", url)
    except requests.exceptions.ConnectionError:
        logger.error("ConnectionError: Network issue reaching %s.", url)
    except requests.exceptions.HTTPError as e:
        logger.error("HTTPError %s for %s", e.response.status_code, url)
    except requests.exceptions.RequestException as e:
        logger.error("Unexpected request error: %s", e)
    return None
```
Timeouts When Using Proxy Servers
When routing requests through a proxy server, your traffic travels an additional hop: your machine → proxy server → target server. This extra leg adds latency that must be accounted for in your timeout settings. Failing to do so will result in frequent, spurious timeout errors even when the target server is perfectly healthy.
This challenge is most pronounced when using rotating residential proxies, where each request may be routed through a different residential IP address. The time required to establish a clean connection can vary significantly — from under a second to several seconds — depending on the geographic location and connection quality of each node.
When scraping at scale with a large residential proxy pool like OkeyProxy (which offers 150M+ IPs across 200+ countries), tuning your timeout values is essential to avoid wasting retries on legitimate connections that simply need a bit more time. The following example shows a scraping session correctly configured for proxy use:
```python
import requests

PROXY_CONFIG = {
    "http": "http://user:[email protected]:12321",
    "https": "http://user:[email protected]:12321",
}

# Direct connections: (3.05, 10) is usually fine
# Proxy connections: increase connect timeout to accommodate routing overhead
PROXY_TIMEOUT = (10, 30)

def scrape_with_proxy(url: str) -> str | None:
    try:
        response = requests.get(
            url,
            proxies=PROXY_CONFIG,
            timeout=PROXY_TIMEOUT,
            headers={"User-Agent": "Mozilla/5.0"},
        )
        response.raise_for_status()
        return response.text
    except requests.exceptions.ConnectTimeout:
        # The proxy node failed to connect in time — normal with rotating IPs
        print(f"Proxy connect timeout for {url}. Will retry with a new node.")
    except requests.exceptions.ReadTimeout:
        print(f"Read timeout for {url}. Target server is slow.")
    except requests.exceptions.RequestException as e:
        print(f"Request failed: {e}")
    return None
```
Key guidelines for proxy timeout configuration:
- Set the connect timeout to 10–15 seconds to accommodate the variable latency of rotating IP routing.
- A ConnectTimeout with a proxy often means the node itself was slow or blocked. It is safe to retry automatically, as the next rotation will assign a different IP.
- Use a read timeout of 30–60 seconds if the target site has slow page loads or server-side rendering delays.
- Log the proxy endpoint alongside the timeout type to identify patterns in node performance over time.
Common Timeout Issues and Solutions
1. No Timeout Set (The Most Dangerous Mistake)
The number-one mistake is omitting the timeout parameter entirely. A hanging request blocks the thread indefinitely, and in concurrent systems, this can cascade into a full application freeze. Every single requests.get(), requests.post(), and session.request() call must have a timeout.
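One way to enforce this rule is a quick static check in code review or CI. The sketch below uses the standard library's ast module to flag direct `requests.<verb>(...)` calls that omit the timeout keyword. It is a rough heuristic, not a complete linter: aliased imports, session methods, and timeouts passed via `**kwargs` all slip through.

```python
import ast

def calls_missing_timeout(source: str) -> list[int]:
    """Return line numbers of direct requests.<verb>() calls without
    a timeout= keyword. A rough static check, not a full linter."""
    verbs = {"get", "post", "put", "delete", "head", "options", "patch", "request"}
    missing = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == "requests"
                and node.func.attr in verbs
                and not any(kw.arg == "timeout" for kw in node.keywords)):
            missing.append(node.lineno)
    return missing
```

Run it over your source tree and fail the build if the list is non-empty; catching a missing timeout in CI is far cheaper than debugging a frozen worker in production.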
2. Overly Short Timeouts
A timeout of 0.5 seconds might seem “safe,” but it will produce false positives — legitimate requests that would have succeeded with slightly more time. Start with 3–5 seconds for connections and 10–30 seconds for reads, then tune based on observed P95 latency from your logs.
3. Network Fluctuations and Transient Failures
The internet is not a flat road; routing paths change and temporary congestion is common. A request that times out once will often succeed immediately on a second attempt. This is why retry logic is not optional in production code — it’s a necessity.
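The Requests ecosystem offers urllib3's built-in Retry for this (covered later in this article), but the core idea fits in a few lines. Here is a hedged sketch of retry with exponential backoff and jitter; the helper name and defaults are illustrative, and with Requests you would pass exceptions=(requests.exceptions.Timeout,):

```python
import time
import random

def retry_on_timeout(fn, attempts=3, base_delay=1.0, exceptions=(TimeoutError,)):
    """Call fn(), retrying on timeout-like exceptions with exponential
    backoff plus a little jitter. Illustrative helper; with the requests
    library, pass exceptions=(requests.exceptions.Timeout,).
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except exceptions:
            if attempt == attempts:
                raise  # Out of attempts: surface the final failure
            delay = base_delay * (2 ** (attempt - 1))  # 1s, 2s, 4s, ...
            time.sleep(delay + random.uniform(0, 0.1 * delay))
```

The jitter matters more than it looks: if hundreds of clients all time out at once and retry on identical schedules, they hammer the recovering server in synchronized waves.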
4. DNS Resolution Delays
Sometimes the “hang” occurs before the HTTP request even starts, because DNS resolution is slow. If you’re making repeated requests to the same hosts, consider using a DNS caching mechanism or the requests-cache library to eliminate repeated lookup overhead. Your connect timeout also needs to be large enough to absorb DNS resolution time.
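One blunt but effective trick (an illustration, not an official Requests feature) is to memoize socket.getaddrinfo, which Requests ultimately uses for name resolution. A production-grade cache should also honor DNS TTLs; this sketch deliberately ignores them:

```python
import socket
import functools

# Cache DNS lookups in-process. requests resolves hostnames through
# socket.getaddrinfo, so memoizing it avoids repeated lookups for the
# same host. Blunt sketch: entries never expire, so TTLs are ignored.
_orig_getaddrinfo = socket.getaddrinfo

@functools.lru_cache(maxsize=256)
def _cached_getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
    return _orig_getaddrinfo(host, port, family, type, proto, flags)

socket.getaddrinfo = _cached_getaddrinfo
```

Because the patch is process-wide, apply it once at startup; the first lookup for each host pays the full resolution cost, and subsequent lookups are effectively free.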
5. Long-Running Server-Side Operations
If an API endpoint generates a large report or runs a complex query, the server may take tens of seconds before sending the first byte. In these cases, a short read timeout will produce false failures. Either increase the read timeout for specific endpoints, or switch to an asynchronous/streaming approach.
Best Practices for Python Requests Timeout Management
1. Always Set a Timeout — No Exceptions
Never rely on the default behavior (indefinite wait). In production code, every HTTP call must specify a timeout. Even a permissive global default of 30 seconds is vastly better than none.
2. Use Tuple Values for Better Diagnostics
Using timeout=(3.05, 30) instead of timeout=30 lets you distinguish a server that is offline (ConnectTimeout) from one that is just slow (ReadTimeout). This distinction is invaluable for alerting and debugging.
3. Implement Retry Logic with Exponential Backoff
A timeout doesn’t always mean you should give up. Combine the HTTPAdapter with urllib3’s Retry utility to automatically retry on transient failures, including network errors and specific HTTP status codes.
```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def build_resilient_session(
    retries: int = 3,
    backoff_factor: float = 1.0,
    status_forcelist: tuple = (429, 500, 502, 503, 504),
    timeout: tuple = (3.05, 10),
) -> requests.Session:
    """
    Build a requests.Session with automatic retry and default timeout.
    - backoff_factor=1.0 means: wait 1s, then 2s, then 4s between retries.
    - status_forcelist: HTTP status codes that trigger a retry.
    """
    class _TimeoutSession(requests.Session):
        def request(self, method, url, **kwargs):
            kwargs.setdefault("timeout", timeout)
            return super().request(method, url, **kwargs)

    retry_strategy = Retry(
        total=retries,
        backoff_factor=backoff_factor,
        status_forcelist=list(status_forcelist),
        allowed_methods=["HEAD", "GET", "OPTIONS", "POST"],  # POST is opt-in
        raise_on_status=False,
    )
    adapter = HTTPAdapter(max_retries=retry_strategy)

    session = _TimeoutSession()
    session.mount("https://", adapter)
    session.mount("http://", adapter)
    return session

# Usage
session = build_resilient_session()
response = session.get("https://api.example.com/data")
```
Note on backoff_factor: with backoff_factor=1.0, urllib3 waits backoff_factor * (2 ** (retry_number - 1)) seconds between retries: 1s, 2s, 4s. This exponential backoff reduces the risk of overwhelming a struggling server.
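The schedule in the note can be computed directly. This tiny helper (a hypothetical name, mirroring the formula stated above) lets you sanity-check how long a full retry cycle may take before choosing a retry count:

```python
def backoff_delays(retries: int, backoff_factor: float) -> list[float]:
    """Compute the backoff sleep schedule between retry attempts:
    backoff_factor * (2 ** (retry_number - 1)) for each retry.
    With backoff_factor=1.0 and 3 retries, that is 1s, 2s, 4s.
    """
    return [backoff_factor * (2 ** (n - 1)) for n in range(1, retries + 1)]
```

Summing the list gives the worst-case extra latency retries can add, which is worth knowing before you put a retrying client behind a user-facing request.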
4. Log Everything
When a timeout occurs, log the URL, the timestamp, and the exception type. If you notice a spike in ReadTimeout errors at a particular time each day, you may have uncovered a server maintenance window or a peak traffic pattern — intelligence that helps you tune timeouts proactively.
5. Measure Before You Tune
Don’t guess at timeout values. Instrument your requests to track real latency distributions. Define your P95 and P99 latency (the durations below which 95% and 99% of requests succeed), then set your timeouts just above those thresholds. This data-driven approach avoids both over-aggressive timeouts (false failures) and over-generous ones (wasted wait time).
Async Alternatives: HTTPX and AIOHTTP
The Requests library is synchronous — each request blocks the thread until it completes. If you’re managing hundreds of concurrent requests, this blocking behavior becomes a bottleneck. Two modern alternatives are worth knowing:
HTTPX — Drop-in Upgrade with Async Support
HTTPX provides an API that is intentionally similar to Requests, making migration straightforward. It supports both synchronous and asynchronous modes and offers HTTP/2 support. Timeout configuration mirrors Requests:
```python
import httpx

# Synchronous — almost identical to requests
with httpx.Client(timeout=httpx.Timeout(10.0, connect=3.05)) as client:
    response = client.get("https://api.example.com/data")

# Asynchronous — run multiple requests concurrently
import asyncio

async def fetch_all(urls: list[str]) -> list:
    async with httpx.AsyncClient(timeout=httpx.Timeout(10.0, connect=3.05)) as client:
        tasks = [client.get(url) for url in urls]
        return await asyncio.gather(*tasks, return_exceptions=True)
```
AIOHTTP — Maximum Async Performance
AIOHTTP is async-only and purpose-built for high-concurrency workloads. It is significantly faster than both Requests and httpx when handling large volumes of simultaneous requests. Use it when you need to process thousands of requests concurrently:
```python
import asyncio
import aiohttp

async def fetch_all(urls: list[str]) -> list:
    timeout = aiohttp.ClientTimeout(total=30, connect=3.05)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        tasks = [session.get(url) for url in urls]
        responses = await asyncio.gather(*tasks, return_exceptions=True)
        return responses

asyncio.run(fetch_all(["https://api.example.com/1", "https://api.example.com/2"]))
```
As a rule of thumb: use Requests for simple scripts and low-volume tasks; use httpx when you want async support without abandoning a familiar API; use aiohttp for high-throughput scrapers or services handling thousands of concurrent connections.
Frequently Asked Questions
What is the default timeout for Python requests?
By default, the Requests library has no timeout — it will wait indefinitely for a server response. This means a single unresponsive server can hang your entire script forever. Always set an explicit timeout in production code.
Does the read timeout apply to the total download time?
No. The read timeout measures the maximum time of inactivity between received data chunks, not the total time to download a response. A large file can take minutes to download as long as data keeps arriving within the read timeout window.
How do I set a timeout for all requests in a session?
The requests.Session object does not have a built-in timeout attribute. The correct approach is to subclass Session and override the request method, using kwargs.setdefault("timeout", your_default) to inject the default only when the caller hasn’t specified one.
What is the difference between ConnectTimeout and ReadTimeout?
ConnectTimeout is raised when the client cannot establish a TCP connection to the server within the allowed time — typically indicating the server is down or unreachable. ReadTimeout is raised when the connection is established but the server stops sending data for longer than the read timeout — typically indicating the server is overloaded or the requested operation is taking too long.
Should I increase my timeout when using proxies?
Yes. Using a proxy adds at least one additional network hop, and rotating residential proxies can vary significantly in latency. A connect timeout of 10–15 seconds (versus 3–5 seconds for direct connections) is a safe starting point. If you see frequent spurious ConnectTimeout errors, the proxy routing overhead is likely the cause.
How do I handle timeouts in asynchronous Python code?
For async code, use either httpx.AsyncClient with httpx.Timeout, or aiohttp.ClientSession with aiohttp.ClientTimeout. Both support separate connect and total timeout configuration. You can also wrap any coroutine with asyncio.wait_for(coro, timeout=N) for a hard wall-clock cap.
What is a good timeout value for web scraping?
A connect timeout of 5–10 seconds and a read timeout of 15–30 seconds is a reasonable starting range for direct web scraping. When using proxies, increase the connect timeout to 10–15 seconds. For data-driven tuning, measure your P95 request latency and set your timeout slightly above that threshold.
Conclusion
The Python requests timeout parameter is small in syntax but carries a massive responsibility. It protects your application’s threads and memory, prevents scripts from hanging indefinitely, and provides the diagnostic information you need to build truly resilient systems.
The key takeaways are: always use tuple-style timeouts to separate connect and read phases; catch specific exceptions to implement differentiated retry logic; use session-level defaults to enforce consistency across a codebase; and when using proxies or scraping at scale, tune your timeouts to account for routing overhead. Networking is inherently unpredictable — but with these practices in place, your Python applications don’t have to be.


