How to Use Python Requests Retry: Step-by-Step Guide


Imagine this: You’ve spent hours writing a sophisticated Python script to scrape data or interact with a critical API. You start the process, and everything looks perfect. Then, halfway through, a transient network hiccup or a “503 Service Unavailable” error causes your script to crash — leaving your data incomplete and your work wasted. This is one of the most common frustrations for Python developers, but it is entirely preventable.


In the modern web environment of 2026, network reliability is a myth. Servers experience micro-outages, proxies rotate, and rate-limiting systems like Cloudflare grow increasingly aggressive. To build resilient applications, you must master Python requests retry logic. This guide takes you from diagnosing the root causes of request failures all the way to implementing advanced, production-grade retry strategies — so your scripts never fail prematurely again.

Diagnostics: Identifying Failure Types and Responses

Before you can fix a failing request, you must understand why it failed. Not every error warrants a retry. In fact, blindly retrying the wrong type of error can lead to permanent IP bans, duplicate data writes, or wasted compute time. There are two main categories of failures to understand.

Network-Layer Exceptions (Before the Server Responds)

These errors occur before your request even reaches the target server’s application logic. In the requests library, they surface as Python exceptions that you can catch directly.

  • ConnectTimeout: Your script couldn’t establish a TCP connection within the specified time limit. Usually caused by a slow server, a bad proxy, or an unreachable host.
  • ReadTimeout: The server accepted the connection but didn’t send any data back within the timeout window. The server may be overloaded or processing a large request.
  • ConnectionError: A low-level network problem such as a DNS resolution failure or a “Connection Refused” message from the OS.
  • ChunkedEncodingError: The server started sending a response but dropped the connection mid-stream before finishing.

All of these are generally safe to retry, because in most cases the server never processed your original request.
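All four exception classes live in requests.exceptions and can be caught individually. A minimal sketch of a single attempt that distinguishes them (the function name and messages are illustrative; note that ConnectTimeout must be caught before ConnectionError, since it is a subclass):

```python
import requests

def fetch_once(url):
    """One attempt at `url`, mapping each network-layer failure to a message.
    Returns the response on success, or None if a retriable error occurred."""
    try:
        return requests.get(url, timeout=(5, 10))
    except requests.exceptions.ConnectTimeout:
        # Subclass of ConnectionError, so it must come first
        print("Could not open a TCP connection in time -- safe to retry.")
    except requests.exceptions.ReadTimeout:
        print("Connected, but no data arrived in time -- usually safe to retry.")
    except requests.exceptions.ChunkedEncodingError:
        print("Response was cut off mid-stream -- safe to retry.")
    except requests.exceptions.ConnectionError:
        print("DNS failure or connection refused -- safe to retry.")
    return None
```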

Server-Side HTTP Status Codes (After the Server Responds)

Sometimes the connection is fine, but the server returns a status code indicating a problem. The critical skill here is knowing which codes are temporary (and safe to retry) versus permanent (where retrying is pointless or harmful).

Status Code | Description | Should You Retry? | Best Action
429 Too Many Requests (Rate Limited) | Yes | Retry with backoff; respect the Retry-After header
500 Internal Server Error | Yes (often transient) | Retry with backoff; the server may have crashed momentarily
502 / 503 / 504 Gateway / Service Unavailable / Timeout | Yes | Retry with backoff; typical during server restarts or traffic spikes
400 Bad Request | No | Fix the request payload or URL parameters — your code has a bug
401 Unauthorized | No | Refresh your API token or credentials first
403 Forbidden (WAF / IP Ban) | No (not without an IP change) | Switch proxy IP; retrying from the same IP is useless
404 Not Found | No | The URL is wrong — retrying will never help

Understanding these distinctions is the most important foundation. Once you’ve identified a retriable error, you need a systematic plan for handling it safely.

Implementation Strategies: How to Plan Your Retries

A “brute force” retry — hammering the server immediately and repeatedly — is not a solution. It can trigger security firewalls, worsen a server’s load, or get your IP permanently banned. A professional Python requests retry strategy is built on three core principles.

1. The Principle of Idempotency

Before implementing a retry, ask yourself: Is this request idempotent? An idempotent request is one that produces the same result no matter how many times you repeat it. GET, HEAD, PUT, DELETE, and OPTIONS are generally idempotent. POST requests (such as submitting a payment form, creating a database record, or sending an email) are NOT. Retrying a failed POST could result in duplicate charges, duplicate records, or other unintended side effects.
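A simple guard based on this principle might look like the following sketch (the helper name and the method set are our own convention, not part of requests):

```python
# Methods that can safely be repeated without side effects
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS"}

def is_safe_to_retry(method: str) -> bool:
    """Return True only for HTTP methods that are idempotent by convention.
    POST is deliberately excluded: repeating it may duplicate side effects."""
    return method.upper() in IDEMPOTENT_METHODS
```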

2. Exponential Backoff

Instead of retrying every second, you should increase the waiting time between each attempt. The formula used by urllib3 is:

wait = backoff_factor * (2 ** (retry_number - 1))

For example, with backoff_factor=1, the wait times between retries would be: 0s (urllib3 skips the delay before the first retry), then 2s, 4s, 8s, and 16s. This gives the server progressively more “breathing room” to recover from a spike or a restart before you try again.
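The schedule is easy to reproduce in plain Python. This sketch mirrors urllib3's rule that the very first retry waits zero seconds:

```python
def backoff_wait(backoff_factor: float, retry_number: int) -> float:
    """Seconds to wait before the Nth retry (1-based), mirroring urllib3:
    the first retry fires immediately, later ones back off exponentially."""
    if retry_number <= 1:
        return 0.0
    return float(backoff_factor * (2 ** (retry_number - 1)))

print([backoff_wait(1.0, n) for n in range(1, 6)])  # → [0.0, 2.0, 4.0, 8.0, 16.0]
```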

3. Jitter (Randomization)

Suppose you have 100 worker processes all hitting a 503 error at the same moment. Without jitter, every one of them will retry at the exact same 1s, 2s, and 4s marks — causing a synchronized flood of traffic known as the “Thundering Herd” problem. Adding a small random value (jitter) to each wait time staggers the retries across time, which is much gentler on the destination server and reduces your chances of being blocked. Modern versions of urllib3 support this natively via the backoff_jitter parameter (see section below).
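Jitter can be layered onto the same backoff formula in a couple of lines. This sketch is illustrative of the idea rather than urllib3's exact implementation:

```python
import random

def backoff_with_jitter(backoff_factor: float, retry_number: int, jitter: float = 0.5) -> float:
    """Exponential backoff plus up to `jitter` seconds of random noise,
    so that concurrent workers don't all retry at the same instant."""
    base = 0.0 if retry_number <= 1 else backoff_factor * (2 ** (retry_number - 1))
    return base + random.random() * jitter
```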

The Standard Way: Using HTTPAdapter and urllib3 Retry

The most efficient way to implement Python requests retry logic without installing any extra libraries is to use the Retry object from urllib3 — the HTTP engine that requests uses internally. You attach a retry policy directly to a requests.Session object, and it applies automatically to every request that session makes.

Step 1: Install the requests library (if you haven’t already)

pip install requests

Step 2: Build a session with a retry policy

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def get_session_with_retries():
    """
    Creates a requests.Session with automatic retry logic built in.
    Any request made through this session will automatically retry
    on network errors or specific HTTP error codes.
    """
    session = requests.Session()

    # --- Define the retry strategy ---
    retry_strategy = Retry(
        total=5,                  # Maximum total retry attempts
        status_forcelist=[        # HTTP status codes that should trigger a retry
            429,                  # Too Many Requests (rate limited)
            500,                  # Internal Server Error
            502,                  # Bad Gateway
            503,                  # Service Unavailable
            504,                  # Gateway Timeout
        ],
        allowed_methods=["HEAD", "GET", "OPTIONS"],  # Only retry safe, idempotent methods
        backoff_factor=1,         # Wait: 0s, then 2s, 4s, 8s, 16s between retries
        backoff_jitter=0.5,       # Add up to 0.5s of random jitter to each wait (prevents Thundering Herd)
        respect_retry_after_header=True,  # If server sends a Retry-After header, obey it
        raise_on_status=False,    # Return the last response instead of raising an exception
    )

    # --- Attach the retry strategy to the session ---
    # "mount" means: use this adapter for all URLs starting with these prefixes
    adapter = HTTPAdapter(max_retries=retry_strategy)
    session.mount("https://", adapter)
    session.mount("http://", adapter)

    return session


# --- How to use it ---
session = get_session_with_retries()

try:
    # Always set a timeout: (connect_timeout_seconds, read_timeout_seconds)
    # Without this, your script could hang forever waiting for a server that never responds.
    response = session.get("https://httpbin.org/status/503", timeout=(5, 10))

    print(f"Final Status Code: {response.status_code}")

except requests.exceptions.ConnectionError as e:
    print(f"Network connection failed: {e}")
except requests.exceptions.Timeout:
    print("The request timed out after all retry attempts.")
except Exception as e:
    print(f"An unexpected error occurred: {e}")

Understanding the Key Parameters

Here is a plain-English breakdown of what each parameter in the Retry object does:

  • total=5 — The maximum number of retry attempts. After 5 failed tries, the request gives up. Setting this to None would allow infinite retries (never recommended).
  • status_forcelist — The list of HTTP status codes that should trigger a retry. Without this, urllib3 will not retry on status codes by default — only on network-level exceptions.
  • allowed_methods — Only these HTTP methods will be retried. POST is deliberately excluded to prevent duplicate operations.
  • backoff_factor=1 — Controls how long to wait between retries. With a factor of 1, the wait times are 0s → 2s → 4s → 8s → 16s (the first retry is immediate; each later wait doubles).
  • backoff_jitter=0.5 — Adds a random delay between 0 and 0.5 seconds on top of each backoff wait. This is a built-in way to solve the Thundering Herd problem (available in urllib3 ≥ 2.x).
  • respect_retry_after_header=True — If the server responds with a Retry-After: 60 header (saying “come back in 60 seconds”), the library will automatically sleep that long before retrying. See section below for more detail.
  • raise_on_status=False — After all retries are exhausted, return the last HTTP response object rather than raising a MaxRetryError exception. This gives you more control in your code.

Beginner Tip: What Does “mounting an adapter” mean?

When you call session.mount("https://", adapter), you are telling the session: “For any URL that starts with https://, use this adapter (which has our retry logic).” Mounting for both https:// and http:// ensures all requests are covered, regardless of protocol.
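Because requests picks the adapter with the longest matching prefix, you can also give one troublesome host a more aggressive retry policy than everything else. A short sketch (the host name is a placeholder):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()

# Default policy: a modest 3 retries for all HTTPS traffic
session.mount("https://", HTTPAdapter(max_retries=Retry(total=3, backoff_factor=1)))

# Stricter policy for one known-flaky host; the longest matching prefix wins
flaky_adapter = HTTPAdapter(max_retries=Retry(total=8, backoff_factor=2))
session.mount("https://flaky.example.com", flaky_adapter)

# requests resolves the adapter per URL:
assert session.get_adapter("https://flaky.example.com/v1/data") is flaky_adapter
```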

Respecting the Retry-After Response Header

The Retry-After HTTP header is a signal from the server telling you exactly how long to wait before making another request. When a server returns a 429 Too Many Requests or 503 Service Unavailable response, it often includes this header. Ignoring it is one of the most common mistakes developers make — it almost guarantees your IP will be blocked sooner.

The good news: when you set respect_retry_after_header=True in your Retry configuration (which is the default), urllib3 handles this automatically. If the server sends Retry-After: 30, the library sleeps for 30 seconds before the next attempt — no extra code required on your part.

However, if you are writing fully custom retry logic, here is how to handle the header manually:

import requests
import time

def fetch_with_retry_after(url, max_retries=3):
    """
    Manually respects the Retry-After header from a server response.
    Useful when writing custom retry logic beyond what HTTPAdapter provides.
    """
    for attempt in range(1, max_retries + 1):
        response = requests.get(url, timeout=(5, 10))

        if response.status_code == 429:
            # Check if the server told us how long to wait
            retry_after = response.headers.get("Retry-After")

            if retry_after:
                wait_seconds = int(retry_after)  # assumes delta-seconds; Retry-After can also be an HTTP-date
                print(f"Rate limited. Server says wait {wait_seconds}s. (Attempt {attempt}/{max_retries})")
            else:
                # No header — use exponential backoff as a fallback
                wait_seconds = 2 ** attempt
                print(f"Rate limited. No Retry-After header. Waiting {wait_seconds}s. (Attempt {attempt}/{max_retries})")

            time.sleep(wait_seconds)
            continue  # Go to the next loop iteration (retry the request)

        # If we get here, the status code was not 429
        response.raise_for_status()  # Raise an exception for other 4xx/5xx errors
        return response              # Return the successful response

    raise Exception(f"Request to {url} failed after {max_retries} attempts.")
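One subtlety: the HTTP standard allows Retry-After to carry an HTTP-date instead of a number of seconds, which a bare int() will not handle. A standard-library helper covering both forms might look like this (the default fallback value is an arbitrary choice):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def parse_retry_after(value: str, default: float = 5.0) -> float:
    """Return seconds to wait from a Retry-After header value.
    Handles both delta-seconds ("120") and HTTP-dates
    ("Wed, 21 Oct 2026 07:28:00 GMT"); falls back to `default`."""
    try:
        return max(0.0, float(value))       # delta-seconds form
    except ValueError:
        pass
    try:
        when = parsedate_to_datetime(value)  # HTTP-date form
        return max(0.0, (when - datetime.now(timezone.utc)).total_seconds())
    except (TypeError, ValueError):
        return default
```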

Advanced Control: Using the Tenacity Library

The urllib3 + HTTPAdapter approach is excellent for standard HTTP errors. However, it struggles with complex business logic — for example: “retry only if the JSON response body contains "error": "temporary_failure"”, or “retry if the response time exceeds 2 seconds”. For these cases, the Tenacity library is the gold standard.

Why Choose Tenacity?

Tenacity uses Python decorators, keeping your code clean and readable. It supports highly specific retry conditions, fine-grained wait strategies, callbacks for logging, and full asyncio support for asynchronous applications.

Step 1: Install Tenacity

pip install tenacity

Step 2: Basic usage with a decorator

import requests
from tenacity import (
    retry,
    stop_after_attempt,
    wait_exponential,
    retry_if_exception_type,
    before_sleep_log,
)
import logging

# Set up basic logging so we can see retry events in the console
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(
    stop=stop_after_attempt(3),                         # Give up after 3 total attempts
    wait=wait_exponential(multiplier=1, min=2, max=10), # Wait 2s, 4s, 8s... up to 10s max
    retry=retry_if_exception_type(requests.exceptions.RequestException),  # Retry on any network error
    before_sleep=before_sleep_log(logger, logging.WARNING),  # Log a warning before each sleep
)
def fetch_data(url):
    """
    Fetches data from a URL. If a network error occurs, Tenacity
    will automatically retry up to 3 times with exponential backoff.
    """
    print(f"Attempting request to {url}...")
    response = requests.get(url, timeout=(5, 10))
    response.raise_for_status()  # This raises an exception for 4xx/5xx, which Tenacity will catch
    return response.json()


# --- Usage ---
try:
    data = fetch_data("https://api.example.com/data")
    print("Success:", data)
except Exception:
    print("All retry attempts failed. Moving on.")

Advanced: Retry on a specific JSON error message

import requests
from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_result

def is_temporary_error(response):
    """
    Returns True if the response contains a temporary error flag,
    which tells Tenacity to retry. Returns False otherwise.
    """
    try:
        body = response.json()
        return body.get("error") == "temporary_failure"
    except Exception:
        return False  # If we can't parse JSON, don't retry on this basis

@retry(
    stop=stop_after_attempt(4),
    wait=wait_exponential(multiplier=1, min=1, max=8),
    retry=retry_if_result(is_temporary_error),  # Retry based on RESPONSE CONTENT, not just exceptions
)
def call_api(url):
    response = requests.get(url, timeout=(5, 10))
    return response

This level of flexibility is simply not possible with HTTPAdapter alone — Tenacity shines when your retry condition depends on the content of the response, not just the HTTP status code.

Debugging: How to Log Retry Attempts

When a script is silently retrying in the background, it can be very hard to understand what is happening. Adding logging to your retry logic is an essential practice, especially during development and when monitoring production scrapers.

Option A: Enable urllib3’s Built-in Debug Logging

The requests library uses Python’s standard logging module internally. You can activate verbose output with just two lines:

import logging
import requests

# This enables DEBUG-level logging for the requests/urllib3 stack.
# You will see every retry attempt, wait time, and HTTP header in your console.
logging.basicConfig(level=logging.DEBUG)

session = requests.Session()
response = session.get("https://httpbin.org/status/500", timeout=5)

Option B: Add a Custom Retry Event Hook

If DEBUG output is too verbose, you can attach a response hook to your session to log only what you care about:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def log_retry_event(response, *args, **kwargs):
    """
    This hook runs on each response that requests surfaces to your code.
    Note: retries performed inside urllib3 are not individually visible
    here; only the final response of each call triggers the hook. Use
    Option A's DEBUG logging if you need to see every attempt.
    """
    if response.status_code >= 400:
        print(f"[WARNING] Got status {response.status_code} for URL: {response.url}")

session = requests.Session()

# Attach our logging hook to the session
session.hooks["response"].append(log_retry_event)

retry_strategy = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503, 504],
                       raise_on_status=False)  # return the final response instead of raising RetryError
adapter = HTTPAdapter(max_retries=retry_strategy)
session.mount("https://", adapter)
session.mount("http://", adapter)

response = session.get("https://httpbin.org/status/503", timeout=5)
print(f"Final status: {response.status_code}")

Real-World Pitfalls: Why Retries Sometimes Fail

Even the best Python requests retry logic has its limits. Modern websites use sophisticated Web Application Firewalls (WAFs) that can distinguish between accidental network errors and deliberate automated traffic. Here are the most common reasons a retry strategy stops working:

1. The 403 Forbidden Loop

If you receive a 403 Forbidden response, it typically means your IP address has been flagged or blacklisted. Retrying with the same IP is completely futile — the server has already decided to refuse you. Similarly, if a 429 Too Many Requests error persists even after long backoff periods, your IP is likely “rate-limited at the edge,” meaning the CDN or load balancer is blocking you before requests even reach the application server.

2. Forgetting to Set a Timeout

This is the single most common beginner mistake. If you don’t include timeout= in your requests.get() call, your script can hang indefinitely waiting for a server that never responds — and your retry logic will never trigger. Always use a two-value tuple: timeout=(connect_seconds, read_seconds). A sensible default is timeout=(5, 30).

# BAD — can hang forever:
response = requests.get("https://example.com")

# GOOD — will raise a Timeout exception after 5s connect / 30s read:
response = requests.get("https://example.com", timeout=(5, 30))

3. Over-Retrying

Retrying more than 5 times is rarely productive and can result in your script being flagged as a DoS (Denial of Service) source. If a request hasn’t succeeded after 5 attempts, something is fundamentally wrong — either with the target server, your credentials, or your IP reputation. Investigate the root cause instead of adding more retries.

4. Retrying POST Requests That Aren’t Idempotent

As discussed in the idempotency section above, blindly retrying a POST request can create duplicate database records, duplicate payments, or duplicate emails. If you must retry a POST, ensure the server implements idempotency keys (a unique token per request that the server uses to deduplicate).
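As an illustration, the idempotency-key pattern can be sketched as follows. Note that the Idempotency-Key header name is a common API convention (popularized by payment APIs), not something every server supports; this is only safe if the server actually deduplicates on it.

```python
import uuid
import requests

def post_with_idempotency_key(url, payload, session=None, max_attempts=3):
    """Retry a POST by sending the SAME Idempotency-Key on every attempt,
    so a server that supports the convention can deduplicate.
    'Idempotency-Key' is an assumed API convention, not universal."""
    session = session or requests.Session()
    key = str(uuid.uuid4())  # one key for ALL attempts of this logical request
    for attempt in range(1, max_attempts + 1):
        try:
            resp = session.post(url, json=payload,
                                headers={"Idempotency-Key": key},
                                timeout=(5, 30))
            if resp.status_code < 500:
                return resp  # success or a non-retriable client error
        except requests.exceptions.RequestException:
            pass  # network-layer failure: retry with the same key
    raise RuntimeError(f"POST to {url} failed after {max_attempts} attempts")
```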

5. Ignoring the Retry-After Header

Many APIs and CDNs will tell you exactly how long to wait via the Retry-After response header. Ignoring this and retrying sooner will almost always result in another 429 and can lead to escalating bans. Always respect this header.

The Joint Solution: Retry Logic + Proxy Rotation

When you hit the limits of what code alone can solve — especially persistent 403 or 429 errors — you need to change your network identity. This is where combining your Python requests retry strategy with a high-quality proxy service like OkeyProxy becomes a game-changer.

Why Proxy Rotation Solves What Backoff Cannot

Backoff logic tells your script to wait longer. Proxy rotation tells your script to look like a different user. By integrating a residential proxy service such as OkeyProxy — which provides access to a pool of over 150 million real residential IP addresses — you can catch a 429 or 403 error and immediately retry using a fresh, clean IP address rather than waiting an arbitrary amount of time.

Code Example: Switching Proxies on a 429 Error

import requests
import random

# A list of proxy addresses from your proxy provider (replace with real credentials)
PROXY_POOL = [
    "http://user:[email protected]:12321",
    "http://user:[email protected]:12321",
    "http://user:[email protected]:12321",
]

def fetch_with_proxy_rotation(url, max_attempts=5):
    """
    Fetches a URL, and if a 429 or 403 error is encountered,
    automatically switches to a different proxy IP and retries.
    """
    for attempt in range(1, max_attempts + 1):
        # Pick a random proxy for this attempt
        proxy = random.choice(PROXY_POOL)
        proxies = {"http": proxy, "https": proxy}

        try:
            response = requests.get(url, proxies=proxies, timeout=(5, 30))

            if response.status_code in (429, 403):
                print(f"Attempt {attempt}: Got {response.status_code}. Rotating proxy...")
                continue  # Try again with a new proxy on the next loop iteration

            response.raise_for_status()
            return response  # Success!

        except requests.exceptions.RequestException as e:
            print(f"Attempt {attempt}: Network error — {e}. Retrying...")

    raise Exception(f"All {max_attempts} attempts failed for {url}")


# --- Usage ---
result = fetch_with_proxy_rotation("https://api.example.com/data")
print(result.json())

How OkeyProxy Enhances Your Retry Strategy

  • Bypass Geo-blocks: Combine retries with country-level IP targeting to access regionally restricted content.
  • Reduce 429 Frequency: Residential IPs carry high trust scores and are far less likely to trigger rate-limiting in the first place — reducing the need for retries overall.
  • High Concurrency: While retrying a failed request on one thread, OkeyProxy supports thousands of simultaneous successful connections on others.

Pro tip: In your except block, if you detect a persistent 403 or 429 that doesn’t resolve after 2–3 backoff attempts, don’t just sleep — trigger a function that pulls a fresh IP from your proxy pool. This is the core technique used by professional-grade web scrapers.

Summary and Best Practices Checklist

Building a robust HTTP request system is an art. Use this checklist to verify your Python scripts are production-ready in 2026:

  1. Always use requests.Session(): Sessions reuse underlying TCP connections (connection pooling), making your requests faster and more efficient — especially when making many requests to the same host.
  2. Set explicit timeouts on every request: Use timeout=(connect_seconds, read_seconds) on every .get() or .post() call. Without this, your script can hang indefinitely.
  3. Implement exponential backoff: Use backoff_factor >= 1 to give servers time to recover. Never retry immediately in a tight loop.
  4. Add jitter to your backoff: Use backoff_jitter (urllib3 ≥ 2.x) or wait_random (Tenacity) to prevent the Thundering Herd problem in concurrent workloads.
  5. Respect the Retry-After header: Set respect_retry_after_header=True (or handle it manually) to obey server-provided wait times. This dramatically reduces the chance of escalating bans.
  6. Only retry the right status codes: Retry on 429, 500, 502, 503, 504. Never retry on 400, 401, 403, 404 — those indicate problems your code must fix, not transient server issues.
  7. Protect non-idempotent methods: Do not add POST requests to allowed_methods unless you have server-side idempotency protection.
  8. Cap your retries at 3–5 attempts: More than that is rarely productive and risks triggering rate limits or bans.
  9. Log your retry events: Add logging so you know when retries are happening. Silent retries in production make debugging very difficult.
  10. Use proxies for hard IP blocks: When backoff alone cannot resolve a 403 or persistent 429, switch your IP via a service like OkeyProxy.

FAQ: Frequently Asked Questions

Q: Why doesn’t the requests library have a built-in retry parameter?

The philosophy of requests is to be simple and human-friendly — “HTTP for Humans.” Advanced retry logic varies enormously between projects (different status codes, different wait strategies, different idempotency rules), so the library deliberately leaves it to the lower-level urllib3 layer or specialized libraries like Tenacity.

Q: How many retries are considered “polite”?

For most public APIs and websites, 3 to 5 retries with exponential backoff is the accepted standard. Anything beyond that may be flagged as abusive traffic or even a Denial of Service (DoS) attempt by automated WAF systems.

Q: Should I retry on a 403 Forbidden error?

No — not with the same IP. A 403 means the server understood your request but refused it. Retrying with identical headers and the same IP will produce the same result every time. Your only recourse is to either fix your credentials/headers or rotate your IP using a proxy service.

Q: What is the difference between backoff_factor and backoff_jitter?

backoff_factor controls the base exponential wait time (e.g., 1s, 2s, 4s, 8s). backoff_jitter adds a random extra amount (e.g., up to 0.5s) to each wait, so concurrent processes don’t all retry at exactly the same millisecond. Think of it as adding a little unpredictability to avoid synchronized floods.

Q: Can I use retry logic with async Python (asyncio)?

Yes. For asynchronous HTTP requests, use the aiohttp library instead of requests, and the Tenacity library for retry logic — it has full asyncio support via the @retry decorator on async def functions.
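Tenacity's @retry decorator can wrap an async def directly. As a dependency-free illustration of the same idea, here is a minimal hand-rolled asyncio backoff helper (a sketch of the pattern, not Tenacity's actual API):

```python
import asyncio
import random

async def retry_async(coro_factory, max_attempts=3, base_delay=1.0):
    """Minimal asyncio retry loop: exponential backoff with a little jitter.
    `coro_factory` is a zero-argument callable returning a fresh coroutine."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await coro_factory()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the last error
            await asyncio.sleep(base_delay * 2 ** (attempt - 1) + random.random() * 0.1)

# Demo: a coroutine that fails twice, then succeeds
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(asyncio.run(retry_async(flaky, max_attempts=5, base_delay=0.01)))  # → ok
```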

Q: What is the Retry-After header and should I always respect it?

The Retry-After header is an HTTP standard that lets a server tell clients how long to wait before retrying (as a number of seconds or an HTTP-date). It appears most commonly in 429 and 503 responses. You should always respect it — ignoring it and retrying sooner will result in continued failures and potentially escalating IP bans.

Conclusion

Mastering Python requests retry logic is the difference between a fragile one-off script and a professional, production-grade application. By correctly diagnosing error types, implementing scientific backoff and jitter strategies, respecting server signals like the Retry-After header, and combining retry logic with proxy rotation when IP-level blocks arise, you can build HTTP clients that remain resilient against the unpredictable realities of the modern web. In 2026, persistence — when done correctly and respectfully — is the key to reliable data collection and API integration.