Of course! Handling retries with timeouts is a very common and important task in Python, especially when dealing with unreliable network services or external APIs. A robust solution should handle both types of failures:

- Timeouts: The operation takes too long and you want to give up and try again.
- Other Exceptions: The operation fails for other reasons (e.g., connection error, HTTP 500 error) and you want to retry it.
Here’s a comprehensive guide, from the most basic approach to the most robust and recommended one.
The Problem: Naive Retry is Not Enough
A simple try...except block retries on any exception, but it does nothing about timeouts: if the request itself has no timeout, a stuck API call can hang your program indefinitely.
# BAD: This will hang forever if the call gets stuck
import requests

def call_api(url):
    try:
        response = requests.get(url)  # No timeout: can hang indefinitely
        response.raise_for_status()   # Raises HTTPError for bad responses (4xx or 5xx)
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Request failed: {e}. Retrying...")
        # This retry has no timeout and no attempt limit!
        return call_api(url)  # DANGER: unbounded recursion and hanging

# call_api("http://httpbin.org/delay/10")  # Each call waits the full 10 seconds; a truly hung server would block forever
Solution 1: The Manual Loop (Good for Understanding)
This is the most straightforward way to implement a retry mechanism. You use a for loop, a try...except block, and time.sleep() for delays.
Key Components:

- for loop: To control the number of retries.
- try...except: To catch specific exceptions.
- requests.get(timeout=...): To prevent the call from hanging indefinitely.
- time.sleep(): To wait before the next retry.
import time
import requests
from requests.exceptions import Timeout, RequestException

MAX_RETRIES = 3
RETRY_DELAY_SECONDS = 5
REQUEST_TIMEOUT = 2  # Timeout for a single request attempt

def call_api_with_retry(url):
    """Makes an API call with a manual retry loop."""
    for attempt in range(MAX_RETRIES):
        try:
            print(f"Attempt {attempt + 1}/{MAX_RETRIES} for {url}")
            # Set a timeout for the individual request
            response = requests.get(url, timeout=REQUEST_TIMEOUT)
            response.raise_for_status()  # Raise an exception for 4xx/5xx errors
            return response.json()
        except Timeout:
            print(f"Timeout occurred on attempt {attempt + 1}.")
        except RequestException as e:
            print(f"Request failed on attempt {attempt + 1} with error: {e}")
        # Wait before the next attempt (but not after the final one)
        if attempt < MAX_RETRIES - 1:
            print(f"Retrying in {RETRY_DELAY_SECONDS}s...")
            time.sleep(RETRY_DELAY_SECONDS)
    # If the loop finishes, all attempts have failed
    print(f"All {MAX_RETRIES} attempts failed for {url}.")
    return None  # Or raise a custom exception
# --- Example Usage ---
# With a 2-second per-attempt timeout:
# call_api_with_retry("http://httpbin.org/delay/1")  # Responds within the timeout and succeeds on the first attempt
# call_api_with_retry("http://httpbin.org/delay/5")  # Times out on all 3 attempts and returns None
Pros:
- Easy to understand the logic.
- No external dependencies.
Cons:
- Verbose and repetitive. You have to write this boilerplate for every function.
- Exponential backoff (waiting longer between each retry) and jitter have to be hand-rolled; a sketch of what that looks like follows this list.
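For comparison, here is a minimal sketch of what exponential backoff with jitter could look like in a manual loop. The doubling base, jitter range, and function name are illustrative choices, not part of the original example:

import random
import time
import requests
from requests.exceptions import RequestException

def call_api_with_manual_backoff(url, max_retries=3, base_delay=1.0):
    """Manual retry loop with exponential backoff and jitter (illustrative sketch)."""
    for attempt in range(max_retries):
        try:
            response = requests.get(url, timeout=2)
            response.raise_for_status()
            return response.json()
        except RequestException as e:
            if attempt == max_retries - 1:
                raise  # Out of attempts: propagate the last error
            # Double the delay on each retry and add random jitter
            delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
            print(f"Attempt {attempt + 1} failed ({e}); retrying in {delay:.1f}s...")
            time.sleep(delay)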
Solution 2: Using tenacity (Recommended & Most Robust)
For any serious application, using a well-tested library is the best practice. The tenacity library is the gold standard for this in Python. It's powerful, flexible, and handles all the edge cases for you.
First, install it:

pip install tenacity
tenacity allows you to configure retry behavior with a clean, declarative decorator.
Key Components:
- @retry(...): A decorator that wraps your function.
- stop=...: When to stop retrying (e.g., stop_after_attempt(3) for a maximum number of attempts).
- wait=...: The delay between retries; can be a fixed time or an exponential-backoff strategy.
- retry=...: Which exceptions should trigger a retry.
- reraise=...: Whether to re-raise the original exception after all retries are exhausted.
Example 1: Simple Retry with Timeout
This is equivalent to the manual loop above.
import requests
from requests.exceptions import Timeout, RequestException
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_fixed

@retry(
    stop=stop_after_attempt(3),                                  # Try at most 3 times
    wait=wait_fixed(5),                                          # Wait 5 seconds between retries
    retry=retry_if_exception_type((Timeout, RequestException)),  # Retry only on these exceptions
    reraise=True,                                                # Re-raise the last exception if all retries fail
)
def call_api_with_tenacity(url):
    """Makes an API call using the tenacity decorator."""
    print(f"Attempting to call {url}...")
    response = requests.get(url, timeout=2)
    response.raise_for_status()
    return response.json()
# --- Example Usage ---
try:
    # data = call_api_with_tenacity("http://httpbin.org/delay/5")
    data = call_api_with_tenacity("http://httpbin.org/status/500")  # Retries on HTTP 500 errors
    print("Success:", data)
except Exception as e:
    print("Final failure after all retries:", e)
Example 2: Exponential Backoff (Highly Recommended)
Exponential backoff is a crucial strategy for production systems. It reduces load on the failing service by increasing the wait time between retries.
import requests
from requests.exceptions import Timeout, RequestException
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

@retry(
    stop=stop_after_attempt(5),
    wait=wait_exponential(multiplier=1, min=4, max=10),          # Exponentially growing wait, clamped between 4s and 10s
    retry=retry_if_exception_type((Timeout, RequestException)),
)
def call_api_with_backoff(url):
    """Makes an API call with exponential backoff."""
    print(f"Attempting to call {url}...")
    response = requests.get(url, timeout=2)
    response.raise_for_status()
    return response.json()
# --- Example Usage ---
# call_api_with_backoff("http://httpbin.org/delay/5")
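Note that the timeout=2 above bounds each individual attempt, not the total time spent retrying. If you also want an overall deadline, tenacity lets you combine stop conditions with |. A minimal sketch, assuming a 30-second overall budget (an arbitrary example value):

import requests
from requests.exceptions import RequestException
from tenacity import retry, retry_if_exception_type, stop_after_attempt, stop_after_delay, wait_exponential

@retry(
    stop=stop_after_attempt(5) | stop_after_delay(30),   # Give up after 5 attempts OR 30 seconds total, whichever comes first
    wait=wait_exponential(multiplier=1, min=1, max=10),
    retry=retry_if_exception_type(RequestException),
    reraise=True,
)
def call_api_with_deadline(url):
    """Per-attempt timeout plus an overall retry deadline (illustrative sketch)."""
    response = requests.get(url, timeout=2)  # Bounds each attempt
    response.raise_for_status()
    return response.json()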
Solution 3: Using requests.Session with requests.adapters
This is a more advanced, lower-level approach that is extremely efficient if you are making many requests to the same host. It leverages urllib3's built-in retry mechanism.
Key Components:

- requests.Session(): An object that persists parameters across requests.
- requests.adapters.HTTPAdapter: Allows you to configure how requests are sent.
- max_retries: The number of retries, or a urllib3 Retry object describing the full strategy.
- pool_connections / pool_maxsize: Manage connection pooling for performance (tuning sketch at the end of this section).
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def create_session_with_retries(retries=3, backoff_factor=0.3):
    """Creates a requests Session with a robust retry strategy."""
    session = requests.Session()

    # Define the retry strategy
    # total: total number of retries
    # backoff_factor: a backoff factor to apply between attempts
    # status_forcelist: a set of HTTP status codes to retry on
    retry_strategy = Retry(
        total=retries,
        read=retries,
        connect=retries,
        backoff_factor=backoff_factor,
        status_forcelist=(500, 502, 504),  # Retry on these server errors
    )

    # Mount the retry strategy to the session for HTTP and HTTPS
    adapter = HTTPAdapter(max_retries=retry_strategy)
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session
# --- Example Usage ---
session = create_session_with_retries()
try:
    # The session automatically retries connection errors and the specified
    # status codes (500, 502, 504). Note: this does NOT set a timeout by
    # default; you still need to pass one to get().
    response = session.get("http://httpbin.org/status/502", timeout=5)
    response.raise_for_status()  # Raises if the status is still not 2xx after retries
    print("Success:", response.json())
except requests.exceptions.RequestException as e:
    print("Final failure after session retries:", e)
Pros:
- Very performant due to connection pooling.
- Efficient for making many requests to the same server.
Cons:
- More complex to set up.
- The retry logic is tied to the Session object, not to individual functions.
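
Since the performance advantage comes from connection pooling, the pool sizes mentioned in the key components can also be tuned on the same adapter. A short sketch (the pool sizes shown are arbitrary example values):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
adapter = HTTPAdapter(
    max_retries=Retry(total=3, backoff_factor=0.3, status_forcelist=(500, 502, 504)),
    pool_connections=10,  # Number of host pools to keep cached
    pool_maxsize=20,      # Maximum connections kept alive per pool
)
session.mount("https://", adapter)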
