Of course! The try...except block is a fundamental concept in Python for error handling. When you use it with the requests library, you're writing more robust and resilient code that can gracefully handle network problems, bad URLs, or unexpected server responses.

Here’s a complete guide, from a simple example to a more advanced, production-ready one.
The Core Problem: Why Use try...except with requests?
A network request can fail for many reasons:
- Connection Error: Your computer is offline, or the server is down.
- Timeout Error: The server takes too long to respond.
- HTTP Error: The server responds, but with an error status code like 404 Not Found or 500 Internal Server Error.
- Invalid URL: The URL you're trying to access is malformed.
Without error handling, your program will crash on any of these errors. try...except allows you to "catch" these errors and decide what to do next.
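
For example, with no error handling at all, a request to an unreachable host raises an exception and stops the program on the spot (the hostname below is just a placeholder that should not resolve):

```python
import requests

# No try...except here: if this host can't be reached, the ConnectionError
# propagates and the script crashes before reaching the next line.
response = requests.get("https://this-host-does-not-exist.example", timeout=5)
print("This line is never reached if the request fails.")
```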
The Basic Example: Catching a General Exception
This is the simplest way to handle any error from requests. It's a good starting point, but it's not ideal for production because it treats every request failure the same way, so you can't react differently to, say, a timeout than to a 404.

```python
import requests

url = "https://api.github.com"

try:
    # This is the code that might fail
    response = requests.get(url)

    # If the request was successful, print the status code
    print(f"Success! Status Code: {response.status_code}")
    print(f"Response Headers: {response.headers}")
except requests.exceptions.RequestException as e:
    # This block will run if ANY error from the requests library occurs
    print(f"An error occurred: {e}")

print("\nProgram continues to run here...")
```
What's happening here?
- try: Python attempts to run the code inside this block.
- requests.get(url): We make the HTTP GET request.
- except requests.exceptions.RequestException as e: If the try block raises any exception from the requests library (like ConnectionError, Timeout, HTTPError, etc.), it is caught here. requests.exceptions.RequestException is the base class for all exceptions raised by requests, so catching it acts as a safety net for any request-related problem (see the quick check below).
- as e assigns the exception object to the variable e, so you can print a helpful error message.
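
Because RequestException is the parent class of the more specific exceptions, you can confirm the hierarchy directly in an interpreter:

```python
import requests

# ConnectionError, Timeout, and HTTPError all inherit from RequestException,
# which is why one broad handler catches them all.
print(issubclass(requests.exceptions.ConnectionError, requests.exceptions.RequestException))  # True
print(issubclass(requests.exceptions.Timeout, requests.exceptions.RequestException))          # True
print(issubclass(requests.exceptions.HTTPError, requests.exceptions.RequestException))        # True
```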
A More Robust Example: Handling Specific Exceptions
Better code handles specific errors differently. For example, a 404 Not Found is a different problem than a ConnectionError.
```python
import requests

# A URL that will cause a 404 error
url_404 = "https://api.github.com/non-existent-page"

# A URL that is invalid
url_invalid = "htp://invalid-url.com"

# A URL that might time out
url_timeout = "https://httpbin.org/delay/10"  # This URL waits for 10 seconds


def make_request(url):
    print(f"\n--- Attempting to request: {url} ---")
    try:
        # Set a timeout to prevent the program from hanging indefinitely
        response = requests.get(url, timeout=5)

        # Raise an HTTPError for bad status codes (4xx or 5xx)
        response.raise_for_status()

        # If we get here, the request was successful (status code 2xx)
        print(f"Success! Status Code: {response.status_code}")
        # print(response.json())  # You can process the JSON data here

    except requests.exceptions.HTTPError as http_err:
        # This is raised by response.raise_for_status()
        print(f"HTTP Error occurred: {http_err}")
        print(f"Status Code: {response.status_code}")
    except requests.exceptions.ConnectionError as conn_err:
        # Problem with the network (e.g., DNS failure, refused connection)
        print(f"Connection Error occurred: {conn_err}")
    except requests.exceptions.Timeout as timeout_err:
        # The request timed out
        print(f"Timeout Error occurred: {timeout_err}")
    except requests.exceptions.RequestException as req_err:
        # A catch-all for any other requests-related errors
        print(f"An unexpected error occurred: {req_err}")


# --- Let's test it ---
make_request(url_404)
make_request(url_invalid)
make_request(url_timeout)
```
Key Improvements in this Example:
- response.raise_for_status(): This very useful method checks whether the response status code indicates an error (400 or higher). If it does, it raises an HTTPError, letting you handle all bad HTTP responses in one place (a conceptual sketch of this check follows the list).
- Specific except blocks:
  - except requests.exceptions.HTTPError as http_err: Catches errors from raise_for_status() (e.g., 404, 500).
  - except requests.exceptions.ConnectionError as conn_err: Catches network-level connection issues.
  - except requests.exceptions.Timeout as timeout_err: Catches errors when the server doesn't respond in time.
  - except requests.exceptions.RequestException as req_err: The final safety net for any other requests error.
- timeout=5: This is a crucial parameter. It tells requests to stop waiting for a response after 5 seconds, preventing your program from hanging forever if the server is unresponsive.
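
If it helps to see what raise_for_status() is doing, here is a rough conceptual sketch. It is simplified and not the library's exact code: the idea is simply that an HTTPError is raised whenever the status code falls in the 4xx or 5xx range.

```python
import requests

response = requests.get("https://api.github.com/non-existent-page", timeout=5)

try:
    # Roughly what response.raise_for_status() does internally
    # (a simplified sketch, not the library's exact implementation):
    if 400 <= response.status_code < 600:
        raise requests.exceptions.HTTPError(
            f"{response.status_code} Error for url: {response.url}", response=response
        )
except requests.exceptions.HTTPError as err:
    print(f"Caught: {err}")
```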
Best Practice: Using finally
The finally block is executed no matter what—whether the try block succeeded or an exception was caught. It's perfect for cleanup actions, like closing a file or a database connection.

In requests, you don't usually need to manually close connections (the library handles it), but it's good practice to know how to use finally.
```python
import requests

url = "https://api.github.com"

try:
    response = requests.get(url, timeout=5)
    response.raise_for_status()
    print("Request successful!")
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")
finally:
    # This code runs regardless of success or failure
    print("Request attempt finished. Cleaning up...")
    # If you had a file or a database connection, you'd close it here, e.g.:
    # if 'connection' in locals():
    #     connection.close()
```
Putting It All Together: A Reusable Function
For real applications, you'll want to wrap this logic in a function.
```python
import requests
import json


def fetch_data(url, timeout=5):
    """
    Fetches data from a URL with robust error handling.

    Args:
        url (str): The URL to fetch data from.
        timeout (int): The timeout in seconds.

    Returns:
        dict: The JSON response data if successful, None otherwise.
    """
    try:
        print(f"Fetching data from {url}...")
        response = requests.get(url, timeout=timeout)
        response.raise_for_status()  # Raise an exception for bad status codes

        # If the response is JSON, parse it. Check with 'in' because the header
        # may include a charset, e.g. 'application/json; charset=utf-8'.
        if 'application/json' in response.headers.get('content-type', ''):
            return response.json()
        else:
            print("Response is not JSON.")
            return None

    except requests.exceptions.HTTPError as http_err:
        print(f"HTTP error: {http_err} (Status Code: {response.status_code})")
    except requests.exceptions.ConnectionError as conn_err:
        print(f"Connection error: {conn_err}")
    except requests.exceptions.Timeout as timeout_err:
        print(f"Timeout error: {timeout_err}")
    except json.JSONDecodeError:
        # Catch this before the RequestException catch-all: in recent versions of
        # requests, the JSON decoding error is also a RequestException subclass.
        print("Error: Failed to decode JSON response.")
    except requests.exceptions.RequestException as req_err:
        print(f"An unexpected error occurred: {req_err}")

    return None  # Return None if anything went wrong


# --- Usage Example ---

# Success case
data = fetch_data("https://api.github.com/users/octocat")
if data:
    print(f"Successfully fetched user: {data.get('name')}")

print("-" * 20)

# Failure case (404)
data = fetch_data("https://api.github.com/users/this-user-does-not-exist-12345")
if data:
    print(f"Successfully fetched user: {data.get('name')}")
else:
    print("Failed to fetch user data.")

print("-" * 20)

# Failure case (Timeout)
data = fetch_data("https://httpbin.org/delay/10", timeout=3)
if data:
    print("Successfully fetched data from slow URL.")
else:
    print("Failed to fetch data from slow URL (timed out).")
```
This final example is a great template you can adapt for your own projects. It handles specific errors, parses JSON safely, and returns a predictable result (None on failure, data on success).
