Of course! Downloading images in Python is a common task. The best method depends on your needs, but the most popular and robust approach is the `requests` library.

Here’s a complete guide, starting with the simplest method and moving to more advanced, robust solutions.
## Method 1: The Easiest & Most Common Way (Using `requests`)
This is the standard approach for most use cases. It's simple, powerful, and handles redirects, cookies, and other HTTP features automatically.
### Step 1: Install the `requests` library
If you don't have it installed, open your terminal or command prompt and run:

```bash
pip install requests
```
### Step 2: Write the Python script
This script downloads a single image from a URL and saves it to a file.

```python
import requests

# The URL of the image you want to download
image_url = "https://www.python.org/static/community_logos/python-logo-master-v3-TM.png"

# Send a GET request to the URL
try:
    response = requests.get(image_url, stream=True)  # stream=True is important for large files
    response.raise_for_status()  # Raises an exception for HTTP errors (4xx or 5xx)

    # Define the local file name
    file_name = "python-logo.png"

    # Open a file in write-binary mode and save the content
    with open(file_name, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)

    print(f"Image downloaded successfully and saved as {file_name}")

except requests.exceptions.RequestException as e:
    print(f"Error downloading the image: {e}")
```
### How it works
- `import requests`: Imports the necessary library.
- `requests.get(image_url, stream=True)`: Sends an HTTP GET request to the `image_url`. `stream=True` is a crucial optimization: instead of downloading the entire image into memory at once, it downloads it in small chunks. This is essential for large files to avoid running out of memory.
- `response.raise_for_status()`: This is good practice. If the download fails for any reason (e.g., 404 Not Found, 403 Forbidden), it raises an `HTTPError`, allowing you to handle the error gracefully.
- `with open(file_name, "wb") as f:`: Opens a file in write-binary (`wb`) mode. This is necessary because image files are binary data, not text. The `with` statement ensures the file is closed automatically even if errors occur.
- `for chunk in response.iter_content(chunk_size=8192):`: This loop iterates over the image data in chunks (8192 bytes is a common size); `f.write(chunk)` writes each chunk to the file until the entire image is downloaded.
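One optional hardening step, not part of the script above: check the response's `Content-Type` header before writing to disk, so an HTML error page served with a 200 status doesn't get saved as a `.png`. A minimal sketch:

```python
import requests

image_url = "https://www.python.org/static/community_logos/python-logo-master-v3-TM.png"

response = requests.get(image_url, stream=True, timeout=10)
response.raise_for_status()

# Only save the response body if the server actually sent an image
content_type = response.headers.get("Content-Type", "")
if content_type.startswith("image/"):
    with open("python-logo.png", "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
else:
    print(f"Skipped: unexpected Content-Type {content_type!r}")
```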
## Method 2: Downloading Multiple Images from a List
You can easily extend the script to download a list of images.
```python
import requests

# A list of image URLs to download
image_urls = [
    "https://www.python.org/static/community_logos/python-logo-master-v3-TM.png",
    "https://www.djangoproject.com/m/img/logos/django-logo-negative.svg",
    "https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_272x92dp.png",
]

for i, url in enumerate(image_urls):
    try:
        # Get the file name from the URL (or create one)
        file_name = url.split("/")[-1]

        # Handle cases where the URL might not have a clear filename
        if not file_name or '.' not in file_name:
            file_name = f"image_{i}.jpg"  # Default extension

        response = requests.get(url, stream=True)
        response.raise_for_status()

        with open(file_name, "wb") as f:
            for chunk in response.iter_content(chunk_size=8192):
                f.write(chunk)

        print(f"Downloaded {file_name}")

    except requests.exceptions.RequestException as e:
        print(f"Failed to download {url}. Reason: {e}")
```
## Method 3: A More Robust & Reusable Function
For cleaner code, it's good practice to wrap the download logic in a function. This version also adds a request timeout, creates the destination directory automatically, and returns a success flag so callers can react to failures.
```python
import requests
import os

def download_image(url, save_path, timeout=10):
    """
    Downloads an image from a URL and saves it to a specified path.

    Args:
        url (str): The URL of the image.
        save_path (str): The full path (including filename) to save the image.
        timeout (int): Timeout for the request in seconds.
    """
    try:
        # Create the destination directory if the path includes one
        # (os.makedirs("") would raise an error for bare filenames)
        directory = os.path.dirname(save_path)
        if directory:
            os.makedirs(directory, exist_ok=True)

        response = requests.get(url, stream=True, timeout=timeout)
        response.raise_for_status()  # Raise an exception for bad status codes

        with open(save_path, "wb") as f:
            for chunk in response.iter_content(chunk_size=8192):
                f.write(chunk)

        print(f"Image downloaded successfully and saved as {save_path}")
        return True
    except requests.exceptions.RequestException as e:
        print(f"Error downloading image from {url}: {e}")
        return False

# --- Example Usage ---
image_url = "https://www.python.org/static/community_logos/python-logo-master-v3-TM.png"

# Save into a 'downloads' folder (created by the function if needed)
save_directory = "downloads"
file_name = "python-logo.png"
save_path = os.path.join(save_directory, file_name)

download_image(image_url, save_path)
```
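For flaky networks, you can layer automatic retries on top of this. Here is a sketch using `requests.Session` with urllib3's `Retry`; the retry count, backoff factor, and status list are arbitrary starting points, not recommendations from `requests` itself:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session(retries=3, backoff=0.5):
    """Build a Session that retries transient failures with exponential backoff."""
    session = requests.Session()
    retry = Retry(
        total=retries,
        backoff_factor=backoff,  # exponential delay between attempts
        status_forcelist=[429, 500, 502, 503, 504],  # retry on these statuses
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount("https://", adapter)
    session.mount("http://", adapter)
    return session

session = make_session()
response = session.get(
    "https://www.python.org/static/community_logos/python-logo-master-v3-TM.png",
    stream=True,
    timeout=10,
)
```

You can pass such a session into a function like `download_image` (or use it in place of `requests.get`) without changing the rest of the logic.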
## Method 4: Downloading Images from a Web Page (Advanced)
This is a more complex task that involves two steps:
1. Scraping the web page to find all image URLs (usually in `<img src="...">` tags).
2. Downloading each of those URLs using the `requests` method from before.
For this, you'll need two libraries: `requests` for fetching the page and `BeautifulSoup` (from `beautifulsoup4`) for parsing the HTML.

### Step 1: Install the libraries

```bash
pip install requests beautifulsoup4
```
### Step 2: Write the script
```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
import os

# Note: this script reuses the download_image() function from Method 3,
# so make sure it's defined (or imported) in the same file.

def get_image_urls(url):
    """Finds all image URLs on a given web page."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()

        soup = BeautifulSoup(response.text, 'html.parser')

        # Find all <img> tags and get their 'src' attribute
        img_tags = soup.find_all('img')
        img_urls = []
        for img in img_tags:
            img_url = img.get('src')
            if img_url:
                # Resolve relative URLs (e.g., "/images/pic.jpg" or
                # "//cdn.example.com/pic.jpg") against the page URL
                img_urls.append(urljoin(url, img_url))
        return img_urls
    except requests.exceptions.RequestException as e:
        print(f"Error fetching the page {url}: {e}")
        return []

def download_images_from_page(page_url, save_dir="downloaded_images"):
    """Downloads all images found on a web page."""
    img_urls = get_image_urls(page_url)
    if not img_urls:
        print("No images found on the page.")
        return

    # Create the download directory
    os.makedirs(save_dir, exist_ok=True)

    downloaded_count = 0
    for img_url in img_urls:
        # Get a simple filename from the URL, dropping any query string
        file_name = img_url.split("/")[-1].split("?")[0]
        if not file_name or '.' not in file_name:
            file_name = "image.jpg"  # Default name and extension

        save_path = os.path.join(save_dir, file_name)

        # Use the download function from Method 3
        if download_image(img_url, save_path):
            downloaded_count += 1

    print(f"\nFinished. Downloaded {downloaded_count} images to '{save_dir}'.")

# --- Example Usage ---
# Replace with a real URL you want to scrape
target_url = "https://en.wikipedia.org/wiki/Python_(programming_language)"
download_images_from_page(target_url)
```
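One caveat: many modern pages lazy-load images, putting the real URL in attributes like `data-src` or `srcset` instead of `src`. Which attributes are present depends entirely on the site; here is a sketch of a helper that also checks some common conventions (the attribute names are typical, not guaranteed):

```python
def extract_img_url(img_tag):
    """Return the most likely image URL from a BeautifulSoup <img> tag, or None."""
    # Try common lazy-loading attributes before the plain src
    for attr in ("data-src", "data-original", "src"):
        value = img_tag.get(attr)
        if value:
            return value
    # srcset holds "url width, url width, ..." pairs; take the first URL
    srcset = img_tag.get("srcset")
    if srcset:
        return srcset.split(",")[0].split()[0]
    return None
```

When scraping, also be mindful of the site's robots.txt and terms of service.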
## Summary of Methods
| Method | Best For | Key Libraries | Pros | Cons |
|---|---|---|---|---|
| Simple Download | Downloading one or a few specific images. | `requests` | Simple, fast, standard. | Not scalable for many tasks. |
| Batch Download | Downloading a predefined list of images. | `requests` | Easy to implement, loops efficiently. | Requires a pre-made list of URLs. |
| Reusable Function | Building larger applications, cleaner code. | `requests`, `os` | Modular, robust, handles paths and errors well. | Slightly more setup. |
| Web Scraping | Automatically finding and downloading all images from a webpage. | `requests`, `BeautifulSoup` | Powerful, automates discovery. | More complex, requires parsing HTML; be mindful of robots.txt. |
