How to Solve "ConnectionError: Max retries exceeded with url" in Python requests
The `requests.exceptions.ConnectionError: Max retries exceeded with url` error in Python's `requests` library indicates a problem establishing a network connection to the target server. This guide explains the common causes of this error and provides practical solutions, including handling retries, timeouts, and SSL certificate verification.
Understanding the Error
The `requests.exceptions.ConnectionError` is a broad exception that encompasses various network-related problems. The `Max retries exceeded` part means that `requests` tried multiple times to establish a connection but failed each time. This usually doesn't indicate a problem with your code itself, but rather with the network or the server you're trying to reach.
Common Causes and Solutions
Incorrect or Incomplete URL
The most basic cause is a typo in the URL or a missing protocol (`http://` or `https://`):
```python
import requests

# ⛔️ INCORRECT: Missing protocol
# response = requests.get('example.com/posts', timeout=10)

# ✅ CORRECT: Include http:// or https://
response = requests.get('https://example.com/posts', timeout=10)
response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
print(response.status_code)
```
- Always include `http://` or `https://` in your URL; omitting the protocol triggers this error.
Network Connectivity Issues
- No Internet Connection: Ensure your computer or server has an active internet connection.
- DNS Problems: Can your computer resolve the domain name? Try `ping example.com` (replace `example.com` with the actual domain) from your terminal. If the ping fails, you might have a DNS issue.
- Firewall/Proxy: A firewall or proxy server might be blocking the connection.
Server-Side Issues (Rate Limiting, Server Down)
The server you're trying to reach might be:
- Down: The server is temporarily or permanently unavailable.
- Overloaded: The server is too busy to handle your request.
- Rate Limiting: The server is intentionally limiting the number of requests you can make in a given time period (very common with APIs).
Solutions:
- Wait and Retry: For temporary server issues, the best approach is often to wait and retry the request later.
- Check Server Status: If possible, check the server's status page or API documentation for any known issues.
- Respect `Retry-After` Header: Some APIs return a `Retry-After` HTTP header indicating how long to wait before retrying. Your code should respect this header.
- Implement Rate Limiting: If you're making many requests, implement your own rate limiting to avoid overwhelming the server and getting blocked. Use `time.sleep()` or a more sophisticated rate-limiting library.
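As a sketch of respecting the `Retry-After` header, the helpers below read the header and sleep before retrying on a 429 response. The parsing is illustrative: the header may also be an HTTP date, which this sketch does not handle, and the retry count is an arbitrary example.

```python
import time
import requests

def retry_after_seconds(headers, default=1.0):
    """Return how long to wait based on a Retry-After header (seconds form only)."""
    value = headers.get('Retry-After')
    if value is None:
        return default
    try:
        return float(value)
    except ValueError:
        return default  # e.g. an HTTP-date value; fall back to the default

def get_with_retry_after(url, attempts=3):
    """Illustrative helper: retry on 429 Too Many Requests, sleeping as the server asks."""
    for _ in range(attempts):
        response = requests.get(url, timeout=10)
        if response.status_code != 429:
            return response
        time.sleep(retry_after_seconds(response.headers))
    return response  # Still rate-limited after all attempts
```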
SSL Certificate Verification Issues
If you're using `https://` and `requests` can't verify the server's SSL certificate, you might see a `requests.exceptions.SSLError` (which is a subclass of `ConnectionError`).
- Solution (Generally Discouraged): You can disable SSL verification by setting `verify=False`:

  ```python
  import requests

  response = requests.get('https://example.com', verify=False, timeout=10)
  # WARNING: This disables security checks! Only do this if you
  # understand the risks and are certain the server is trustworthy
  # (e.g., a local development server).
  ```

  Disabling SSL verification should only be done in local development or testing, and only when you understand the implications.
- Better Solutions:
  - Fix the Certificate: The best solution is to fix the underlying certificate issue on the server.
  - Provide the CA Bundle: If you have a custom Certificate Authority (CA), you can tell `requests` to use it:

    ```python
    response = requests.get('https://example.com', verify='/path/to/ca_bundle.pem')
    ```

  - Update `certifi`: `requests` uses the `certifi` package for its CA bundle. Make sure it's up-to-date: `pip install --upgrade certifi`
Firewall or Proxy Issues
A firewall or proxy server might be blocking your requests.
- Solution:
- Check your firewall settings.
- Configure a proxy with the `proxies` parameter if your network requires one.
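If your network routes traffic through a proxy, `requests` accepts a `proxies` mapping of scheme to proxy URL. The proxy address below is a placeholder; substitute your own.

```python
import requests

# Placeholder proxy address -- replace with your actual proxy
proxies = {
    'http': 'http://proxy.example.com:8080',
    'https': 'http://proxy.example.com:8080',
}

def fetch_via_proxy(url):
    """Route the request through the configured proxy."""
    return requests.get(url, proxies=proxies, timeout=10)
```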
Implementing Retries with Backoff
The `requests` library doesn't automatically retry failed connections (except in a few very specific cases related to keep-alive). You should implement your own retry logic, especially when dealing with network issues or potentially unreliable servers. The recommended way to do this is to use `requests.adapters.HTTPAdapter` with a `Retry` object:
```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_request():
    session = requests.Session()
    retry = Retry(connect=3, backoff_factor=0.5)  # Retry configuration
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)   # Mount for all http:// URLs
    session.mount('https://', adapter)  # Mount for all https:// URLs

    url = 'https://reqres.in/api/users'
    response = session.get(url, timeout=10)  # Use the session object
    response.raise_for_status()  # Check for HTTP errors
    parsed = response.json()
    print(parsed)

make_request()
```
- `requests.Session()`: Creates a session object. Using a session is good practice for multiple requests to the same host, as it can reuse the underlying TCP connection.
- `Retry(connect=3, backoff_factor=0.5)`: Configures the retry behavior:
  - `connect=3`: Retry up to 3 times on connection errors (this doesn't include retries on HTTP status codes like 500).
  - `backoff_factor=0.5`: Implements exponential backoff. The delay between retries will be:
    - First retry: (no delay, immediate retry)
    - Second retry: 0.5 seconds
    - Third retry: 1 second
    - ...and so on. This prevents hammering a failing server.
- `HTTPAdapter(max_retries=retry)`: Creates an adapter that applies the retry logic.
- `session.mount(...)`: Attaches the adapter to the session for both `http://` and `https://` URLs.
- `response.raise_for_status()`: This line is very important. It checks the HTTP status code of the response. If the status code indicates an error (4xx or 5xx), it raises an `HTTPError` exception. This is a good way to handle HTTP errors explicitly.
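Since `connect=3` only covers connection failures, a configuration that also retries on server error status codes can add `total` and `status_forcelist`. The specific numbers below are illustrative, not a recommendation:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry = Retry(
    total=5,                               # Overall cap across all retry types
    connect=3,                             # Retries on connection errors
    backoff_factor=0.5,                    # Exponential backoff between attempts
    status_forcelist=[500, 502, 503, 504]  # Also retry these HTTP status codes
)

session = requests.Session()
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)
# session.get('https://example.com', timeout=10) would now also retry on 5xx responses
```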
Setting Timeouts
Always set timeouts with `requests` to prevent your code from hanging indefinitely if a server is unresponsive:
```python
response = requests.get('https://example.com', timeout=5)  # 5-second timeout
```
- `timeout=5`: Specifies a timeout of 5 seconds. If the server doesn't respond within 5 seconds, a `requests.exceptions.Timeout` exception will be raised. You can catch this exception to handle timeouts gracefully.
- You can also specify separate connect and read timeouts as a tuple: `timeout=(connect_timeout, read_timeout)`.
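As a sketch, a tuple timeout and explicit `Timeout` handling look like this; the values are illustrative (the `requests` documentation suggests a connect timeout slightly above a multiple of 3, e.g. 3.05):

```python
import requests

TIMEOUT = (3.05, 27)  # (connect timeout, read timeout) in seconds

def fetch(url):
    """Return the response, or None if the server was too slow."""
    try:
        response = requests.get(url, timeout=TIMEOUT)
        response.raise_for_status()
        return response
    except requests.exceptions.Timeout:
        print(f"{url} timed out")
        return None
```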
Simulating a Browser with Headers
Some servers block requests that don't look like they're coming from a web browser. You can often work around this by setting the `User-Agent` header:
```python
import requests
from fake_useragent import UserAgent  # Generates realistic browser user agents

def make_request():
    ua = UserAgent()
    headers = {'User-Agent': ua.chrome}  # Set a Chrome-like user agent
    try:
        url = 'https://reqres.in/api/users'
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()  # Check for HTTP errors
        parsed = response.json()
        print(parsed)
    except requests.exceptions.RequestException as e:  # Covers all requests errors
        print(f"Request failed: {e}")
    except Exception as e:
        print(e)  # Handle any other unexpected errors

make_request()
```
- The `fake-useragent` library is used to generate realistic user agents.
- The `headers` parameter sets the user agent to simulate requests made from a web browser.
- It's good practice to catch `requests.exceptions.RequestException`, which covers connection errors, timeouts, and the HTTP errors raised by `raise_for_status()`.
- Always use `response.raise_for_status()` to make sure any HTTP error response is explicitly handled.
Conclusion
The `ConnectionError: Max retries exceeded with url` error usually indicates network problems, server issues, or an invalid URL.
The key takeaways are:
- Verify your URL: Ensure it's a string and includes the protocol (`http://` or `https://`).
- Check network connectivity: Make sure you have a working internet connection.
- Implement retries with backoff: Use `requests.adapters.HTTPAdapter` with `Retry` to handle temporary network issues.
- Set timeouts: Use the `timeout` parameter to prevent your script from hanging indefinitely.
- Handle SSL certificate issues: If necessary, configure `requests` to use a custom CA bundle, or (in very limited, controlled situations) disable verification.
- Simulate a browser: If necessary, set the `User-Agent` header.
- Catch exceptions: Handle `requests.exceptions.RequestException` and its subclasses to gracefully handle errors.
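As a sketch under the assumptions above (illustrative retry counts and timeout values), these takeaways can be combined into a single helper:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def build_session():
    """Session with connection retries and exponential backoff."""
    session = requests.Session()
    retry = Retry(connect=3, backoff_factor=0.5)
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    return session

def robust_get(url, timeout=(3.05, 27)):
    """Fetch a URL with retries, timeouts, and explicit error handling."""
    session = build_session()
    try:
        response = session.get(url, timeout=timeout)
        response.raise_for_status()  # Surface 4xx/5xx as exceptions
        return response
    except requests.exceptions.RequestException as e:
        print(f"Request failed: {e}")
        return None
```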
By following these steps, you can make your `requests` calls much more robust and reliable.