How to set up a proxy for urllib

In this article, we'll explore how to set up a proxy for urllib, Python 3's built-in library for making HTTP requests. We'll provide a code snippet that shows how to define a proxy server and port number, create a ProxyHandler object, and use it to make requests through the proxy.

To set up a proxy for urllib, use the urllib.request.ProxyHandler class to define the proxy server and port number, then pass the handler to urllib.request.build_opener to create an opener object that routes requests through the proxy.

Here is an example code snippet that shows how to set up a proxy using urllib:
from urllib import request

# Define the proxy server and port number
proxy_server = 'http://yourproxyserver.com'
proxy_port = '8080'

# Create a ProxyHandler object with the proxy server and port
proxy_handler = request.ProxyHandler(
    {
        'http': f'{proxy_server}:{proxy_port}',
        'https': f'{proxy_server}:{proxy_port}'
    }
)

# Create an opener object with the ProxyHandler
opener = request.build_opener(proxy_handler)

# Use the opener to make a request through the proxy
response = opener.open('http://example.com')

# Print the response
print(response.read())

In this example, replace http://yourproxyserver.com with the URL of your proxy server and 8080 with the port it listens on. Then replace http://example.com with the URL of the website you want to access through the proxy.
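Two useful variations: if your proxy requires authentication, urllib accepts credentials embedded in the proxy URL; and you can install the opener globally so that plain request.urlopen() calls go through the proxy. Here is a minimal sketch (the host, port, and credentials below are placeholders):

from urllib import request

# Credentials embedded in the proxy URL are sent as a
# Proxy-Authorization header (basic auth)
proxy_handler = request.ProxyHandler(
    {
        'http': 'http://user:password@yourproxyserver.com:8080',
        'https': 'http://user:password@yourproxyserver.com:8080'
    }
)

opener = request.build_opener(proxy_handler)

# Install the opener globally, so every request.urlopen()
# call in this process uses the proxy
request.install_opener(opener)

response = request.urlopen('http://example.com')
print(response.read())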

If you don't have your own proxy, you can obtain one from Proxy Port via the proxyport2 package.
Install the package:
$ pip install proxyport2

Set the API_KEY and call the get_proxy function:
from proxyport2 import set_api_key, get_proxy

set_api_key('<API_KEY>')

print(get_proxy())

You can obtain an API Key for free; see the detailed instructions.
Here's an example that combines the previous steps:
from urllib import request

from proxyport2 import set_api_key, get_proxy


set_api_key('<API_KEY>')

proxy = get_proxy()
proxy_handler = request.ProxyHandler(
    {'http': proxy, 'https': proxy})

opener = request.build_opener(proxy_handler)
response = opener.open('https://example.com', timeout=5)

print(response.read())

Don't forget to replace <API_KEY> with your actual API Key.

Sometimes, you may encounter timeout errors like this:
TimeoutError: The read operation timed out

Public proxies are not very reliable and don't last long. To work around this, retry failed requests with a fresh proxy:
from urllib import request

from proxyport2 import set_api_key, get_proxy


set_api_key('<API_KEY>')

# Try up to 10 proxies before giving up
for i in range(10):
    proxy = get_proxy()
    proxy_handler = request.ProxyHandler(
        {'http': proxy, 'https': proxy})
    opener = request.build_opener(proxy_handler)

    try:
        response = opener.open('https://example.com', timeout=5)
        print(response.read())
        break  # success, stop retrying
    except Exception as e:
        # This proxy failed; log the error and try the next one
        print(e)

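
Catching Exception works, but you can narrow the except clause to the errors a dead or slow proxy typically raises, and use the loop's else branch to report total failure. Here is a sketch of the same retry loop under that approach (it won't cover every failure mode a flaky public proxy can produce):

import socket
from urllib import error, request

from proxyport2 import set_api_key, get_proxy


set_api_key('<API_KEY>')

for i in range(10):
    proxy = get_proxy()
    proxy_handler = request.ProxyHandler(
        {'http': proxy, 'https': proxy})
    opener = request.build_opener(proxy_handler)

    try:
        response = opener.open('https://example.com', timeout=5)
        print(response.read())
        break
    except (error.URLError, TimeoutError, socket.timeout) as e:
        # URLError covers connection failures; TimeoutError and
        # socket.timeout cover reads that exceed the 5-second limit
        print(f'Attempt {i + 1} failed: {e}')
else:
    print('All attempts failed')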
If you're planning to scrape multiple pages, consider using a web scraping framework like Scrapy instead of low-level urllib.
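For reference, per-request proxies in Scrapy are set through the request's meta dictionary and handled by the built-in HttpProxyMiddleware. A minimal sketch (the spider name, URL, and proxy address are placeholders):

import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'example'

    def start_requests(self):
        # Scrapy's HttpProxyMiddleware reads the 'proxy' key from
        # request.meta and routes the request through that proxy
        yield scrapy.Request(
            'https://example.com',
            meta={'proxy': 'http://yourproxyserver.com:8080'},
        )

    def parse(self, response):
        self.logger.info(response.text[:100])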