This is the hands-on guide for developers who need to integrate proxy rotation into Python applications. We cover synchronous and async implementations, error handling, session management, and performance optimization.
Basic Setup with Requests
```python
import requests

# ZentisLabs rotating proxy: new IP per request
PROXY = "http://USER:PASS@gate.zentislabs.com:7777"
proxies = {"http": PROXY, "https": PROXY}

response = requests.get(
    "https://httpbin.org/ip",
    proxies=proxies,
    timeout=15,
)
print(response.json())
# {"origin": "185.234.xx.xx"} (residential IP)
```

Every request through gate.zentislabs.com:7777 gets a different residential IP automatically.
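Hard-coding credentials as in the snippet above is fine for testing, but in shared codebases it is safer to load them from the environment. A minimal sketch, assuming environment variables named `ZENTIS_USER` and `ZENTIS_PASS` (our own names, not part of any ZentisLabs API):

```python
import os

def proxies_from_env():
    """Build a requests-style proxies dict from environment variables.

    ZENTIS_USER / ZENTIS_PASS are assumed names; set them however your
    deployment manages secrets (shell profile, .env loader, CI secrets).
    """
    user = os.environ["ZENTIS_USER"]
    password = os.environ["ZENTIS_PASS"]
    proxy = f"http://{user}:{password}@gate.zentislabs.com:7777"
    return {"http": proxy, "https": proxy}
```

This keeps the proxy URL format in one place and the secrets out of version control.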
Sticky Sessions (Same IP)
For multi-step workflows where you need the same IP across requests:
```python
# Append _session-{id} to the username for sticky sessions.
# The same IP is maintained for up to 30 minutes.
STICKY_PROXY = "http://USER_session-checkout123:PASS@gate.zentislabs.com:7777"
sticky_proxies = {"http": STICKY_PROXY, "https": STICKY_PROXY}

session = requests.Session()
session.proxies = sticky_proxies

# All requests use the same IP
session.get("https://shop.example.com/")           # Visit homepage
session.get("https://shop.example.com/product/1")  # View product
session.post("https://shop.example.com/cart/add")  # Add to cart
```

Geo-Targeting
Route traffic through specific countries, states, or cities:
```python
# Country-level targeting
proxy_us = "http://USER_country-us:PASS@gate.zentislabs.com:7777"
proxy_de = "http://USER_country-de:PASS@gate.zentislabs.com:7777"
proxy_jp = "http://USER_country-jp:PASS@gate.zentislabs.com:7777"

# City-level targeting
proxy_nyc = "http://USER_country-us_city-newyork:PASS@gate.zentislabs.com:7777"
proxy_london = "http://USER_country-gb_city-london:PASS@gate.zentislabs.com:7777"
```

Async with aiohttp (High Performance)
For scraping thousands of pages concurrently:
```python
import asyncio

import aiohttp

PROXY = "http://gate.zentislabs.com:7777"
AUTH = aiohttp.BasicAuth("USER", "PASS")

async def fetch(session, url):
    try:
        async with session.get(
            url,
            proxy=PROXY,
            proxy_auth=AUTH,
            timeout=aiohttp.ClientTimeout(total=15),
        ) as resp:
            return await resp.text()
    except Exception as e:
        print(f"Failed {url}: {e}")
        return None

async def main():
    urls = [f"https://example.com/page/{i}" for i in range(100)]
    async with aiohttp.ClientSession() as session:
        semaphore = asyncio.Semaphore(20)  # cap concurrency at 20 in-flight requests

        async def bounded_fetch(url):
            async with semaphore:
                return await fetch(session, url)

        results = await asyncio.gather(*[bounded_fetch(url) for url in urls])
        print(f"Fetched {len([r for r in results if r])} pages successfully")

asyncio.run(main())
```

Scrapy Integration
```python
# settings.py
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}
```

```python
# In your spider
import scrapy

class ProductSpider(scrapy.Spider):
    name = "products"
    custom_settings = {
        'CONCURRENT_REQUESTS': 16,
        'DOWNLOAD_DELAY': 1.5,
        'RANDOMIZE_DOWNLOAD_DELAY': True,
    }

    def start_requests(self):
        for url in self.urls:
            yield scrapy.Request(
                url,
                meta={'proxy': 'http://USER:PASS@gate.zentislabs.com:7777'},
                callback=self.parse,
            )

    def parse(self, response):
        yield {
            'title': response.css('h1::text').get(),
            'price': response.css('.price::text').get(),
        }
```

Error Handling and Retry Logic
```python
import time

import requests
from requests.exceptions import ConnectionError, ProxyError, Timeout

def fetch_with_retry(url, max_retries=3, proxies=None):
    for attempt in range(max_retries):
        try:
            response = requests.get(url, proxies=proxies, timeout=15)
            if response.status_code == 200:
                return response
            if response.status_code == 403:
                print(f"Blocked on attempt {attempt + 1}, rotating IP...")
                time.sleep(2 ** attempt)  # exponential backoff
                continue
            if response.status_code == 429:
                retry_after = int(response.headers.get('Retry-After', 5))
                time.sleep(retry_after)
                continue
        except (ProxyError, Timeout, ConnectionError) as e:
            print(f"Connection error on attempt {attempt + 1}: {e}")
            time.sleep(2 ** attempt)
    return None
```

Bandwidth Optimization
Proxy bandwidth costs money. Minimize waste:
```python
# 1. Only download what you need
response = requests.head(url, proxies=proxies)  # Check headers first

# 2. Use compression
headers = {"Accept-Encoding": "gzip, deflate, br"}

# 3. Skip binary content (inside your fetch function)
if 'image' in response.headers.get('Content-Type', ''):
    return  # Don't download images through the proxy

# 4. Use conditional requests
headers = {"If-Modified-Since": "Thu, 01 Jan 2026 00:00:00 GMT"}
response = requests.get(url, headers=headers, proxies=proxies)
if response.status_code == 304:
    print("Not modified, using cached version")
```

Quick Reference
| Task | Proxy Format |
|---|---|
| Rotating IP | USER:PASS@gate.zentislabs.com:7777 |
| Sticky session | USER_session-id123:PASS@gate.zentislabs.com:7777 |
| US only | USER_country-us:PASS@gate.zentislabs.com:7777 |
| City target | USER_country-us_city-newyork:PASS@gate.zentislabs.com:7777 |
| Mobile IP | USER_type-mobile:PASS@gate.zentislabs.com:7777 |
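The formats in the table compose: session, country, city, and type flags are all appended to the username with underscores. A small helper keeps that string-building in one place. This is a hypothetical sketch based only on the patterns shown above; the flag ordering follows the table, so check the provider docs if ordering matters for your account:

```python
def build_proxy(user, password, session=None, country=None, city=None,
                conn_type=None, host="gate.zentislabs.com", port=7777):
    """Compose a proxy URL from the username flags in the table above."""
    parts = [user]
    if country:
        parts.append(f"country-{country}")
    if city:
        parts.append(f"city-{city}")
    if conn_type:
        parts.append(f"type-{conn_type}")
    if session:
        parts.append(f"session-{session}")
    username = "_".join(parts)
    return f"http://{username}:{password}@{host}:{port}"

# Rotating IP, US + city target, and a sticky session:
build_proxy("USER", "PASS")
build_proxy("USER", "PASS", country="us", city="newyork")
build_proxy("USER", "PASS", session="id123")
```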
📦 ZentisLabs bandwidth never expires — buy once, use whenever you need it. No monthly resets, no wasted bandwidth. Perfect for Python developers who scrape on their own schedule.
