If you've ever done any form of web scraping in Python, or just made HTTP requests in general, it's likely you've used the requests library rather than Python's built-in urllib module(s) directly. And that's probably because requests is so damn easy to use. Its API design is one of the best of its kind. With its massive success in user appreciation and adoption, the requests library has earned its slogan of 'HTTP for Humans'.
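
To see why people rave about the ergonomics, here's a quick sketch of a GET request with requests - the endpoint is a made-up placeholder:

import requests

# Hypothetical JSON endpoint; any API works the same way
resp = requests.get('https://api.example.com/items', params={'page': 1})
resp.raise_for_status()   # raise on 4xx/5xx instead of checking codes by hand
data = resp.json()        # decode the JSON body in one call
print(resp.status_code, len(data))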

Real quick - what's the difference between urllib and urllib2 - and if we're getting wild, urllib3?

  • Python 1.2 included urllib as part of the CPython standard library, providing the original high-level HTTP client (built on top of httplib).
  • Python 1.6 introduced an "experimental" urllib2 as a more extensible rewrite, built around Request objects and pluggable handlers (the machinery behind things like HTTPBasicAuthHandler).
  • When Python 3 came around, the two were merged and reorganized into the urllib package we have today: urllib.request, urllib.parse, urllib.error, and urllib.robotparser (see the import sketch after this list).
  • urllib3, despite the name, was never part of the standard library at all - it's a third-party package that adds connection pooling, thread safety, and retries, and it's what requests builds on.
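
In modern Python, then, everything lives under that one urllib package. A quick illustration of where the pieces ended up:

from urllib.request import urlopen     # urllib2's request/opener machinery
from urllib.parse import urlparse      # the old urllib's URL-parsing helpers
from urllib.error import HTTPError     # errors raised for failed requests

print(urlparse('https://example.com/path?q=1').query)  # -> 'q=1'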

You do have to wonder - why does the requests library use urllib3 under the hood as opposed to urllib? <TODO: RESEARCH>

Let's take an example: basic HTTP authentication with the modern urllib.request.

import urllib.request

# MAILGUN_MESSAGE_URL and MAILGUN_API_KEY are assumed to be defined elsewhere
auth_handler = urllib.request.HTTPBasicAuthHandler(
    urllib.request.HTTPPasswordMgrWithDefaultRealm())
# realm=None works with the default-realm manager and matches any realm the server sends
auth_handler.add_password(realm=None, uri=MAILGUN_MESSAGE_URL,
                          user='api', passwd=MAILGUN_API_KEY)
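
From there you'd install the handler into an opener and make the call yourself. A minimal sketch, assuming the Mailgun constants above hold real values:

opener = urllib.request.build_opener(auth_handler)
with opener.open(MAILGUN_MESSAGE_URL) as resp:
    print(resp.status, resp.read(200))

Compare that to requests, where basic auth is a single keyword argument: requests.get(MAILGUN_MESSAGE_URL, auth=('api', MAILGUN_API_KEY)).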

References

Stack Overflow: "What are the differences between the urllib, urllib2, urllib3 and requests module?"

Outline

  1. Overview of the options (honorable mention of httplib?)
  2. Why does requests use urllib3 instead of urllib? Is it simply legacy?
  3. History of urllib -> urllib2 -> back to urllib (the Python 3 package)
  4. Modern urllib use (I believe the Python 3 reorganization has made things easier)
  5. Embrace the stdlib???

If you want a small-to-nonexistent standard library, use something like Lua. But Python comes jam-packed with one of the strongest standard libraries right out of the box.