Urllib.request or Request?

I am studying web scraping and in many guides I have seen examples that use urllib.request and others that use request.get. From what I have tested and understood, the two do the same thing.
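
For example, roughly what I tested (the URL here is just a placeholder):

import urllib.request
import requests

url = "https://example.com"  # placeholder URL

# standard-library way
with urllib.request.urlopen(url) as resp:
    html_a = resp.read().decode("utf-8")

# third-party requests way
html_b = requests.get(url).text

print(html_a[:60])
print(html_b[:60])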

So what’s the difference between them and when to use each?

1 answer

I don't think you mean request.get but rather requests.get, in the plural. If that is the case, understand that requests (https://pypi.org/project/requests/) is not a "native" library: it is usually installed separately and bundles a number of features and methods that make HTTP easy to work with (it is almost a framework for working with HTTP). According to its developers, it offers (a short sketch of a couple of these follows the list):

  • Keep-Alive and Connection Pooling
  • International Domains and URLs
  • Sessions with Cookie Persistence
  • Browser-style SSL Verification
  • Automatic Content Decoding
  • Basic/Digest Authentication
  • Elegant Key/Value Cookies
  • Automatic Decompression
  • Unicode Response Bodies
  • HTTP(S) Proxy Support
  • Multipart File Uploads
  • Streaming Downloads
  • Connection Timeouts
  • Chunked Requests
  • .netrc Support
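
A minimal sketch of two of those features, sessions with cookie persistence and connection timeouts; the httpbin.org endpoints are used only for illustration:

import requests

# A Session reuses the underlying connection (keep-alive / pooling)
# and persists cookies across calls.
with requests.Session() as s:
    s.get("https://httpbin.org/cookies/set/sessioncookie/123", timeout=5)
    r = s.get("https://httpbin.org/cookies", timeout=5)
    print(r.json())  # the cookie set above is sent back automatically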

urllib2 and urllib, on the other hand, are "native" (part of the standard library) and have changed across Python versions. The developers' goal in creating requests was to make that "porting" between Python versions easier and to handle the complex HTTP problems that urllib does not, which you would otherwise have to solve by hand; it was also designed with PEP 20 in mind.
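
A rough sketch of the kind of work you end up doing by hand with urllib versus requests (httpbin.org is just an example endpoint):

import json
import urllib.request
import requests

url = "https://httpbin.org/post"  # example endpoint
payload = {"q": "web scraping"}

# with urllib you encode the body and set the headers yourself
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    data_urllib = json.loads(resp.read().decode("utf-8"))

# requests encodes and decodes JSON for you
data_requests = requests.post(url, json=payload).json()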

In my opinion, if you are going to do something simple, urllib is enough (on Python 3.7+). If you are distributing to more than one Python version, things get more complicated, but you could simply check whether urllib.request is available with a try around the import and fall back to urllib2 (which, I believe, exists only in Python 2; correct me if I'm wrong here).
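
A minimal sketch of that import fallback (example.com is just a placeholder):

try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2 fallback

resp = urlopen("https://example.com")
print(resp.read()[:60])
resp.close()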

But if you are going to do more complex work, need to speed up development, and want something that makes things easier (even more so if you are going to do something like web scraping), then install requests with pip:

pip install requests

And start with the documentation: https://requests.kennethreitz.org/pt_BR/latest/index.html
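
As a starting point for scraping, a minimal requests sketch (the User-Agent string and URL are just placeholders):

import requests

resp = requests.get(
    "https://example.com",
    headers={"User-Agent": "my-scraper/0.1"},
    timeout=10,
)
resp.raise_for_status()  # raise if the status is 4xx/5xx
print(resp.status_code, resp.encoding)
print(resp.text[:200])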
