I don't think it's `request.get`, but rather the plural `requests.get`. If that's the case, please understand that `requests` (https://pypi.org/project/requests/) is not a "native" (standard-library) module: it is usually installed separately, and it is almost a framework for working with HTTP, with methods that make it easy to use. According to the developers, it offers:
- Keep-Alive and Connection Pooling
- International Domains and URLs
- Sessions with Cookie Persistence
- Browser-style SSL Verification
- Automatic Content Decoding
- Basic/Digest Authentication
- Elegant Key/Value Cookies
- Automatic Decompression
- Unicode Response Bodies
- HTTP(S) Proxy Support
- Multipart File Uploads
- Streaming Downloads
- Connection Timeouts
- Chunked Requests
- `.netrc` Support
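To illustrate a couple of these conveniences (automatic query-string encoding, and sessions for keep-alive and cookie persistence), here is a minimal sketch; the URL is just a placeholder, and the request is only prepared, not actually sent over the network:

```python
import requests

# A Session provides keep-alive, connection pooling and cookie persistence.
session = requests.Session()

# requests encodes the query parameters into the URL for us.
req = requests.Request("GET", "https://example.com/search", params={"q": "python"})
prepared = session.prepare_request(req)

print(prepared.url)  # https://example.com/search?q=python
```

Sending it for real would just be `requests.get("https://example.com/search", params={"q": "python"}, timeout=5)`.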
`urllib2` and `urllib`, on the other hand, are native (part of the standard library) and have changed between Python versions. The developers' goal in creating `requests` was to smooth out this "porting" between Python versions and to handle the complex HTTP problems that `urllib` does not solve (and that you would otherwise have to solve by hand). It was also designed with PEP 20 in mind.
In my opinion, if you are going to do something simple, `urllib` alone goes a long way (on Python 3.7+). If you are distributing to more than one Python version it gets more complicated, but you could simply check whether `urllib` is available with a `try` around the `import`, and then fall back to `urllib2` (which, I believe, exists in Python 2 only; correct me if I am mistaken here).
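That version check can be sketched with a `try` around the `import`, falling back to the old Python 2 module name:

```python
try:
    # Python 3: urlopen lives in urllib.request
    from urllib.request import urlopen
except ImportError:
    # Python 2 fallback: the old urllib2 module
    from urllib2 import urlopen

# Either way, we end up with a callable urlopen.
print(callable(urlopen))  # True
```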
But if you are going to do a lot of complex work, need to speed up development, and want something that makes your life easier (even more so for something like web scraping), then install `requests` with pip:
pip install requests
And start with the documentation: https://requests.kennethreitz.org/pt_BR/latest/index.html