Usage of the requests module in Python

  • 2020-05-07 19:55:41
  • OfStack

This article illustrates the usage of the requests module in Python, shared here for your reference. The details are as follows:

For HTTP-related processing, Python's standard library already includes the urllib2 module, which provides comprehensive functionality at a considerable cost in complexity. Compared with urllib2, Kenneth Reitz's Requests module covers the same complete set of simple use cases far more concisely.

Simple example:

Imagine we want to use the GET method to fetch a resource from http://example.test/ and inspect the return code, the Content-Type header, and the response body. This is easy to do with either urllib2 or Requests.

urllib2  


>>> import urllib2 
>>> url = 'http://example.test/' 
>>> response = urllib2.urlopen(url)
>>> response.getcode() 
200 
>>> response.headers.getheader('content-type') 
'text/html; charset=utf-8'
>>> response.read() 
'Hello, world!'

Requests


>>> import requests 
>>> url = 'http://example.test/' 
>>> response = requests.get(url) 
>>> response.status_code 
200 
>>> response.headers['content-type']
'text/html; charset=utf-8'
>>> response.content 
u'Hello, world!'

The two approaches look similar: with urllib2 you call methods on the response object to read information, whereas with Requests you access the corresponding attributes directly.
There are two subtle but important differences:

1. Requests automatically decodes the returned content into Unicode.
2. Requests stores the returned content, so you can read it as many times as you like, instead of getting a file-like object that can only be read once, as urllib2.urlopen() returns.

The second point removes a real annoyance when experimenting with code in an interactive Python session; the sketch below shows the difference.
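Here is a minimal sketch, assuming a current Requests release (where the decoded text is available as response.text and the raw bytes as response.content) and reusing the placeholder URL from above:

>>> import urllib2
>>> import requests
>>> url = 'http://example.test/'
>>> fileobj = urllib2.urlopen(url)  # urllib2 returns a file-like object
>>> fileobj.read()
'Hello, world!'
>>> fileobj.read()  # a second read returns nothing
''
>>> response = requests.get(url)    # Requests stores the body on the response
>>> response.text                   # decoded Unicode, readable as often as you like
u'Hello, world!'
>>> response.text
u'Hello, world!'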

A slightly more complex example:

Now let's try a more complex task: use the GET method to fetch http://example.test/secret, a resource protected by HTTP Basic authentication. Using the code above as a template, it seems we only need to replace the urllib2.urlopen() / requests.get() calls with ones that can send a username and password.

Here is the urllib2 approach:


>>> import urllib2
>>> url = 'http://example.test/secret'
>>> password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
>>> password_manager.add_password(None, url, 'dan', 'h0tdish')
>>> auth_handler = urllib2.HTTPBasicAuthHandler(password_manager) 
>>> opener = urllib2.build_opener(auth_handler)
>>> urllib2.install_opener(opener)
>>> response = urllib2.urlopen(url)
>>> response.getcode()
200 
>>> response.read()
'Welcome to the secret page!'

For such a simple task we instantiate two classes, build an opener from them, and install it globally into the urllib2 module before finally calling urlopen(). What do those two oddly named classes actually do?
Confused? The full urllib2 documentation is here: http://docs.python.org/release/2.7/library/urllib2.html
So how does Requests solve the same problem?

Requests


>>> import requests 
>>> url = 'http://example.test/secret' 
>>> response = requests.get(url,auth=('dan','h0tdish'))
>>> response.status_code 
200 
>>> response.content 
u'Welcome to the secret page!' 

All I did was add an auth keyword argument to the method call.
I bet you can remember that without looking up the documentation.
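As a side note, the tuple passed to auth is shorthand for HTTP Basic authentication; current versions of Requests also expose an explicit class, requests.auth.HTTPBasicAuth, that does the same thing. A minimal sketch:

>>> import requests
>>> from requests.auth import HTTPBasicAuth
>>> url = 'http://example.test/secret'
>>> # equivalent to auth=('dan', 'h0tdish')
>>> response = requests.get(url, auth=HTTPBasicAuth('dan', 'h0tdish'))
>>> response.status_code
200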

Error Handling

The way Requests handles errors is also very convenient. If you use an incorrect username and password, urllib2 raises a urllib2.URLError, while Requests returns a normal response object, just as you would expect. You only need to check the Boolean value of response.ok to see whether the login succeeded.


>>> response = requests.get(url,auth=('dan','wrongPass'))
>>> response.ok 
False 
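For contrast, here is a rough sketch of what the same failed login looks like with urllib2, assuming an opener with the wrong password has been installed as in the earlier example (the 401 surfaces as an HTTPError, which is a subclass of URLError):

>>> import urllib2
>>> url = 'http://example.test/secret'
>>> try:
...     urllib2.urlopen(url)
... except urllib2.HTTPError as e:
...     print e.code
...
401

If you do want an exception from Requests when a request fails, response.raise_for_status() will raise one on demand.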

Some other features of Requests:

* The API is just as simple for HEAD, POST, PUT, PATCH, and DELETE requests (see the sketch after this list)
* It can handle multipart file uploads, and response content is decoded automatically
* Better documentation
* And more
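A minimal sketch of the first two points, assuming a hypothetical endpoint at http://example.test/upload that accepts form fields and file uploads:

>>> import requests
>>> # POST with form data is a one-liner, just like GET
>>> r = requests.post('http://example.test/upload', data={'name': 'dan'})
>>> # multipart file upload: pass an open file object via the files argument
>>> r = requests.post('http://example.test/upload', files={'report': open('report.txt', 'rb')})
>>> r.status_code
200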

Requests is excellent; give it a try the next time you need to work with HTTP.

I hope this article is helpful for your Python programming.

