Calling a Python class method asynchronously


I would like to call a class that does a POST to an endpoint, but without the call blocking, because it runs inside an event that sometimes fires five calls in a row, and I can't wait for each POST to finish before continuing.

I did it this way, but I'm having a hard time testing whether it really works.

Main class.py

"""
códigos que não interessa no exemplo
"""
from def_api import Api
api = Api('teste')
for item in range(10):   #for ficticio
   api.Send('valores')

def_api.py

import requests

class Api(object):
    def __init__(self, valor1):
        self.__valor1 = valor1

    async def Send(self, valor2):
        self.__valor2 = valor2
        data = {'valor1': self.__valor1, 'valor2': self.__valor2}
        r = requests.post(url, json=data)
        return await r.status_code

My doubt was whether the await should go on the request, r = await requests.post(...), or whether I should remove the await so that it doesn't block.

I'd appreciate any help, links, etc.

  • The requests.post() call will block the main thread even if the function is async. You would have to call "await requests.post()" to avoid blocking, and requests.post() would itself have to be an async function, which I don't think it is. You would have to use threads or, better yet, asyncio to achieve this. This SO question in English has some examples: https://stackoverflow.com/questions/22190403/how-could-i-use-requests-in-asyncio

1 answer

requests is a synchronous library - that is, nothing you do will make requests run in a non-blocking, cooperative way with asyncio directly. One path is to search PyPI for other libraries that emulate requests but expose an asynchronous API - for example, aiohttp-requests.
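
For illustration, a minimal sketch of the same POST using plain aiohttp (which aiohttp-requests wraps) - the url and payload here are placeholders, not from your code:

import asyncio
import aiohttp

async def send(url, data):
    # The POST itself is awaitable, so the event loop stays free
    # while the request is in flight:
    async with aiohttp.ClientSession() as session:
        async with session.post(url, json=data) as response:
            return response.status

# asyncio.run(send('https://example.com/endpoint', {'valor1': 'teste'}))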

However, even without swapping requests for another lib, not all is lost: it is precisely for these cases that Python's asyncio includes the so-called "run_in_executor" - basically, you create an "Executor" object from the concurrent.futures library, which maintains a pool of threads (or processes) and can run blocking functions on one of those threads, so that the coroutine needing the result can continue in a non-blocking way on the main thread.

The loop.run_in_executor call itself takes care of dispatching the function to the executor and collecting the result - the bureaucracy is minimal: you just create the executor object.

Documentation: https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor

I won't be able to test the code right now, but at first glance the adaptation of your code would be something like:


import asyncio
import functools

import requests
from concurrent.futures import ThreadPoolExecutor as Executor

# configure a maximum of 20 parallel requests:
http_executor = Executor(20)

class Api(object):
    def __init__(self, valor1):
        self.__valor1 = valor1

    async def send(self, valor2):
        self.__valor2 = valor2
        data = {'valor1': self.__valor1, 'valor2': self.__valor2}
        loop = asyncio.get_event_loop()
        # run_in_executor only forwards positional arguments, so the
        # json= keyword is bound with functools.partial
        # ('url' is assumed to be defined elsewhere, as in the question):
        r = await loop.run_in_executor(
            http_executor, functools.partial(requests.post, url, json=data))
        return r.status_code

(Besides the hint of how the run_in_executor layer looks, I changed the method name to lowercase send instead of Send, since that is the convention in Python.)
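
And, as a hedged sketch on the caller's side (assuming the adapted Api above): just calling api.send(...) in a plain for loop only creates coroutines - they still need to be scheduled on the event loop, for example with asyncio.gather:

import asyncio
from def_api import Api

async def main():
    api = Api('teste')
    # Schedules all ten POSTs at once; the executor threads do the
    # blocking work while gather awaits them collaboratively.
    results = await asyncio.gather(*(api.send('valores') for _ in range(10)))
    print(results)  # list of status codes

asyncio.run(main())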

  • If I run a blocking function on a thread, can I run into some thread-limit problem, due to some Linux (CPU) limit?

  • Threads are not very efficient, but it would take a while for you to hit any limit - you could open hundreds of them without a problem. But there are two things: first, Python's "Executor" is very good at keeping just a fixed number of workers (in this example, 20) and reusing those workers as demand comes in - that is, even with thousands of calls, Python will keep reusing those 20 workers. The moment you hit the 21st request with no free worker, it will automatically "wait" for a task to finish. This is the idea of concurrent.futures - see the sketch after these comments.

  • And second, even if you increased the executor to use hundreds of workers, it would hit some other bottleneck first - depending on other factors: your network bandwidth, for example, but mainly how the server on the other side, where you're sending your POST, is sized to respond, or whether it limits requests coming from the same IP. It's not uncommon to see no gain beyond 5 workers.

  • But under very favorable conditions - for example, if you send and receive little data, computing the response takes time on the server side, and the remote server has several workers to handle different requests - it is possible to see gains up to a very large number of workers. (And you'll probably be consuming a larger share of the remote system's resources than what was sized for a single user - which won't be nice.)
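
A toy demonstration of the worker reuse described in the comments above (not from the original answer - the 0.5 s sleep just stands in for a slow requests.post):

import time
from concurrent.futures import ThreadPoolExecutor

def blocking_task(n):
    time.sleep(0.5)   # stands in for a slow requests.post()
    return n

start = time.monotonic()
with ThreadPoolExecutor(max_workers=5) as pool:
    # 20 tasks but only 5 workers: the pool queues the rest and
    # reuses each worker as it frees up, so this runs in 4 batches.
    results = list(pool.map(blocking_task, range(20)))
print(f"took {time.monotonic() - start:.1f}s")   # roughly 2.0s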
