Simulate touchscreen clicks in Python

I need to create a Python function that can start a click and hold it while a second click is started and finished during the duration of the first.

With a mouse this is not possible; with a touchscreen, however, it is.

Imagine that while I hold the W key, the script should hold down at position (100, 100), and while I hold the A key, it should hold down at position (200, 200).

I tried to write a version with pyxhook and pyautogui, but it cannot perform the clicks in parallel.

import pyautogui
import pyxhook

def OnKeyboardUpEvent(event):

    # Check whether the W key was released
    if event.ScanCode == 25:
        pyautogui.mouseUp()

    # Check whether the A key was released
    elif event.ScanCode == 38:
        pyautogui.mouseUp()

def OnKeyboardDownEvent(event):

    # Check whether the W key was pressed
    if event.ScanCode == 25:  # W key
        pyautogui.mouseDown(100, 100)

    # Check whether the A key was pressed
    elif event.ScanCode == 38:  # A key
        pyautogui.mouseDown(200, 200)

hookman = pyxhook.HookManager()
hookman.KeyUp = OnKeyboardUpEvent
hookman.KeyDown = OnKeyboardDownEvent
hookman.HookKeyboard()
hookman.start()

I imagine this cannot be done with pyautogui, but maybe there is some library (presumably one that simulates touchscreen touches) that could make this fully parallel. Do you know of any library that provides this kind of control?

Note: I am using Linux (Arch Linux).

  • Take a look at the module "pyautogui": you can simulate mouse clicks, click and drag, typing... https://pyautogui.readthedocs.io/en/latest/

  • @Hugosalvador, the problem is that you can’t click in parallel

1 answer



In fact, pyautogui has no APIs to simulate multi-touch. The Kivy framework has APIs to receive and manage multi-touch events, but not to send them.

Summary: I did some research, detailed below, and it may be as simple as copying the functions below from pyautogui into your code and adding an extra parameter through which you receive and send the desired "touch" id. Here are the functions, already with the suggested changes:

https://github.com/asweigart/pyautogui/blob/737192e18878aa81d922b592c99e32073aee0f54/pyautogui/_pyautogui_x11.py#L84


import os

from Xlib.display import Display
from Xlib import X
from Xlib.ext.xtest import fake_input

_display = Display(os.environ['DISPLAY'])

# Button-name table as defined in pyautogui's _pyautogui_x11.py
BUTTON_NAME_MAPPING = {'left': 1, 'middle': 2, 'right': 3}


def moveTo(x, y, detail=0):
    fake_input(_display, X.MotionNotify, x=x, y=y, detail=detail)
    _display.sync()


def mouseDown(x, y, button, detail=0):
    moveTo(x, y, detail=detail)
    button = BUTTON_NAME_MAPPING[button]
    fake_input(_display, X.ButtonPress, button, detail=detail)
    _display.sync()


def mouseUp(x, y, button, detail=0):
    moveTo(x, y, detail=detail)
    button = BUTTON_NAME_MAPPING[button]
    fake_input(_display, X.ButtonRelease, button, detail=detail)
    _display.sync()
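To wire the question's key hooks to these detail-aware functions, a dispatch like the following could work. The scan-code table and the touch-id assignment are assumptions of mine, and the `mouse_down`/`mouse_up` callables are injected as parameters so the logic can be shown (and exercised) without an X server:

```python
# Hypothetical mapping from pyxhook scan codes to (position, touch id).
# Scan codes 25 (W) and 38 (A) come from the question; the touch ids
# follow the sequential-numbering idea and are unverified.
KEY_TOUCHES = {
    25: ((100, 100), 0),  # W key -> touch id 0 at (100, 100)
    38: ((200, 200), 1),  # A key -> touch id 1 at (200, 200)
}

def on_key_down(scan_code, mouse_down):
    """Start the touch mapped to this key, if any."""
    if scan_code in KEY_TOUCHES:
        (x, y), touch_id = KEY_TOUCHES[scan_code]
        mouse_down(x, y, 'left', detail=touch_id)

def on_key_up(scan_code, mouse_up):
    """End the touch mapped to this key, if any."""
    if scan_code in KEY_TOUCHES:
        (x, y), touch_id = KEY_TOUCHES[scan_code]
        mouse_up(x, y, 'left', detail=touch_id)
```

In the real hook handlers you would pass the `mouseDown`/`mouseUp` functions above; keeping them as parameters here just lets the dispatch be tested in isolation.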

Research so far

I think the only way forward is to take pyautogui's source code, check how it sends mouse events on X11 systems, and then explore the libraries it calls, looking for multi-touch support.

Pyautogui’s file is this one: https://github.com/asweigart/pyautogui/blob/master/pyautogui/_pyautogui_x11.py

It shows that it uses Xlib (the Python wrappers), whose code is here:

https://github.com/python-xlib/python-xlib


Well, continuing the analysis: it is not trivial, but basically these Xlib calls let you pass a "detail" value to the "click" call, and this detail is the "touch id", a value used internally by each multi-touch device's driver for the first, second, third... touches on the device. I see no reason why this detail should be anything other than a sequential numbering ("0, 1, 2..."), although I found no documentation on this; in the C source that consumes these events, you can see the touch id is read from event->detail: https://github.com/ec1oud/multitouch-xinput/blob/6df6739f5557e72ef9ac87304e12e7dc27e651a9/multitouch.c#L299
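The sequential-numbering idea ("0, 1, 2...") can be sketched as a tiny id allocator. Whether the driver actually accepts ids assigned this way is the unverified assumption above; the class itself is just bookkeeping:

```python
class TouchIds:
    """Hands out sequential touch ids (0, 1, 2, ...), reusing the lowest
    free id once a touch is released, mirroring how a multi-touch driver
    is assumed to number simultaneous touches."""

    def __init__(self):
        self._active = set()

    def acquire(self):
        # Find the lowest id not currently in use.
        touch_id = 0
        while touch_id in self._active:
            touch_id += 1
        self._active.add(touch_id)
        return touch_id

    def release(self, touch_id):
        self._active.discard(touch_id)
```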

So now you have to understand how to create touches with click events manually and call Xlib.ext.xtest.fake_input with the data needed to create the events (tip: use the pyautogui code from the first link above to build convenience functions that call fake_input without passing all the parameters every time). Then you call fake_input with "ButtonPress", "ButtonRelease" and "MotionNotify" events, passing in the "detail" field the id of the multi-touch "point" you are addressing. Note that it already accepts "detail" as a parameter, so maybe it is even simple.
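The press/release sequence described above can be exercised without an X server by swapping fake_input for a recorder. Only the call shape loosely mirrors Xlib.ext.xtest.fake_input; everything else here is a sketch of the intended event ordering:

```python
recorded = []

def fake_input(display, event_type, button=0, x=0, y=0, detail=0):
    # Stand-in for Xlib.ext.xtest.fake_input: just log what would be sent.
    recorded.append((event_type, x, y, detail))

def touch_press(x, y, touch_id):
    # Position the touch point, then press, tagging both with the touch id.
    fake_input(None, 'MotionNotify', x=x, y=y, detail=touch_id)
    fake_input(None, 'ButtonPress', button=1, x=x, y=y, detail=touch_id)

def touch_release(x, y, touch_id):
    # Release the touch, again tagged with its id.
    fake_input(None, 'ButtonRelease', button=1, x=x, y=y, detail=touch_id)

# Two overlapping touches, as in the question:
# press touch 0, press touch 1, release touch 0, release touch 1.
touch_press(100, 100, 0)
touch_press(200, 200, 1)
touch_release(100, 100, 0)
touch_release(200, 200, 1)
```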

As I do not have a simple setup to receive multi-touch events and run some tests, I have no way to put tested code here.

And know that you will be doing more or less pioneering work. When it is up and running, it will be worth writing a post about it; if it works, drop a note (it can be in a comment here), and maybe we can then send a PR to pyautogui.

  • Wow, I was expecting something far less plausible (although I did not find anything in my initial research). Anyway, thanks for the excellent answer!
