@pdfdfpdak
That's absolutely not the point. The point is the conflict between gestures at different levels of control. Including, by the way, the system level: Android also has its own gesture navigation. And the touchscreen, unlike the mouse, is not inherently suited to gesture control. With a mouse you can draw intricate gestures while holding the right button, but on a screen, what do you hold? I've written before (in another topic): try the Sleipnir browser. If you want, also try Unexpected Keyboard; it's built around constant micro-gestures, and after a normal keyboard it's awkward and hard as hell.
The only successful gesture implementation I've seen is in the Habit browser, which is also Japanese. It has multi-level, flexibly customizable petal menus (like the petals of a daisy) on the right and left edges. Each petal has a different function bound to it. That's handy!
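To show why petal menus work so well with imprecise fingers, here's a minimal sketch (my own illustration, not the Habit browser's actual code) of the core idea: you only need the rough *direction* of a drag, not its exact path, so a simple angle-to-sector mapping picks the petal. The petal names and class are hypothetical.

```java
// Sketch: map a drag direction to one of N "petals" of a pie menu by angle.
// Only direction matters, which is why coarse finger input is enough.
public class PetalMenu {
    private final String[] petals; // actions, clockwise starting from the right (3 o'clock)

    public PetalMenu(String... petals) { this.petals = petals; }

    // Returns the action whose angular sector contains the drag vector (dx, dy).
    // Screen coordinates: y grows downward, so atan2(dy, dx) increases clockwise.
    public String select(float dx, float dy) {
        double angle = Math.toDegrees(Math.atan2(dy, dx)); // -180..180
        if (angle < 0) angle += 360;                       // 0..360, clockwise
        double sector = 360.0 / petals.length;
        // Center each sector on its petal's direction, then bucket.
        int index = (int) (((angle + sector / 2) % 360) / sector);
        return petals[index];
    }

    public static void main(String[] args) {
        PetalMenu menu = new PetalMenu("forward", "down", "back", "up");
        System.out.println(menu.select(10, 0));  // drag right -> forward
        System.out.println(menu.select(0, -10)); // drag up (y decreases) -> up
    }
}
```

With four petals each sector is 90° wide, so even a sloppy diagonal-ish drag still lands on the intended petal; that tolerance is exactly what makes the daisy layout comfortable.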
The problem is that fingers are nowhere near as accurate as a computer mouse. And controlling basic browser functions should be easy, almost automatic (reflexive). To achieve that, you need to avoid overlapping gestures, unclear gestures, and ambiguous results. Otherwise the work turns into a tense struggle for the right operation 🙂