- Nov 04, 2012
-
-
Any number of touches >= 3 starts a drag operation, but the window used to jump around as new touches joined. So the touch drag area/hotspot and tiling mode are now recalculated as soon as touches enter or leave the drag operation.
-
If, during a multitouch drag, the touch area height grows to 1.5x its original size, the window is tiled to the side (left/right) the hotspot is closest to. If both width and height grow to 1.5x the original, the window is maximized.
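The threshold logic described above can be sketched roughly as follows. The function and enum names are illustrative only, not mutter's actual API:

```c
#include <assert.h>

/* Illustrative tiling modes; mutter's real enum differs. */
typedef enum { TILE_NONE, TILE_LEFT, TILE_RIGHT, TILE_MAXIMIZED } TileMode;

/* Decide the tiling mode from how much the touch area has grown
 * relative to its original size, and from which half of the screen
 * the hotspot sits on. A hypothetical sketch of the policy above. */
static TileMode
choose_tile_mode (double orig_w, double orig_h,
                  double cur_w,  double cur_h,
                  double hotspot_x, double screen_w)
{
  int wide = cur_w >= 1.5 * orig_w;
  int tall = cur_h >= 1.5 * orig_h;

  if (wide && tall)
    return TILE_MAXIMIZED;   /* both dimensions grew to 1.5x */
  if (tall)                  /* height grew to 1.5x: tile to the side
                              * the hotspot is closest to */
    return hotspot_x < screen_w / 2 ? TILE_LEFT : TILE_RIGHT;
  return TILE_NONE;
}
```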
-
Window moving is triggered by 3-4 simultaneous touch events on the window, with the hotspot at the center of the touch area's bounding rect.
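Computing the hotspot as the center of the touches' bounding rectangle could look like this sketch (names are illustrative, not mutter's actual code):

```c
#include <assert.h>

typedef struct { double x, y; } Point;

/* Return the center of the bounding rectangle enclosing the current
 * touch points; assumes n_touches >= 1. */
static Point
touch_hotspot (const Point *touches, int n_touches)
{
  double min_x = touches[0].x, max_x = touches[0].x;
  double min_y = touches[0].y, max_y = touches[0].y;

  for (int i = 1; i < n_touches; i++)
    {
      if (touches[i].x < min_x) min_x = touches[i].x;
      if (touches[i].x > max_x) max_x = touches[i].x;
      if (touches[i].y < min_y) min_y = touches[i].y;
      if (touches[i].y > max_y) max_y = touches[i].y;
    }

  Point hotspot = { (min_x + max_x) / 2.0, (min_y + max_y) / 2.0 };
  return hotspot;
}
```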
-
-
- Nov 03, 2012
-
-
The touch sequence is possibly unhandled, but we need a call to meta_window_end_touch() so the server is notified about such touch sequences. This triggers the real TouchEnd event when the touch is rejected, but it should be a no-op the second time around.
-
Slave devices are needed, at least for touch devices.
-
-
These functions deal with passive touch grabs, where available.
-
This function returns the slave device behind an event. This is mostly needed for touch passive grabs, as XIAllowTouchEvents() currently requires a slave device; this has changed in the latest drafts of the multitouch protocol.
-
This function returns the touch ID that generated an input event, if any.
-
This function tells whether an input event should be ignored; the only current reason is the duplication of touch events by their emulated XI2 pointer event counterparts.
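XInput 2 marks such emulated pointer events with the XIPointerEmulated flag (defined as (1 << 16) in XI2.h) in the event's flags field. A self-contained sketch of the check, with a stand-in struct instead of a real XIDeviceEvent:

```c
#include <assert.h>

/* Value of XIPointerEmulated from XI2.h, redefined here so the
 * sketch compiles without the XInput 2 headers. */
#define XI_POINTER_EMULATED (1 << 16)

/* Minimal stand-in for the flags field of an XIDeviceEvent. */
typedef struct { int flags; } FakeDeviceEvent;

/* Ignore pointer events that merely emulate a touch event, so each
 * touch is processed only once (illustrative, not mutter's code). */
static int
should_ignore_event (const FakeDeviceEvent *ev)
{
  return (ev->flags & XI_POINTER_EMULATED) != 0;
}
```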
-
TouchBegin/End/Update are now handled similarly to ButtonPress/Release/MotionNotify.
-
Jasper St. Pierre authored
This allows keybinding handlers to get access to devices.
-
If some device has a menu popped up, meta_frame_get_flags() returns 0 to indicate that other devices cannot trigger any actions on the frame.
-
This will be the device that popped up the menu. gtk_menu_popup() uses the current GTK+ event device, which might not even be filled in, since mutter bypasses GTK+ event handling.
-
Multiple windows may now have different popup menus open, each responding only to the device pair that popped it up.
-
This is so different pointers may have different cursors on them.
-
-
Spare a synchronous X call: instead, properly translate the XEvent coordinates in ui.c and use the event coordinates in MetaFrames.
-
It must not be freed (at least while the device pair exists), so ensure it's kept around and that we don't create info for the same keyboard twice.
-
Now either the current focus keyboard or the client pointer (i.e. the pointer paired with the last keyboard that had the window focus) is used to guess the pointer that should be grabbed.
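The policy amounts to a simple fallback, sketched below with hypothetical types (mutter's real device-pair bookkeeping differs):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical master device pair: a keyboard and its attached pointer. */
typedef struct { int keyboard_id; int pointer_id; } DevicePair;

/* Pick the pointer to grab: prefer the pointer paired with the
 * currently focused keyboard; fall back to the client pointer,
 * i.e. the pointer paired with the last keyboard that held the
 * window focus (sketch of the policy described above). */
static int
guess_grab_pointer (const DevicePair *focus_pair,
                    const DevicePair *client_pair)
{
  if (focus_pair != NULL)
    return focus_pair->pointer_id;
  return client_pair->pointer_id;
}
```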
-
The client window determines the device pair that is used for core protocol calls such as XQueryPointer(), so different clients using the core protocol can be focused by different devices simultaneously.
-
This is nothing but the beginning: things work with a single pointer/keyboard pair, but will be largely broken with multiple devices interacting simultaneously, so we first need a policy establishing sensible behavior for extra pointer/keyboard pairs. The MUTTER_USE_CORE_DEVICES envvar has been added to force the use of Xlib core events for devices, in order to check for regressions.
-
XIAllMasterDevices doesn't quite work for key grabs, so for now this is only done for the Virtual Core Keyboard.
-
Mutter only handles master devices.
-
These happen invariably on the VCP/VCK pair.
-
At the moment, feedback is only provided for the Virtual Core Pointer.
-
This function returns a list of the devices currently handled by the device map.
-
As in XQueryPointer()/XIQueryPointer(), the return value tells whether the pointer device is on the same screen as the passed window.
-
Jasper St. Pierre authored
-
MetaFocusInfo is a struct holding all necessary info; code has been updated to use the per-keyboard focus info instead of the old fields.
-
These functions are meant to replace X[GS]etInputFocus() calls across the core.
-
-