I have a Surface Pro 2, and I really like the device. Metro style apps are great for casual use, like browsing the internet while on the couch at night, and I love having the ability to install traditional desktop applications as well, including developer-oriented software like Cygwin and Notepad++. Managing many windows is central to my process, but it’s clumsy with the touch screen, and the touch pad on the Type Cover doesn’t help a whole lot. It’s tempting for me to imagine a workstation that doesn’t use a mouse at all, but we’re definitely not there yet. As an experiment, I decided to try envisioning a mouse-less, touch-screen workstation that I might be interested in using for software development. The result is TouchWM. What follows are some of my observations: what I’ve learned, liked, disliked, and what I still need to figure out.
TouchWM is a window manager that uses two-touch events to handle selecting, moving, and resizing windows. Single-touch input is passed straight through to the underlying application. If two touches are detected, the window is selected and can be moved or resized with subsequent touch and movement. It omits title bars, instead using a radial menu centered on the initial touch to perform operations like closing and maximizing that typically live in a title bar.
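To make that interaction model concrete, here’s a minimal sketch of how the two-touch dispatch could work. This is not TouchWM’s actual code: the `Window` class, `forward_to_app`, and the handler names are hypothetical stand-ins for whatever the underlying toolkit provides, and resizing (driven by the spread between the two touches) is omitted for brevity.

```python
from dataclasses import dataclass

@dataclass
class TouchPoint:
    id: int
    x: float
    y: float

class Window:
    """Hypothetical stand-in for a managed window."""
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y

    def forward_to_app(self, touch):
        print(f"app receives touch at ({touch.x}, {touch.y})")

    def translate(self, dx, dy):
        self.x += dx
        self.y += dy

class WindowManager:
    def __init__(self):
        self.touches = {}      # touch id -> last known TouchPoint
        self.selected = None   # window grabbed once a second touch lands

    def touch_down(self, touch, window):
        self.touches[touch.id] = touch
        if len(self.touches) == 1:
            window.forward_to_app(touch)   # single touch: pass straight through
        elif len(self.touches) == 2:
            self.selected = window         # second touch: the WM takes over

    def touch_move(self, touch):
        if self.selected and touch.id in self.touches:
            prev = self.touches[touch.id]
            self.selected.translate(touch.x - prev.x, touch.y - prev.y)
        self.touches[touch.id] = touch

    def touch_up(self, touch):
        self.touches.pop(touch.id, None)
        if not self.touches:
            self.selected = None           # last finger lifted: release the window

wm, win = WindowManager(), Window()
wm.touch_down(TouchPoint(1, 100, 100), win)   # forwarded to the app
wm.touch_down(TouchPoint(2, 200, 100), win)   # window selected
wm.touch_move(TouchPoint(2, 220, 130))        # window moves by (20, 30)
print("window at", (win.x, win.y))
```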
You can see a demo of it in operation here:
Full-screen isn’t sufficient.
IDEs like Eclipse and Visual Studio are great at encapsulating many development tasks in one application, but it’s unrealistic to expect them to cover everything. I always want a web browser open as well, along with a terminal, file manager, and other applications. I frequently want to be able to refer to multiple windows at the same time - referencing documentation or specs while coding, for instance - and on larger displays, running everything full-screen simply wastes screen space. The full-screen model we’re used to on Android and iOS devices can’t scale up.
Split-screen isn’t much better.
Windows 8 supports displaying Metro apps alongside each other or alongside traditional desktop applications by splitting the screen between them. This allows some minor multi-tasking, but it’s still limited because it’s too difficult to manage more than two programs simultaneously. At least for the short term, a developer still needs good support for overlapping windows.
Let’s Get Rid of Titlebars
Titlebars suck. Either they’re too small to be useful for finger-based input, or they simply take up too much space. They just don’t make sense as a major part of user input in a mouse-less world. Lots of applications have no need for multi-touch input, so it’s a missed opportunity that window managers don’t claim it for themselves. Why can’t multi-touch input be used to move and resize windows? Touch-centered menus make a very nice alternative: they free up screen space traditionally given to title bars, and can easily provide more options than the standard three (minimize, maximize, and close). The radial menu I’ve used so far isn’t optimal - it can be clumsy rotating your hand to reach a particular button, and it probably needs configuration to support both left- and right-handed users nicely. Perhaps it should be more persistent, rather than disappearing whenever touch is first released. Still, I think the general idea is an improvement.
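For illustration, here’s one way the button placement for such a radial menu could be computed. The radius, the starting angle (which could be flipped per hand to address the left/right-handed issue), and the extra actions beyond the standard three are all assumptions on my part, not a settled design.

```python
import math

def radial_menu_positions(cx, cy, labels, radius=120, start_angle_deg=-90):
    """Place menu buttons evenly on a circle around the initial touch point.
    start_angle_deg could be configured per hand so the most-used buttons
    fall where they're easiest to reach."""
    positions = []
    step = 360 / len(labels)
    for i, label in enumerate(labels):
        angle = math.radians(start_angle_deg + i * step)
        positions.append((label,
                          cx + radius * math.cos(angle),
                          cy + radius * math.sin(angle)))
    return positions

# The usual title-bar actions, plus room for more:
actions = ["close", "maximize", "minimize", "snap left", "snap right"]
for label, x, y in radial_menu_positions(500, 300, actions):
    print(f"{label:>10}: ({x:.0f}, {y:.0f})")
```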
What about applications that need multi-touch?
How does a window manager (that itself uses multi-touch input) effectively determine whether an event was meant for it or for the application inside it? I’m still looking for ideas here. My initial way out is simply to use three touch events as a toggle on the window manager, causing it to ignore or accept incoming touch events. This works for the moment because I have no use-case that requires more than two touches, but I’m sure it’s not a permanent solution. In the longer term, I think this might work better through some sort of gesture or toggle button. There also needs to be a better visual indication of this state. Adding a toggle button to the screen would be convenient, but might conflict with windows beneath it. A motion-based gesture might work, but I’m worried it would be too unintuitive for new users. I think this is something I’ll need to prototype to make up my mind on.
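As a sketch of that stopgap, the routing state can be as simple as the toggle below. The class and method names are made up for illustration, and the on-screen indicator is exactly the piece that’s still missing.

```python
class TouchRouter:
    """Three-finger toggle: flips whether the window manager consumes
    multi-touch input or lets it all fall through to the application."""
    def __init__(self):
        self.pass_through = False   # True: WM ignores touches, the app gets them
        self.down = set()           # ids of fingers currently on the screen

    def touch_down(self, touch_id):
        self.down.add(touch_id)
        if len(self.down) == 3:
            self.pass_through = not self.pass_through
            return "toggled"        # ought to show some visual cue here too
        return "app" if self.pass_through else "wm"

    def touch_up(self, touch_id):
        self.down.discard(touch_id)

router = TouchRouter()
print(router.touch_down(1), router.touch_down(2), router.touch_down(3))
# -> wm wm toggled; from now on multi-touch falls through to the application
```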
Large Displays
Windows 8, Android, and iOS all use areas outside of the display as a significant part of the user interface. This includes both off-screen buttons and gestures that swipe in from or out past the edge of the screen. While this is intuitive on small devices, I believe the approach scales poorly. For example, swiping in from the top to pull down notifications, as is done on Android, would be laborious on a full-sized monitor.
This is my list of priorities going forward with TouchWM.