Reddit reviews Brave NUI World: Designing Natural User Interfaces for Touch and Gesture

We found 1 Reddit comments about Brave NUI World: Designing Natural User Interfaces for Touch and Gesture. Here are the top ones, ranked by their Reddit score.


1 Reddit comment about Brave NUI World: Designing Natural User Interfaces for Touch and Gesture:

u/adamkemp · 1 point · r/windows8

Here's a source: http://www.amazon.com/Brave-NUI-World-Designing-Interfaces/dp/0123822319

That was written by a Microsoft guy who worked on the Surface (no, not the tablet; the table!). It is full of information about designing intuitive touch interfaces, and I recommend reading it if you really are interested in this topic. For instance, page 155:

> The biggest problem with making your gestures self-revealing is getting over the idea that gestures are somehow natural or intuitive. We have seen over and over again that users cannot and will not guess your gesture language. To overcome this, put UI affordances on the screen to which they can react.

Earlier in that chapter it goes into more detail:
> Never rely on an action being "natural" (a.k.a. "guessable"). It's not.
> The only exception to the above is "direct manipulation" - users can and will guess to grab something and move it somewhere else.

I'm not going to write you a bibliography of sources, but if you study touch usability this is a recurring theme. No one is going to guess that selection is a swipe gesture. People will try taps, and if they're experienced with iOS and Android they may try long press, but I have never witnessed anyone guessing that a swipe does a selection. That's probably why Microsoft changed it for the Start screen on Windows 8.1. It looks like they haven't pushed that change throughout the OS, but I'm keeping my eye on that because it affects one of our apps.

> Same thing happens when you put someone in front of a Mac and they have never used one before.

You're talking about different things. Not knowing where a feature is immediately is very different from not knowing how to navigate the UI to even look for the feature. Someone who has never used a Mac might not know which menu a feature is in or which toolbar button they want to press, but they can see the menus and the toolbars, and they can explore those to find what they want. There is even a "Help" menu item with a search field that will literally point to the menu item they are searching for. The UI provides a mechanism for discovery.

That is in stark contrast to the Windows 8 UI, in which key functionality is completely invisible and the only way to find it is to perform a magic gesture that no one would guess on their own. What user is going to stumble across a swipe from the edge entirely independently, with no training? No one. But anyone who knows how to use a mouse and keyboard knows how to click around in menus until they find something they want. There is a huge difference between those situations, and you are glossing over it.

> How do you add something to the dock? Rearrange it? Remove it?

Drag the items around. As mentioned above, the only truly intuitive interaction is direct manipulation.

Regardless, you are still mixing up two concepts: specific features versus foundational interactions. Removing an item from the dock is a specific feature. It's not that big of a deal if someone can't figure out how to do that one thing.

What is a big deal is if the reason they can't figure that out is that the very mechanism of interacting with the UI to find the feature is literally invisible. This is most important for things like the app bar or the charms bar. There are fundamental things you can't do at all in the OS if you don't know the magic gesture to bring up those bars, and there is zero on-screen affordance for them. No amount of exploring and poking around is going to help someone with no training figure out how to do that. You have to actually show them.

That means it's not just one specific feature they can't find, it's every single feature which is revealed through that common magic gesture. Every single app is harder to use as a result of that core interaction being so obscure. Likewise, every app which relies on lists with selections is harder to use because the method of making a selection in a list is so obscure that no one can guess it. There is no way to just look around and discover that interaction. You just have to know the magic gesture. To quote from the book again, "We have seen over and over again that users cannot and will not guess your gesture language".

That is why the OS is fundamentally harder to use. Not because this one feature or that other feature is hard to figure out, but because from the top down the core interactions which enable you to use basic features are not discoverable. Maybe in 5 or 10 years, with enough determined effort, Microsoft can train a critical mass of the population to be able to use their new OS so that these new gestures become ingrained in our culture, but I don't think they have that much time to get this right. If it takes that long they are in trouble.