The same rule applies to visual metaphors: as with any
literary metaphor, a visual metaphor confuses if it doesn’t
clarify; it breaks if you stretch it; and it becomes
ridiculous if you combine it with a second or third
metaphor.
Avoid metaphors that do not clarify; if a metaphor darkens
the meaning of what you are trying to express, don’t use it.
I think it would be clearer if there were some examples of nested metaphors, or of metaphors that 'darken' other metaphors. Maybe there are in the actual text, but out of context the quote seems a little abstract.
We found that the iPad applications we designed were relatively easy to translate back into websites. The iPad could prove to be a wonderful blueprint for designing websites and applications. If it works on the iPad, then with a few tweaks it will work on a laptop.
In that case you might as well skip the iPad application and focus on the cross-platform web application. Many of the current iPad applications don't really need to be native apps.
Glad I'm not the only one who thought many of the popular iPad applications were silly.
The Average Reading Time idea is intriguing; however, I don't expect it to be of much use. Reading time varies vastly depending on the material. Nobody reads a journal article at the same pace as a novel, for instance.
Average Reading Time certainly seems like something you would have to try if you want to find out whether it’s useful.
They should also drop the pseudo accuracy. Telling me the time down to the second (or are those minutes – hm, another source of confusion) seems ridiculous.
It knows how quickly you change/scroll through pages while you read, so it can make a reasonable assumption of the number of words you read per minute and create an estimation that way.
Yes and no. I've tried this sort of thing before, and obviously the machine can't know when you are actually reading a paragraph slowly or when you've just been distracted. It would have to be clever about weighting an overall average with a short term average. I'll be curious to see how they do it, definitely.
Good points. Given enough data it could do a decent job at normalizing lows & highs, enough to be useful anyway.
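For what it's worth, the weighting idea discussed above can be sketched in code: a long-term moving average anchors the estimate, a short-term one tracks the current session, and samples far below the running average are treated as distractions rather than genuinely slow reading. Everything here (the class name, the weights, the outlier threshold) is hypothetical, not how any real reading app does it:

```python
class ReadingSpeedEstimator:
    """Blend a long-term average reading speed with a short-term one,
    so a single long pause on one page doesn't skew the estimate.
    All parameter values are illustrative guesses."""

    def __init__(self, long_alpha=0.05, short_alpha=0.5, outlier_factor=3.0):
        self.long_wpm = None      # slow-moving average (words per minute)
        self.short_wpm = None     # fast-moving average for the current session
        self.long_alpha = long_alpha
        self.short_alpha = short_alpha
        self.outlier_factor = outlier_factor

    def record_page(self, words, seconds):
        wpm = words / (seconds / 60.0)
        if self.long_wpm is None:
            self.long_wpm = self.short_wpm = wpm
            return
        # Discard samples far below the running average: the reader
        # probably put the device down rather than read very slowly.
        if wpm < self.long_wpm / self.outlier_factor:
            return
        # Exponentially weighted moving averages at two time scales.
        self.short_wpm += self.short_alpha * (wpm - self.short_wpm)
        self.long_wpm += self.long_alpha * (wpm - self.long_wpm)

    def minutes_left(self, words_remaining):
        # Lean on the session speed, but anchor on the long-term average.
        blended = 0.5 * self.short_wpm + 0.5 * self.long_wpm
        return words_remaining / blended
```

The interesting design question is the outlier threshold: set it too tight and slow, careful reading gets discarded as a distraction; too loose and one interruption wrecks the session average.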
It would be interesting to see a proximity sensor in the iPad. I'm not sure what the range of those is. The accelerometer might be sensitive enough to detect when it's being held up as it's going to move a little bit in your hand.
Yeah, that's where I went from "this could be useful" to "that's stupid". It's like doing an internet speed test to determine how long it will take to download a file, ignoring how fast the file has actually been downloading.
It's one thing to calibrate, it's another to rely on context-sensitive data and pretend it is universal, when instead you can just rely on the context-specific data for any specific context and refine the estimates as you go. To my knowledge, web browsers do not "calibrate" like this.
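To make the download analogy concrete, here is a minimal sketch (the function name and window size are made up) of estimating time remaining from recently observed throughput, rather than from a one-off speed test, so the estimate refines itself as conditions change:

```python
def eta_seconds(bytes_remaining, throughput_samples):
    """Estimate time remaining from recently observed throughput
    (bytes per second) instead of a single up-front measurement.
    Averages only the most recent samples so stale readings fade out."""
    recent = throughput_samples[-5:]   # sliding window of latest readings
    avg = sum(recent) / len(recent)
    return bytes_remaining / avg
```

A browser download dialog works roughly this way, which is why its estimate jumps around early on and settles as more samples arrive.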
What I was hoping for was some kind of practical advice I could use in my work. If I knew what that looked like, I might not need it. Maybe something like: "we tried this and users hated it, but then we tried that and they loved it, and the lesson is that that sort of thing works better in such-and-such circumstances".
First of all, for many (if not most) users the iPad is part of a different workflow than the iPhone, and if people use it differently, we should design for it differently.
Each iPhone screen should serve a single purpose and unambiguously lead to the next action. That way you get an app that can be used on the go. My guess is that this is not true for the iPad: since it's not likely to be used on the go, there will be a stronger preference for multi-purpose screens. I would like someone to talk about that.
"In order to answer the questions: Is the font big enough? Does it render well? Is the Schriftbild (text impression) inviting or rejecting? How does it feel to read? ...we had no other choice than printing out our screens on a 1:1 scale. We had to print out hundreds of pixel designs to get a feeling for the new canvas and the high resolution... After two months of printing, we did get the type choices and sizes quite right..."
I'm not sure I fully agree with his argument against using materials and textures to humanize a software interface. I do know that it requires a designer whose execution is better than average, but I don't think it's as much of a distraction as they proclaim. I also think it largely depends on the type of software one is designing. For example, the stock Notes app is fine, but for Cultured Code's Things it would not fit, since task management is a much deeper interaction.
Anyhow, this article wasn't really informative; it seems mostly opinion-driven, and the message is scattered. I showed the iPad to my parents this weekend, and they were enamored with it because it was easy to relate to compared to their clunky Windows-based "laptops" -- a UX that's cold and impersonal.
Did you read the full article? It doesn't seem like it, since "it requires a designer whose execution is better than average" is exactly what it says. And not just that: it also defines the criteria for using metaphors and textures.