
There's more: you have to hold your phone/tablet, or you have to sit/stand at a desk that can hold the screen, keyboard, etc. for you. The broader problem here is the somewhat caveman-style interaction we have to do with computers/devices, i.e. going right up to them, touching them, being near them. This is quite often bad for our health (see problems with sitting, RSI, etc.) and means they use up valuable space in our homes.


Wait a minute...

Caveman like?

That seems way over the top.

Essentially making the computing experience part of us is super compelling. I get that.

But it all remains in very early, highly speculative days.

As for valuable space, whose systems do you really trust? The ones running in my home are either known and not a worry, or untrusted and managed appropriately.

For it to work like you hint at, we need either:

Massive, central systems that people essentially rent and are forced to trust, or

Small ones so power efficient they can be implanted and perform in meaningful ways using minimal hardware, ideally communicating along internal channels somehow.

Even with those, sharing is a big deal, potentially exposing a now very dangerous and highly personal resource to others.

Meanwhile, the cavemen sent people to the moon on far less overall computing than we often carry in our pockets these days.


That's a false dichotomy. You can easily run the computer in a cupboard and communicate with it over the local network. I run my own private "cloud" services today. Of course, that might not be how it turns out because data is too valuable. But the headset should just be a display for the computer in your cupboard.

Just for the record, I'm not even a little bit excited by this, nor do I expect I'll ever buy such a device. I'd want the full matrix experience before I went in, so that I could do things I can't do in real life. However, chances are I won't want to do those things any more if/when it ever becomes a possibility.

I also share your concerns about how this stuff will likely be implemented (ie. on their cloud). But tbh that's already lost at this point.

But I can't deny that getting away from keyboards, monitors, desks etc is hugely valuable.


Maybe.

I don't like the approach. I am not at all sure "should" is the right word to use here. It could be very compelling to operate that way.

I think it might be. And I want to try it, same as I have everything else to see what the kryptonite is and how it will matter.

The better, higher value way to move past keyboard, mouse and display is to talk up the new paradigm. People will use it because it is new. They will also use it because it nails some use case or other that is super important to them.

Leading with "our existing interfaces suck" sets all the wrong expectations and could actually impact the incoming tech!

There is no need to punch down on these things.

Setting aside the security and privacy issues, which are significant, we are left with the value potential and what it will take to actualize it.

VR mostly sucks unless one is seeking immersive experiences. And there it shines bright indeed. Outside of that, the tech is a jarring, high-latency, low-fidelity mess when passing the real world through, as is often needed.

AR eliminates that, while still being both immersive and able to augment what one is doing in various ways that appear to carry considerable use value.

I think about CAD, as one example I am very familiar with. I have used pretty much everything ever made for CAD, too: buttons and dials, macro pads, light pens, foot controllers, space controllers, keyboard, mouse, old-school tablets, new-school touch-and-stylus visual tablets, voice input, and I could go on.

My favorite by far is a big, fast display capable of driving 3D active shutter glasses. Modeling becomes more fluid, and the user can actually see complex surfaces in an intuitive way. Assembly is fantastic when one can fly around the thing with a space controller while still being able to pick, trigger macros, and just build.

Many would call a setup like that advanced. Truth is, it is a nice Samsung 3D plasma in my living room and a laptop with a 3D controller and mouse plugged in, often on my couch for comfort.

Despite how compelling that is, few people do it.

Now, take away the keyboard. How do we input specifics? Voice input? Waggle flanges at virtual keyboards and other devices?

We could develop a more precise language, borrowing from the sciences, and describe things while letting the system create them.

Maybe. But there remain a ton of details to contend with.

Let's say I remain skeptical.

But does any of this make keyboard and mouse outdated, caveman-like, or just bad?

Nope.

The way I have always done it is to just use the new stuff and see what comes. Weave that into my process and gain value without also increasing costs and risks.

Heck, I have even used a system with a full haptic interface! The software was called "Freeform"; the company was SensAble, or something close to that.

Basically, it was all about clay, and one could carve and paint on it using the haptic! One fun thing was to carve a hole, then set the tool into it and get up for a drink, with the haptic floating in mid-air just as if it were placed into a real block of clay!

I loved that thing and when I put little kids on it, they made surprising stuff.

But it was no real answer for much outside its killer value proposition.

Few things are.

Maybe I will put this another way:

Every UX device has both a super power and one or more kryptonites to deal with.

Which explains why VR is not going to boom. And it explains how AR might, too.

And should that happen, it won't be because of how shitty one may feel keyboard and mouse are.

Nope.

It will be the superpower, and with AR that is the ability to overlay information onto our already keen senses.



