
So after that meeting at the MSFTB, I delved into the possibilities for a CAD system for the blind. As an engineer, I'm very familiar with standard CAD.

Yes, you can have a haptic feedback probe (e.g. a wand) that can be used to trace the surfaces and edges of a virtual object, but imagine trying to understand, say, a skyscraper model with a single probe. Or a nest of pipes, valves and cables. And what about understanding precise scale, turning layers on/off, or slicing a model to understand interior arrangements? The challenge is overwhelming.

However, at the time I thought, "OK, first things first - how do you sense basic objects like blocks and spheres without sight?" It seemed to me that you'd want to exploit as many of the body's sensory inputs as possible to get the job done. Also, at that time haptic feedback glove research was growing - mostly for remote 'robotic arm' control.

Now, I see there has been a lot of progress in this area: http://dev-blog.mimugloves.com/data-gloves-overview/

It would be great if a human could reach into a model and explore surface contours with both hands (all fingers). I think the ideal 'gloves' would not only impart resistance at the fingers and wrist, but also at the elbow, shoulder and perhaps torso to give the most accurate and realistic sense of a model. In addition to resistance, it would be ideal for the glove to simulate texture using vibrational haptic feedback for even more realism. After that, the UI would really need to be fine-tuned to enable scaling, slicing, layering, measuring, and constructing in a virtual space using tactile/audio feedback instead of visual feedback. What this UI is exactly, I don't know.
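
To make the 'resistance' idea a bit more concrete, here's a minimal sketch of the usual penalty-force approach: treat the object as a signed distance field and push back on a fingertip in proportion to how far it has penetrated the surface. The function names, the stiffness value, and the sphere example are just my own illustration, not any particular glove's API:

    import numpy as np

    # Signed distance to a sphere: negative inside, positive outside.
    def sphere_sdf(point, center, radius):
        return np.linalg.norm(point - center) - radius

    # Penalty-force haptics: when the fingertip penetrates the surface,
    # push back along the outward normal, proportional to penetration depth.
    def resistance_force(fingertip, center, radius, stiffness=300.0):
        d = sphere_sdf(fingertip, center, radius)
        if d >= 0.0:
            return np.zeros(3)          # outside the object: no resistance
        normal = fingertip - center
        normal /= np.linalg.norm(normal)
        return -d * stiffness * normal  # -d is the penetration depth

    # Example: a fingertip 1 mm inside a 5 cm sphere at the origin.
    tip = np.array([0.049, 0.0, 0.0])
    print(resistance_force(tip, np.zeros(3), 0.05))  # ~[0.3, 0, 0] N outward

The same loop would run per fingertip (and, for elbow/shoulder resistance, per tracked joint), with texture simulated by modulating a vibration motor based on how the contact point slides along the surface.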


