
Not parent, but I'll answer.

1) Wake-word recognition is implemented in hardware that cannot be updated remotely.

2) The device is unable to transmit information without lighting up a TX light. Implemented in unmodifiable hardware.

3) Voice recordings are processed and then deleted within one month.

4) The device is run by a company that doesn't make money through marketing or data analysis.



Love your list & know I'm late, but I'd add one item: make your update mechanism resistant to sending "personalized" update versions to specific clients. A client querying a central server can be compromised with a single NSL, while a peer-to-peer solution would require hijacking your connection as well.


1 means it's impossible to offer better voice recognition accuracy or change the wake word, meaning they get pilloried by reviewers for offering a device that's strictly worse than the current gen, and they get pilloried by HN for contributing to tech waste.

1&2 get the company pilloried by HN the moment a vulnerability comes out.

3 is a requirement that doesn't actually solve any problem, and it blocks a legitimate ML need: comparing performance against a known baseline.

4 is overbroad - most software companies make their money through data analysis of one form or another.


Be honest, even if the manufacturer tells you #1 is true you're not going to believe them.

You're setting up an impossible set of demands that no manufacturer is going to meet and no consumer really cares about (other than perhaps yourself and a few tinfoil hat wearing crypto-anarchists).


The iPhone hardware meets my standards. The Secure Enclave behind Touch ID is documented in detail, both its design and its security. Apple's business model is not data collection, and the legal fights they have put up give me a reasonable belief that they take my security and privacy seriously enough for me. They are also printing money, so it seems to be workable as part of a business model. I think you are wrong on all counts.


I think you misread my comment. I'm actually a huge fan of the Secure Enclave and the way that Apple protects customer privacy. What I was commenting on was the distrust a lot of people have, combined with a lack of knowledge about how the hardware works. For example, Amazon's Echo devices have a physical circuit that disconnects the microphone when you mute it. They could have done it in software, but they chose to break the physical circuit so that there was no question in customers' minds whether the device was still listening when it was muted.

I trust Amazon as much as I do Apple to protect customer privacy.


Can't you set up Alexa on a Raspberry Pi? You could then have your own trigger word, "hello pi" or whatever, that you've programmed it to listen for, and only then activate Alexa.

Pain in the ass but you could open source it for the rest of the world.
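The gating idea above can be sketched as a tiny state machine, independent of the actual audio stack. This is a minimal illustration, not a real Alexa integration: `WakeGate`, `detector`, and `forward_fn` are all hypothetical names, and the frame type is whatever your audio pipeline produces.

```python
class WakeGate:
    """Forwards audio frames upstream only after a local wake word fires.

    `detector` is any callable returning True when the wake word
    ("hello pi" in the comment above) is heard in a frame.
    `forward_fn` is whatever sends audio to the cloud voice service.
    """

    def __init__(self, detector, forward_fn, awake_frames=50):
        self.detector = detector
        self.forward = forward_fn
        self.awake_frames = awake_frames  # how long the pipe stays open after the trigger
        self._remaining = 0

    def feed(self, frame):
        if self._remaining > 0:
            self.forward(frame)      # pipe is open: stream to the voice service
            self._remaining -= 1
        elif self.detector(frame):
            self._remaining = self.awake_frames  # wake word heard: open the pipe
        # otherwise: frame never leaves the device
```

The point of the design is that nothing is transmitted until the local detector fires, which is a software analogue of the physical mute circuit mentioned upthread.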


Yeah, I think you are correct here. I think I would also be happy if someone else was selling me the hardware and I controlled it more, and it just was a hardware pipe to the voice service provider.


Hotword recognition already exists and works on fairly modest hardware (I've never tested it on a Pi, so I don't know specifically how well that would work, or more accurately how fast). The trouble is that it's an enormous hassle to set up, and while it is open source, it involves some company or other having its grubby paws all over the recognition networks. Which may or may not rub you the wrong way.



