... which of course uses the current platform's character set, not a consistent one across platforms. Definitely not what you want in this kind of application (unless Android is UTF-8 in all countries? I don't code for it). That was in this class:
Just thinking about that - this class of error gets reported by FindBugs and other static analysis tools. So this bug suggests that no such tools are in the build - for a security app, where correctness should be the top priority, that's surprising.
So I raised an issue with Surespot, and it is indeed the case that Android uses UTF-8 everywhere by default, independent of locale (much more sensible than Sun's Java!). The code is correct. Nothing to see here, move along.
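For anyone unfamiliar with the pitfall being discussed: the no-argument `String.getBytes()` uses the JVM's default charset, which varies by platform and locale on desktop Java (Android happens to default to UTF-8 everywhere). A minimal sketch of the portable alternative:

```java
import java.nio.charset.StandardCharsets;

public class CharsetExample {
    public static void main(String[] args) {
        String message = "héllo";
        // Platform-dependent: encodes with the JVM's default charset,
        // which on desktop Java varies by OS and locale.
        byte[] risky = message.getBytes();
        // Explicit and portable: always UTF-8, on every platform.
        byte[] safe = message.getBytes(StandardCharsets.UTF_8);
        System.out.println(safe.length); // 6: 'é' is two bytes in UTF-8
    }
}
```

Passing the charset explicitly makes the intent auditable, which is exactly what static analysis tools like FindBugs check for.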
"Only the person you send the message to can read it. Period."
Using this kind of sentence for new software that hasn't been reviewed by the community is dangerous. There are people who risk their lives using this kind of app.
The thing that puzzles me is this sentence:
"You can delete your message from the receivers phone."
I don't see any information about it in the 'how it works' page. Do they do that in some cryptographic way that I can't imagine? Or does the application basically remove the content when the server requests it - something we could circumvent simply with a backup or by modifying the code of the app?
You can never, ever be assured that a message has been deleted. There's always packet sniffing, a modified client, even just taking a screen dump. It requires trust in the participant.
Recently I read a white paper in which a security tester was talking to a malware author on Skype. The author mentioned an IP address and deleted it moments afterwards. The researcher dumped the machine's RAM to a file and searched for the string (successfully).
Indeed. More like: the person who has recorded your encrypted message probably won't be able to read it until they can get hold of the session key - perhaps by gaining physical access to your or the receiver's phone, by installing malware on either phone, or because of a flaw in the (P)RNG that was used to generate the key.
It is open source, so at least it is trivial to create a clone that interoperates flawlessly while copying off plaintext to a third party. (That's not a flaw in the project as such, but it is a risk with using "security" software in general - how do you verify the security software? In some ways this is made worse by app stores, because they delegate trust away from the user and into obscurity: the app store assures you that the app you installed is the app someone uploaded - not that it does what you think it does.)
> "Only the person you send the message to can read it. Period."
> Using this kind of sentence for new software that hasn't been reviewed by the community is dangerous. There are people who risk their lives using this kind of app.
It is also false, since it seems that their threat model also includes the server being able to transparently MITM you and read all your messages. A pretty egregious overstatement, I think.
Yeah, I guess verifying the full length of the fingerprint would mitigate that, and not doing that exposes you to a MITM attack anyway. Not much less secure than exchanging the keys directly, then, you are right.
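For context on what "verifying the full length of the fingerprint" means in practice: a key fingerprint is typically a hash of the raw public-key bytes, compared out of band between the two parties. A minimal sketch (the helper name and hex format are my own, not Surespot's actual code):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Fingerprint {
    // Hypothetical helper: a fingerprint as the hex-encoded SHA-256
    // of the raw public-key bytes. Both parties compute this locally
    // and compare the full string over a separate, trusted channel.
    static String fingerprint(byte[] publicKeyBytes) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(publicKeyBytes);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```

Comparing only a short prefix of the fingerprint is weaker, since an attacker only needs a collision on the truncated portion.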
Sounds like the same problem as DRM. Once content is made available to the user, there's always a chance to intercept and copy it (unless we can install a DRM chip inside people's brains, and even then I imagine there would be some hacks).
You can delete your message from the receivers phone.
That second bullet point set off my BS detector (and is where I stopped reading). No system on earth lets you reliably delete a message sent to another device over the Internet, after the fact. Neither can any such system reliably prevent users from sharing pictures that they can see on their device.
This site reads like an ad for a perpetual motion engine.
Which is too bad, because open-source encrypted mobile chat is an interesting thing in and of itself, without impossible pie-in-the-sky claims.
"When a user is created and its public keys uploaded to the server, the server signs the public keys. Clients that download the public key then validate the signature of the key against the hardcoded server public key in the client. This ensures a MITM attack trying to use a rogue key pair to impersonate a user will be prevented."
This doesn't look good to me. The process implies trusting the central server for cryptographic operations, which is very insecure. The central server should only be used as a transport mechanism and should not be involved in any cryptographic operations that work with secret keys. If someone seizes control of the server (an easy task for government agencies, especially these days), they could forge users' public keys. The fact that the server's public key is hardcoded in the client application doesn't solve the problem either. What if the server key is compromised? You'll have users with a compromised key hardcoded in their app. Not a good situation. I see a bunch of other security-related problems on the algorithm description page as well, but this one is crucial.
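To make the trust problem concrete, the scheme the quote describes amounts to something like the following (all names here are hypothetical illustrations, not Surespot's actual code). Note that verification trusts whoever holds the server's private key: anyone who compromises that key can sign a rogue user key and it will verify just as well.

```java
import java.security.GeneralSecurityException;
import java.security.PublicKey;
import java.security.Signature;

public class KeyVerification {
    // Sketch of the described scheme: the client checks a downloaded
    // user public key against a signature made by the server, using the
    // server public key that ships hardcoded in the client binary.
    static boolean verifyUserKey(byte[] userPublicKeyBytes,
                                 byte[] serverSignature,
                                 PublicKey hardcodedServerKey)
            throws GeneralSecurityException {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(hardcodedServerKey);
        sig.update(userPublicKeyBytes);
        // True for ANY key the server's private key has signed -
        // including a rogue key, if that private key is ever compromised.
        return sig.verify(serverSignature);
    }
}
```

This is structurally the same trust model as a single-CA PKI: the check proves the server vouched for the key, not that the key belongs to the person you think it does.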
Not exactly. With SSL, encrypted communication goes between client and server. In the case of this app, encryption is done with users' public keys; no server is involved in encrypting messages. The server's role is only to sign public keys to ensure their authenticity. But even that alone is bad and insecure practice.
That "signing public keys to ensure their authenticity" is exactly what any of the (possibly as many as 600) public CAs that your browser and/or OS come pre-configured to "trust" already do.
Who the hell are "Xramp Global CA"? "VRK Gov Root CA"? "UCA Root"? "Trusted Certificate Services"? They're all just random selections from the first page of trusted root certs in this OS X machine's list of System Root keys. Any of them could choose to "authenticate" a public key that claims to be my bank. Apart from the few pinned certificates in Chrome (mostly Google certs, I think), I've got no more reason to believe any SSL connection I make is "authenticated" than Iranian Gmail users should have had when a DigiNotar root CA cert had signed those rogue Google SSL certs.
It also used to "secure" your email communication with Lavabit… and the 8 (alleged) PRISM participants. From what I read, "trusting the legal system for that" perhaps isn't a particularly prudent idea.
If the government doesn't want to recognize the value of your money, they don't need to snoop on your communications with your bank to do it.
Conversely if you're actually interested in protecting your information, then client-side encryption with self-authenticated keys has always been the only solution.
Also check out https://threema.ch/en/ - it has the same functionality as WhatsApp (except for group chat), but secure/encrypted, and is available for Android and iOS.
They're using the NaCl library for cryptography and proper encryption of messages before leaving the phone can be validated here: http://threema.ch/validation/
(I'm not affiliated with Threema, just a regular user who likes the product)
* No discussion of how key exchange problem is solved
* Makes misleading security claims, e.g. "when you delete a sent message it will be removed from the receivers phone"
Basically falls into "don't touch with a barge pole" category of crypto software.
Crypto software isn't a category where you can make it up as you go along, it has to be designed upfront with a set of security considerations for it to have a chance of survival in the real world.
You need to understand the standard approaches to threat modelling.
It should be trivial for someone to look at the documentation and quickly answer basic security questions like "Does it defend against replay attacks ?" and "Does it leak message size ?"
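On the replay-attack question: the standard defence is to bind each message to a monotonically increasing counter (or nonce) and reject anything already seen. A minimal sketch under assumed names (this is a generic illustration, not Surespot's design, and it ignores persistence and multi-device issues):

```java
import java.util.HashMap;
import java.util.Map;

public class ReplayGuard {
    // Last message counter accepted from each sender.
    private final Map<String, Long> lastSeen = new HashMap<>();

    // Accept a message only if its counter is strictly greater than
    // the last one seen from that sender; otherwise treat it as a
    // replayed (or badly reordered) message and drop it.
    synchronized boolean accept(String senderId, long counter) {
        long last = lastSeen.getOrDefault(senderId, -1L);
        if (counter <= last) {
            return false; // replay detected
        }
        lastSeen.put(senderId, counter);
        return true;
    }
}
```

The counter itself must be covered by the message's authentication (MAC or signature), otherwise an attacker can simply rewrite it.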
https://github.com/surespot/android/blob/master/src/com/twof...
...but someone making a mistake with getBytes() usually does it everywhere.