Hacker News | jiiam's comments

My experience with IaC output is that it's so broken that it's not only unhelpful but actively harmful.


There are quite a lot of people who believe that Telegram stores messages in plaintext. I would like to know how they got that idea.

So far the best I've got is something along the lines of: if you can get your chats when you log in with a new device, then so can a Telegram employee. With no proof of the claim, of course.


If the chat is not end-to-end encrypted, which Telegram “cloud” chats are not, then by definition Telegram (the company) has access to the chats. Full stop.


Something being true only by definition is unfortunately a very weak claim.

For example, the company's servers could be hosted on an island with armed guards instructed to burn everything if anyone approaches, with decryption happening only on those servers: sure, they have access by definition, but in practice they really don't.


On the contrary, it’s a very strong claim.

The guards could decide they’re not getting paid enough and steal the data. Or the government could arrest them. Or the government could MITM the data center. Or any hundreds of different scenarios.

At the end of the day, the only thing preventing somebody from accessing the data is that they just… don’t.

This is very weak security and it is why cryptographers and security professionals call it “effectively plaintext.”


I am saying that in practice the security might be structured in such a way that it requires several different parties to collude, rendering it essentially fine.

I mean, having to modify server code in order to access data that is "effectively plaintext" is not so different from installing a backdoor inside the client: it's not like the user has any choice of client, so even for apps like WhatsApp and Signal that run E2EE, one is still making a leap of faith.

If we add the fact that everything runs inside an OS built by companies that may or may not be constantly spying on their users, we could say that by definition a lot of the data in our lives is "effectively plaintext".


EDIT: regarding the part about Signal and WhatsApp, I must clarify that of course the possibility of inserting a backdoor on the server side is far more dangerous than on the client side: Signal has verified builds, so a client-side backdoor would be evident and the user could stop using the service. The same actually holds true for any app using E2EE, provided the user avoids auto-updating and waits for some confirmation that it's OK to update, at least as long as we can assume that any client-side backdoor would be found by independent researchers.

I also want to repeat the original point that started this whole conversation: how easy it would be for Telegram to access the chats, and whether the justice system can compel them to do so.

When people say it has the data in plaintext, I take that as "they can access it whenever they want, right now, without changes", and yes, of course they could ultimately access the data (in fact they don't claim to be unable to). What they claim (and I believe it feasible) is that even if a judge seized all the assets and servers under their jurisdiction, it would be impossible to decrypt any user data.


If the only thing stopping them from decrypting your messages is instructions to their own employees to not allow it to be done, that is not a defense against providing access to law enforcement. They can just change those instructions at any time without anybody knowing. Just like they can just change the server software to allow it.


I mean, at this point they could also change the code running on the users' devices; probably someone would notice, but that's another story.

The point is: even if they could, must they do so when compelled by the authorities?


Somehow they must transfer the chat history from their servers to the user. Either it's plaintext, or it's encrypted and they either use the keys to decrypt it themselves or send the keys to the user along with the encrypted content. In all cases they can simply access the contents themselves.


I think this statement requires a stronger argument, since even if they could have access to the data in theory, there are concrete implementations where it could be extremely infeasible.

For example, since we are in the realm of speculation, I propose the following alternative to plaintext or accessible decryption keys: the decryption could happen inside a Nitro Enclave, making it essentially impossible to access the data without changing the application code.

I'm not saying that this is what happens, just that I don't think one can so easily deduce "they can access the data" from the fact that "they send the chat history to you".


The protocol is fully documented. You are free to read it for yourself without resorting to guessing. [1]

Messages are not stored in plaintext. The claim they are stored in plaintext is false.

One can have cogent arguments about one's preference for E2EE or not but the repeated claim here and elsewhere that messages are stored in plaintext is simply hearsay.

[1] https://core.telegram.org/mtproto/AJiEAwIYFoAsBGJBjZwYoQIwFM...


I didn't make that claim. I said it's either this or that. That's a different claim.


Just to be clear, are you saying that his claim

> Telegram uses the MTProto 2.0 Cloud algorithm for non-secret chats[1][2].

> In fact, it uses a split-key encryption system and the servers are all stored in multiple jurisdictions. So even Telegram employees can't decrypt the chats, because you'd need to compromise all the servers at the same time.

is false? If so can you cite a source? (The claim is just a summary of the FAQ https://www.telegram.org/faq#q-do-you-process-data-requests)


Yes. An employee can impersonate a user by registering a device in their name and intercepting the confirmation code, and then read all non-secret chats and private groups of that user.

At least one employee must have the ability to intercept the code.

(Unless the user has 2fa enabled, but that is not the default configuration.)

There are probably easier ways, if we knew more about how they administer their infrastructure.


Maybe? When you log in from a new device you're asked to provide an OTP, so maybe there is at least that layer of protection which, hopefully, requires some circumvention at the application-code level.

However I think the real question is: even if that's possible, can law enforcement compel Durov or an employee to do so?


> can law enforcement compel Durov or an employee to do so?

The E2E encrypted comms are a red herring. There is plenty on Telegram that is public, plaintext and presumably illegal.

If Telegram refused to respond (note: not bend over and comply, just respond) to French legal requests in respect of plaintext criminal behaviour the way any other company would and should, that’s somewhat damning. If Durov went above and beyond and interacted with that content, his goose—as the author put it—is cooked.


If you don't use 2FA then the government can simply intercept the SMS code for any phone number. The Russian government did this against opposition activists, and it prompted Telegram to add a password as a second factor. So any service which allows login or restoring access via SMS (including Gmail in its default configuration) is vulnerable to this kind of attack. It seems that people in the West are unaware of it.


EDIT: I just want to clarify that I don't believe the claim that an employee can intercept the validation code


There existed one server which sent the code, so whoever administered that server could trivially have intercepted it by modifying the software running there to copy/log it.


This could be extremely infeasible. For example, the code could be generated by a third party and encrypted before arriving on a server controlled by Telegram and being sent to the user. Or it could be generated inside a Nitro Enclave. Sure, ultimately someone could modify the server code somewhere to log the code or any other specific message before it gets encrypted, but at that point we are talking about inserting a backdoor.


According to the Telegram FAQ (https://www.telegram.org/faq#q-do-you-process-data-requests) data on their servers is encrypted and the keys are split and stored in different jurisdictions (and different from the jurisdiction where the data is stored).

With such a setup what does it mean to comply with warrants? Are we saying that Telegram should voluntarily yield all information regardless of jurisdiction?
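For what it's worth, the kind of split-key design the FAQ gestures at can be sketched in a few lines. This is purely my own toy illustration (plain XOR secret sharing; I have no idea what Telegram actually does): the data key is split into random shares, say one per jurisdiction, and only combining all of them recovers it.

```python
import os

def split_key(key: bytes, n_shares: int) -> list[bytes]:
    """Split `key` into n_shares; ALL shares are needed to rebuild it."""
    shares = [os.urandom(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:  # XOR the random shares into the final share
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine_key(shares: list[bytes]) -> bytes:
    """XOR every share together to recover the original key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

key = os.urandom(32)
shares = split_key(key, 3)         # e.g. one share per jurisdiction
assert combine_key(shares) == key  # all three together recover the key
# any proper subset is indistinguishable from random noise
```

Under such a scheme, seizing the servers in one jurisdiction yields only random-looking shares, which is consistent with the FAQ's claim.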


Ah, this multi-jurisdiction setup explains why Durov himself was targeted. As the presumed controlling entity behind that network of shell companies, serving him with a warrant seems like the most effective legal means to make Telegram comply.


> Both openvpn and wireguard protocols are trivially blocked by DPI.

I don't understand why this matters; it's not like your ISP will ever block this kind of traffic, since every company with any form of IT department uses some form of VPN, making it not only a legitimate kind of traffic but also quite a common one.
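To be clear, I'm not disputing that the traffic is identifiable: a WireGuard handshake initiation, for instance, is always a 148-byte UDP payload starting with the message-type bytes 01 00 00 00, so a toy matcher is trivial (illustrative sketch only; real DPI boxes check much more).

```python
# Toy DPI-style check for a WireGuard handshake initiation packet:
# fixed 148-byte length, first four bytes are type 0x01 + three
# reserved zero bytes.
def looks_like_wireguard_handshake(udp_payload: bytes) -> bool:
    return len(udp_payload) == 148 and udp_payload[:4] == b"\x01\x00\x00\x00"

fake_handshake = b"\x01\x00\x00\x00" + b"\x00" * 144
assert looks_like_wireguard_handshake(fake_handshake)
assert not looks_like_wireguard_handshake(b"GET / HTTP/1.1\r\n")
```

My point is only that an ISP matching this on residential lines would break far too many legitimate users to be worth it.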


I'd think that companies use commercial-grade internet and normal people use residential internet. If so, it would be easy to imagine the ISP blocking some features on residential subscriptions.


Most companies certainly won't be using "commercial grade internet" in the way that term is usually used. That would usually be reserved for large enterprises, which really only covers a small part of the workforce in practice.

Many businesses don't bother even subscribing to a business package, because something like a static IP is unnecessary for them.

Further, the point regarding VPNs still stands -- think of the chaos it would cause for many people working from home (on residential connections). And that's just one example.

I don't find it plausible for an ISP to block this.


Actually, there is "commercial grade internet" at least in my country. The main difference is that it is several times more expensive, and in the office buildings the owner doesn't allow ISPs with cheaper "residential" plans.


Business, yes, that was the word I was looking for, thanks! So the ISP could just limit the residential packages, limit the business packages to actual businesses, and that's all.


> It's the reason some people will tell you Arch Linux worked perfectly on their machine despite having plenty of problems.

I feel personally attacked


It's all fun and games until your computer is bricked because you missed an update.


But that's when the real fun starts!


I used to run a Telegram webhook for myself and kept telling myself to turn it into a service. You can deduce from the fact that I'm not sending you a link that it hasn't happened yet.


Thanks, I needed to have this thought formalized. I see now why I have a hard disk full of perfectly architected dead projects, and also why the live ones are never going to be perfect.


In game-theoretic terms all of this looks a lot like a Nash equilibrium to me and, as such, feels inescapable.


As long as everyone treats each other as a p zombie then yes that’s true


Also in Haskell:

1. Start by doing everything in ReaderT Env IO

2. Learn all about mtl (or monad transformers, free monads, freer monads, algebraic effects, whatever)

3. Do everything in ReaderT Env IO

