When tech says ‘no’ (ben-evans.com)
223 points by mooreds on Sept 4, 2023 | 328 comments


I spent a few years in Ireland and the Netherlands lobbying against insecure voting machines. I had so many conversations with politicians and civil servants where these effects were in abundance. In each country I was representing a group that included the most eminent and experienced computer scientists in the country. Whenever we briefed someone for the first time, it would only take a minute or two to cover how without a voter-verified paper audit trail, no-one knew how to build a system that provided anonymity, verifiability, and resistance to voter coercion and vote selling. But it just never clicked, so many of them refused to believe that we couldn't simply "nerd harder". And of course, there was no shortage of charlatans who would tell them that they could solve it.

In Ireland, we put enough pressure on the politicians for them to create a cross-body commission to investigate. Because anyone could make detailed submissions, and because the commission treated those submissions the way the clerks of the Supreme Court treat amicus briefs, it was pretty effective. The commission ended up pausing, and ultimately abandoning, the rollout. Ever since, I've learned to appreciate any avenue to "de-politicize" a controversy and get it in front of that kind of body.


I gotta admit, perhaps a bit naive of me, that the concept of "tech charlatans" didn't click for me until now. It's true, they are out there, and I see it now. I mean, I know there are phone scammers, but they've probably been around since before tech. I know there are hackers writing viruses and exploits, but those are oftentimes talented people doing bad things. However, this comment, and a couple of recent experiences, really drove "tech charlatans" home.

One experience was a trip to a crypto conference. Many booths were making unsubstantiated, impossible claims. If you tried to ask how, they couldn't answer. Like, someone would say they'd made transactions instantaneous, and when I asked how they solved the problems of unpredictable networks, they'd have no answer. It got pretty frustrating. I only had to dig through a few claims like these to be convinced they were selling snake oil.

Another (less recent) experience was looking through a spreadsheet of government approved innovation/research grants. I couldn't believe what I was reading. Many of the entries seemed to make absolutely no sense, purposefully using buzzwords to sound smart but having no meaning when unpacked. Buzzword salads. These are funded projects. And to get more money in round 2, all the "innovators" had to do was show any activity, which is very easy to fake.

So yes, tech charlatans. I'm a bit on the old school side of tech, and this gives me cognitive dissonance. I'm used to thinking of software devs/engineers as honest/creative/driven, but I guess this was always inevitable. Our field is very exploitable, because many people put their livelihood in tech, while knowing very little about it.


Well, nobody likes being called a charlatan and you have to be careful with that accusation. Case in point: this article. I hate it because the argument he makes is correct and useful until the end, when he tries to claim that giving certain government agencies access to encrypted messengers is impossible without giving it to all of them "because maths". This is a good example of tech charlatanism and it's the sort of thing that will hurt our industry a lot in the long run. It's why lawmakers often end up not listening to us.

There is nothing that stops tech firms doing exactly what they're being asked to do. Every claim otherwise is obfuscation because tech firms don't want to do it, mostly because of their internal internationalist politics where they don't want to be forced to pick sides and tell some governments "sorry, we're Anglos who choose to give the US/UK governments special access that you don't get because we're better than you". It's an example of the first kind of no, not the third kind.

End-to-end encryption is a viper's nest of false claims like this. There are lots of ways to implement such policies, like this: for each message being encrypted, you encrypt it under a per-message key, which is then itself encrypted under the recipient's key and also a police (public) key. The servers forward the messages to the police so they can decrypt them. If the decryption fails too often (hacked client) then that user is denied access to the network. Yes yes, I know that WhatsApp/Signal and friends use a more complex protocol; that description is a simplified textbook example, but the argument doesn't change.
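
To make the textbook sketch concrete, here's a minimal toy version in Python using PyNaCl (my own choice of library for illustration, not anything WhatsApp/Signal actually use; real messengers layer this kind of thing under ratcheting protocols, and all names here are invented):

    import nacl.utils
    from nacl.public import PrivateKey, SealedBox
    from nacl.secret import SecretBox

    recipient_key = PrivateKey.generate()   # held only by the recipient's client
    escrow_key = PrivateKey.generate()      # held only by the escrow ("police") authority

    def encrypt_message(plaintext: bytes):
        # Fresh symmetric key per message, wrapped for BOTH the recipient and the escrow key.
        msg_key = nacl.utils.random(SecretBox.KEY_SIZE)
        ciphertext = SecretBox(msg_key).encrypt(plaintext)
        for_recipient = SealedBox(recipient_key.public_key).encrypt(msg_key)
        for_escrow = SealedBox(escrow_key.public_key).encrypt(msg_key)
        return ciphertext, for_recipient, for_escrow

    def decrypt_message(ciphertext, wrapped_key, private_key):
        msg_key = SealedBox(private_key).decrypt(wrapped_key)
        return SecretBox(msg_key).decrypt(ciphertext)

    ct, w_recipient, w_escrow = encrypt_message(b"hello")
    assert decrypt_message(ct, w_recipient, recipient_key) == b"hello"  # the recipient can read it
    assert decrypt_message(ct, w_escrow, escrow_key) == b"hello"        # so can whoever holds the escrow key

The server only ever sees public keys and ciphertexts; the point is just that "encrypt the message key to N parties" is ordinary hybrid encryption, not new mathematics.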

Cryptography is a very flexible set of tools, which can easily be used to achieve complex security goals, like empowering some parties whilst disempowering others. The resistance to doing this is legitimate and I even agree with it, but it's also political and not technological. When politicians push back and insist that their police should have access to WhatsApp, and get told it's impossible, well, they are not all stupid and correctly conclude they're being bullshitted. Indeed, some of the MPs in the UK have computer science degrees.

Fact is, buzzword salads can be used to baffle people and get them to agree with you even if you're wrong. Technologists are especially tempted to abuse them when they want to say "no": to make a Type 1 No seem like a Type 3 No. Researchers do the same thing all the time; your complaints about grant funding are as old as the hills. Honest specialists speak clearly even when they might benefit from speaking unclearly.


What you suggest is technically possible but it misses the point: Whose job is it to guard the police's key? The chance that that key will be stolen and a breach will happen to all the police's messages is 100%, either because the police aren't good at IT security or because an insider will be bribed or (worst of all) a dictator takes over and decides to read all the messages without due process and everybody who wrote a message critical of the dictator gets "disappeared."

By not sending a copy to the police, you can guarantee that those things won't happen.


> either because the police aren't good at IT security or because an insider will be bribed or (worst of all) a dictator takes over

... or because a police insider is threatened and/or hit repeatedly with a $5 wrench. One of the most effective decryption tools.


This isn't a new problem, there are complicated architectures to deal with process issues around lawful intercept:

https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst6...

It says it's from 2015, but the Catalyst 6500 product line is much, much older. And quite a few companies sold both mediation devices and encryption software.

All these problems exist with access to metadata, so police and service providers already need to address them today. I'm not saying that this isn't challenging (maybe it is, I don't work in this field), it's just that handling content (especially text messages) within the existing framework wouldn't be such a massive change, because of the existing infrastructure and procedures.


> All these problems exist with access to metadata, so police and service providers already need to address them today.

In countries with higher corruption levels than the US, data/metadata from “lawful” access is sold on a black market. I don't think the problem of keeping that data safe has been solved in a future-proof way even in first-world countries, and if encryption is compromised, sooner or later data or even keys will be sold, stolen, or abused.


This is a Type 1 argument and not even a good one because the effort required isn't even placed on those being regulated.

Police can store keys inside HSMs. Tech firms can give training and develop software stacks to ensure secure workflows if they want to.

You can also use a targeted approach. Allow the police to request intercepts of specific phone numbers. Tech firm rotates the public key being served to the clients, replacing it with a police key. They add a special flag that says "don't put any sign this happened in the UI". Now police can intercept just that specific phone number.
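
As a rough sketch of that directory-swap idea (all names invented for illustration; a real key directory is more involved, and clients that pin or cross-verify keys would notice the change):

    # Hypothetical key-directory lookup. INTERCEPT_LIST and police_public_key
    # are invented names; +447700900123 is a fictional number.
    INTERCEPT_LIST = {"+447700900123"}

    def lookup_public_key(phone_number, directory, police_public_key):
        if phone_number in INTERCEPT_LIST:
            # Clients messaging this number now unknowingly encrypt to the police key;
            # the hypothetical client flag suppresses the usual "security code changed" notice.
            return police_public_key
        return directory[phone_number]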

There's lots of ways to do this at different points in the security/usability spectrum, it's all just an engineering problem.

Also, bear in mind that police and governments won't find this argument convincing because they can already access every other form of electronic communication and in reality mass leaks of private data have never been an issue.


> in reality mass leaks of private data have never been an issue.

This is just wildly untrue. The 2015 OPM hack (in which an APT stole security clearance records) was a mass spill of private data. The Shadow Brokers leak put the NSA's most sensitive tools on the public internet for anyone to download. And of course we might never hear about collected intelligence that had been purloined by an adversary in the course of an offensive cyber operation.


Yes, obviously normal government agencies get hacked all the time, I was referring to places like the NSA.

And I know about the Shadow Brokers. Not personal information.


> And I know about the Shadow Brokers. Not personal information.

"Tools and methods" are in fact considered more sensitive than any personal information in the IC. The system you're proposing presumes the existence of at least one unhackable organization, and I regret to inform you that there is no such thing.


I think this sub-thread is getting a bit confused.

Firstly, nothing I've proposed demands unhackable organizations. That appears to be a requirement you invented. No security system presumes that! The so-called E2E encryption systems are hackable via several different organizations today: you could hack Meta client teams and insert code into the next releases, you could hack Google/Apple and tamper with the code as it gets shipped via the store, you could hack a phone OEM and insert a backdoor into the devices themselves.

Secondly, if you think government agencies would get hacked more often than tech firms then you may be right, but that's also irrelevant to any points being made here. The goal here isn't to design a perfect system, it's just to point out that the claim that no system can exist at all isn't true. Responding to that with "but your comment doesn't contain a full design doc for a system I personally judge as perfect" isn't going to get us far. Governments don't care if the system is perfect, right? They're OK with some leaks from hacked police departments.

Thirdly, I haven't even been making concrete proposals! Just pointing out how cryptography works as examples. If I was hired to implement these requirements tomorrow I wouldn't do things directly in those ways, they're oversimplified, hence the references to textbooks.

Finally, the point about mass leaks was about "every other form of electronic communication" so responding with a staff database that was stolen by China and never leaked onto the internet isn't a great counter-example (not a leak from a police department, not private citizen communications). Police investigations are targeted anyway, so there's not much to leak. NSA isn't targeted but they apparently can keep their metadata databases secure enough, even if they've sometimes lost control of PowerPoints or malware caught in the wild.


> it's all just an engineering problem

It's not at all, which is why your solution ideas miss the mark completely.

It is fundamentally a "humans can be corrupted" problem.

Everything you describe is a backdoor of one form or another. Backdoors have nothing that prevents abuse other than a triple pinky promise to only spy on the bad guys (as defined by whoever gets to decide who the bad guys are, which changes over time).

Whenever you have a system where some people have access to bypass it, you'll soon enough have some of those people corrupted into abusing it, ordered by higher-ups to abuse it, or threatened in various ways into abusing it. Since people are involved, there is a 100% chance this will happen.

As long as corruptible humans are in the loop (i.e. as long as humanity exists) the one and only way to avoid corrupted abuse of backdoors is to not have any backdoors.


Governments know that the police sometimes go rogue and don't care. It can be cleaned up when they do.

Tech firms also know that programmers can make mistakes whilst implementing complex cryptography, or even be corrupted, yet this is not itself an argument against implementing cryptography!

To repeat once again, we're not debating the ethics of E2E encryption here. Please don't waste time trying to convince me that E2E encryption is a good idea, because (if it were real and worked) I'd agree with you! But your argument is a Type 1 No by the scheme presented in the article. It is a "we'd really rather not" social argument.

The problem our industry has is typified by the article. Too many tech people argue that giving police access to WhatsApp encryption is actually a Type 3 "it can't be done for fundamental physical reasons so it doesn't matter who demands it" problem, but that isn't true. Remember that governments don't care about E2E encryption whatsoever in any way. They would much rather ban it as a source of unnecessary problems. If tech firms claim they can't turn it off completely, they're obviously lying and that will just enrage governments. If tech firms claim they can't keep it whilst providing targeted access, governments don't care about that either. After all, email isn't end-to-end encrypted, nor is SMS, nor are normal phone calls, nor are letters. Why should WhatsApp be any different?

In reality it actually is possible to design a system that stops people with only server-side access to WhatsApp reading messages, whilst still breaking if the clients are compromised, and which allows police to have targeted levels of access without any risk of universal master key leaks. There are lots of ways to do that. You can use secure enclaves, zero knowledge proofs, or more exotic algorithms. But it's also not really relevant to the point I'm making, which is about the No Type being presented to governments. There was surely a better example that could have been chosen for a Type 3 No.


> Too many tech people argue that giving police access to WhatsApp encryption is actually a Type 3 "it can't be done for fundamental physical reasons so it doesn't matter who demands it" problem, but that isn't true.

This is changing the goalposts.

Giving police access to whatsapp chat is trivially easy. But that's not the question.

The pro-surveillance people say "Give the good-guy police (whoever they are) access to everything and keep it secure from the Bad Guys Whom We Oppose (whoever they might be this week)". That one is indeed impossible due to the laws of information theory.

> without any risk of universal master key leaks

You're looking at a technical problem when this is not that, it is a humanity problem.

It's relatively easy to avoid e.g. master key leaks. That's an irrelevant implementation detail. What matters is that if some set of people have unfettered access to bypass all protections, then all the Bad Guys will also have that access soon enough because you can't keep people from getting corrupted/threatened. No matter how hard you wish, you can't. Humans are like that.


> That one is indeed impossible due to the laws of information theory ... it is a humanity problem

So, is this impossible due to human nature or the "laws of information theory"? Which is it? And if the latter what "law" are you thinking of, exactly? Can you name these laws?

Here's the problem: it's neither impossible mathematically nor practically. Remember the kerfuffle over the NSA's backdoored Dual_EC_DRBG algorithm? That was a very pure textbook example of what's possible: it would have allowed the NSA, and only the NSA, to decrypt TLS streams that used it. According to you that would have been physically impossible due to violating some sort of law, but it's not. Cryptographers know how to construct such systems; there are many ways.

But the existence of such solutions doesn't even matter. Lawful intercept abilities have existed for ages, governments will happily accept a solution of just turning off E2E encryption entirely, and the possibility that governments or tech firms will get hacked doesn't bother them because that's a transient problem that can be made hard by throwing money at it.

They also don't care about corruption because the countries demanding this have low levels of it. Governments are the original experts in corruption, you might say, and have evolved a lot of different mechanisms to fight it. Finally, remember that all these systems are already hackable or corruptible. Pay off someone who works on the WhatsApp mobile app team and unless Meta's internal controls detect it, it's game over.


> So, is this impossible due to human nature or the "laws of information theory"? Which is it? And if the latter what "law" are you thinking of, exactly? Can you name these laws?

Yes. It's simple, if you send information in a recoverable way to another entity, they can recover it. If that entity involves humans, they can and will with 100% certainty be corrupted or threatened to obtain the information improperly.

> Remember the kerfuffle over the NSA's backdoored Dual_EC_DRBG algorithm?

Amusing that your example is a counterexample to your thesis. Exactly. Backdoors never serve only their would-be masters. That's the impossible part.

If this is not blindingly obvious by now, I fail at being able to explain it better.


The backdoor in the Dual_EC_DRBG algorithm was detected, but it was never usable by anyone else: opening it required a secret only the NSA had (roughly, the constant e relating the algorithm's two curve points, P = eQ, which lets whoever knows it recover the generator's internal state from its output), and that secret never leaked. Only the NSA could decrypt streams that used this PRNG. To everyone else they remained undecryptable.

So it's not a counter-example and your amusement is misplaced.


It’s naive to think that social engineering is so successful due to a lack of training, and that training will thwart it.

My old boss and I were recently laid off because my company (Livingston International) has been doing massive layoffs, because the company has been losing clients and money over repeated phishing attacks. We always shat on the assumed tech-illiterate person or persons responsible for us now having to sign in multiple times per DAY with MFA. Right before we left he mentioned this suspicious email he got the other day… and he clicked on it. It was one of those internal “we got you! Be careful next time!” emails. Come on man.


The tech companies themselves aren't able to read the messages, and no process exists to do so -- that's a necessary part of the security design and a core tenet of e2ee.

Sure, iCloud backups and photos are technically readable (they're not under e2ee) but policies and procedures exist to allow law enforcement to access them as well.

What you're asking for is that e2ee be eliminated in favor of a process-heavy solution. But at the end of the day, any human-dependent process can be broken by social engineering and a lack of vigilance.

Yes, an e2ee session can be broken by shipping a hacked client update to an endpoint. But that's much harder to accomplish, human- and process-wise, than obtaining the right key from the right person.


Having the key just isn't enough: you also need a copy of the traffic, and you need to break the transport encryption. The latter likely requires rewriting the data stream, and some cooperation from a vendor (maybe not the vendor whose service you are targeting, though). A one-time key leak without access to interception infrastructure is probably not that useful.

Key leaks are also relatively easy to prevent, because secret keys are not relevant to investigations, so they never have to leave the interception framework or be handed to a human operator. Sure, the infrastructure could be compromised, but that's already an extremely severe issue if you just have metadata in it.


I'm not asking for anything. I'm pointing out what's possible. As for the last sentence, we don't know what their internal procedures are so we don't know what's harder.


Why can't the key switching code already be in the client?


> in reality mass leaks of private data have never been an issue.

France, 1940, religious census data.


My personal definition of end-to-end encryption, which I believe is widely shared, limits decryption to the recipient of the message.

The protocol you have described is not end-to-end encryption by that definition. Sure, it's possible to do this kind of escrow encryption, but that's not what Meta and Signal are selling, and it's definitely not what I'm buying.


The law in question prohibits e2e encryption.

It's perfectly viable to create a messaging application with encryption that isn't e2e. Sure, it's a ridiculous thing that no democracy should demand, and will currently destroy any brand that attempts it, for good reason. It also won't be effective against any mildly persistent threat (not much of a change over our current implementation, really). But it's something perfectly viable to create.


It's still e2ee, it's just that the government is recipient of every message (in addition to the recipients you specify).


If the message is being decrypted before it reaches the target recipient -- i.e. the second "end" in "end-to-end encryption" -- then it is by definition not end-to-end encrypted.


The idea is that the client is doubly encrypting the message for both the target recipient and the government, and that if the client doesn’t do that it’s banned from the system. So it is still end-to-end encrypted on the server and is not being decrypted by anyone other than the recipients, they’re just forcing you to send every message to the government as well.

To be clear, I think this is a horrible idea, but it’s not technically impossible.


The government is not the “end”. The message is decrypted prior to reaching its end. It’s not e2ee.


That definition is insufficient to capture a secure system. It's not enough for a system to do this. You have to actually know / be able to prove it's doing this.

It's worth repeating this because tech firms have made the definition so confused, but encryption was developed to let you use a trusted device to communicate over an untrusted medium (radio). If your trust in the communications medium is the same as your level of trust in the device, which for so-called "E2E" messengers it is, then the whole system doesn't make any sense.

What Meta/Signal sell is kind of a smokescreen because they control both the clients and the medium and the key directory too, so nothing is really limited. They can update the logic at any moment to disable the encryption for you, the person you're talking to, or everyone, and nobody would ever know. They can also update the client to upload your private key if you're being specifically targeted, or use a weak RNG or suppress a key rotation notification or any one of a million other things. In fact, they might have already done that without anyone noticing. I pointed out in other posts that they already undermined one of the most basic properties of a modern cryptographic system (that the adversary can't tell if you're sending the same message twice) and they did so for typical government-type reasons of controlling rumors and misinformation, as they see it.

For E2E messengers to work conceptually they'd need to allow arbitrary third party clients, so you could choose to trust that client and then use the WhatsApp/Signal networks even though you don't trust them. Or at the very least, they'd need a very sophisticated and transparent auditing programme. They won't do either of those things.


If a company has the means to decrypt a particular user's data, it has the ability to decrypt all users' data. But the argument is not about that; it's about privacy, and how exceptions to privacy have always led to a slippery slope where they're used for more purposes than originally intended.

Btw, end-to-end encryption, by its very definition, means that only the sender and receiver can decrypt it. Your scheme is basically saying that the police should also be a receiver of all messages...


This is cryptography 101. Asymmetric crypto lets you encrypt a message using a public key without having the private key to decrypt it.

Remember that the encryption is being done client side by apps these networks control. E2E is therefore sort of fake to begin with because WhatsApp is not only the servers but also the client. You can't mix and match, so you have to encrypt messages using software provided to you by the "adversary". E2E encryption is therefore more of a tool to control bad insiders and negotiate with governments than encryption as conventionally understood.

Also remember that tech firms run the public key directory. Almost nobody verifies public keys, and even if they did, they'd be doing so with apps controlled by the tech firms, so you can't know the verification is done properly anyway. And the keys can change at any moment, with your only way of knowing it happened being UI controlled by the tech firms.

Still, even if clients and servers were separate, nothing stops clients from encrypting messages using a well known government public key and attaching that along with the e2e encrypted version.


The point the parent was making is that if Apple decided tomorrow they wanted to implement a backdoor for law enforcement there's no way we'd know until the evidence starts to show up in court cases (or someone at Apple/LEO leaks the knowledge). The system is a closed loop and proprietary. We're taking their word for it.


That does not change anything: asymmetric or symmetric, the same principles apply. E2E means that only the recipients have the ability to decrypt the messages; so far we seem to agree. As I said, and you confirmed, for your scheme to work, messages would have to be encrypted with a known government PK, but that is not privacy. Now the government has the ability to decrypt everything, and can store messages indefinitely and use them for whatever it decides now and in the future. That is a surveillance state by definition.


But nothing stops the clients lying and putting fake mundane plaintext in the government "copy"


Not the government's problem to solve, and tech firms can easily solve that by changing how their encryption works (e.g. using secure enclaves, or remote attestation).


> If a company have the means to decrypt a particular users data, they have the ability to decrypt all users data.

The messaging company can embed the police's encryption key in the app but not have possession of the corresponding decryption key.

> exceptions to privacy have always led to a slippery slope

That's a reasonable argument. But to the GP's point, that's not a technical argument. It's just another argument that the policy is bad for normal "bad policy" reasons. It has nothing to do with the math.


> The messaging company can embed the police's encryption key in the app but not have possession of the corresponding decryption key.

Once three people have access to a secret, it isn’t secret anymore. Once hundreds of thousands of police officers have access to the private key, it will leak, and everyone will be able to read these messages.


> it will leak, and everyone will be able to read these messages.

Again, do you see how this is not a technical argument? It might be a good argument. But it's not an argument about the math or the computer science. "We can't trust the police" is a social argument, not a technical argument. A math or CS degree will not help you understand this argument.

Anyway, why would the decryption key be in the hands of "hundreds of thousands of police officers"? Especially when the decryption key itself is useless without access to the encrypted messages themselves. If this were implemented, it's much more likely that the police would build themselves a web portal or something through which they could access people's WhatsApp messaging logs. The crypto could all be handled on the data portal backend.

A much stronger argument against this sort of thing is the governmental slippery slope argument. If the UK police gains capabilities like this, you bet every other country will (reasonably) demand similar access. Apple / Meta would have to decide which police / security departments to work with, and that's a very complex problem. Who do you trust? Hungary? Bulgaria? Russia? Iran? Egypt? China? Brazil? Where, exactly, is the line? And should access be revoked after a coup, like in Niger?

It's much easier to just refuse all governmental cooperation. It protects your brand. And it makes it much simpler to justify refusing access to police departments who you don't trust.


"This security scheme wouldn't work because of these social factors" _is_ a technical argument. Security is very specifically about making sure the right people have access to a resource and the wrong people don't. Social aspects are inherent in this. Therefore, in the context of security, social arguments are technical arguments.

Arguing that the myriad local police departments of the United States in particular do not have the security posture required to keep access to a data portal secure is a technical argument against government-backdoor encryption.


The article describes “type 3” bad ideas like demands to work out new mathematics:

> “Work it out” is generally a demand to invent new mathematics, but sadly, mathematics doesn’t work like that.

E2E + a backdoor for a particular police department isn’t the sort of bad idea that requires new mathematics.


The example is

> For the last 25 years, engineers have said ‘we can make it secure, or we can let law enforcement have access, but that means the Chinese can get in too” and politicians reply “no, make secure but not for people we like”.

Insecure police departments will inevitably leak the backdoor keys. It's not possible to limit who can use decryption keys based on who they are and not just possession of the keys under our current understanding of how encryption works. If you assume that the police will never leak keys then sure it's easy. But arguing about whether or not social factors like police department computer security is good enough to safely store keys is a technical argument about this technical problem.


Thanks for trying so hard here.

It's quite depressing how many people here don't seem able to separate "I don't believe police can be trusted with this power" from "it is mathematically impossible to give it to them".

This is a nice clear example of how experts talk themselves into lying to the public for the greater good, as we saw so often in the past :(


> Btw, end-to-end encryption by its very definition, means that only the sender and receiver kan decrypt it. Your scheme is basically saying that the police should also be a receiver of all messages...

And serverless literally means "without servers", and yet...

Point being, scope is a free variable. "E2EE" that's managed by a central server is already stretching it, yet people accept it. They'll mostly accept excluding law enforcement from the scope of eavesdroppers "E2EE" protects you from, too.


Centrally managed E2EE is not end to end if the server can decrypt anything. The definition means that the keys to decrypt only exists at either end.


Similar story with people claiming early victory on online age verification. A common claim is "web age verification can't work unless you're happy giving every porn site your name and credit card". Clearly not true. Federated authentication is very old tech, and the same techniques that allow you to protect your identity with an Apple sign-in can also be used to allow sites to verify that the user is an adult with some government account, but nothing more. I agree that it would likely end up expensive and marred by bureaucracy like most government IT projects, but at a technical level it's sound.
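
In case it helps, the basic shape is just a signed attribute assertion. A toy sketch with Python and PyNaCl signatures (field names invented; a real deployment would use something like OpenID Connect, with pairwise identifiers and anti-replay protections on top):

    import json
    from nacl.signing import SigningKey

    gov_key = SigningKey.generate()   # held by the government identity provider

    # The identity provider signs only the claim the site needs, not who you are.
    token = gov_key.sign(json.dumps({"over_18": True, "nonce": "abc123"}).encode())

    # The site holds only the provider's public verify key.
    claims = json.loads(gov_key.verify_key.verify(token))
    assert claims["over_18"] is True   # the site learns age status, nothing else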


I agree that some claims about e2e are misleading. That said, you could interpret "mathematically impossible" very charitably as risk assessment math. It's mathematically impossible to improve investigability while also improving resistance to spying by adversaries. Their relationship must be inverse. I agree that most people would interpret "math" as cryptography, and that it's better to make a clearer distinction between cryptography and risk assessment maths.

However this goes both ways. People demanding a solution think that you could make something investigable while keeping it completely airtight to adversaries or abuse. There is no escaping the fact that this is mathematically impossible in terms of how risk works. You can compromise security in favor of investigability, or you can improve security at the cost of investigability. And it's also important for the lawmakers to understand that each compromise is not gradual. It's drastic. If you went from 1 party having a key, to 2 parties, you've probably doubled vulnerability surface. If it's 3 parties, and one of those parties is an organization with lots of employees, you've probably exponentially increased vulnerability surface by orders of magnitude. This math does apply here.
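
To put rough numbers on that last point (illustrative assumptions only: independent key holders, each compromised with probability p per year), the chance of at least one compromise is 1 - (1 - p)^n:

    # Illustrative only: assumes independent holders, each compromised with probability p per year.
    def p_any_compromise(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    for n in (1, 2, 3, 1000):
        print(n, p_any_compromise(0.01, n))
    # 1 -> 0.01, 2 -> ~0.02, 3 -> ~0.03, 1000 -> ~0.99996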


Your proposed solution is an insecure system that can and almost certainly will be hacked. That’s the point. You could make insecure encryption pretty easily. What you can’t do is make something that is secure and yet also has a key that gets handed out all over the place. In the last decade alone, there have been all sorts of examples of exactly these kind of security keys being leaked.


There is no key that gets handed out all over the place. Numerous people are trying to explain that on this thread. Even a purely textbook cryptography setup would involve the private keys being generated inside an HSM and never leaving it. Only the public key would be widely distributed. A public key lets you encrypt a message but not decrypt it.

The hard part here isn't key leaks. System critical keys are commonplace in our society and virtually never leak, exactly because they don't tend to leave dedicated hardware. For example e-Passports are signed with long term government keys, they don't leak all the time. The US DoD runs a large scale private PKI, no problem. Our society has even got pretty good at physically giving people private keys in such a way that they still can't access them: credit cards, SIM cards, games consoles. The hard part is the workflows around them. Ensuring the HSM only decrypts messages for authorized users and things.

Even if you assume HSMs are constantly getting ransacked, governments don't care. They don't even necessarily want to have their own key management to deal with at all. A web portal that employees log in to, type in a phone number and then see the logs is perfect for them. Make it be dedicated hardware supplied by Facebook itself if you want, with login systems as secure as they use for their own employees. Governments just do not care about these details. Type 1 Nos, to use your lingo.

The hard part of such a system is defining your precise security goals and then implementing it in ways that all the goals are met simultaneously. So called "E2E encryption" isn't really, we all know that, so there's lots of flex to define systems that meet the same goals in different ways especially if you're willing to roll with good-enough type solutions e.g. assume a trusted client (which e2e messengers do already for things like their forwarding counters).


if you introduce a key that can decrypt all messages, what you have is not end to end encryption. Then you might as well just not do end-to-end, since the service provider can read all messages using the key they gave the police anyway


That "nerd harder" thing is something I keep coming across in the professional world and it is something of a paradox. It comes from someone who knows you are intelligent and more knowledgeable in a given area than they are, and who wants you to solve their problem, but they are unwilling to accept that your intelligence/knowledge extends to whether or not something can or cannot be done reasonably (Reasonable here being excluding things which are technically possible, such as lifting Rhode Island into some kind of Earth orbit, but aren't really quite feasible or practical).

One guy I worked for had a bad habit of starting unicorn hunts and a lot of "this looks easy from fifty thousand feet" foolishness with the phrase, "I know you're real smart but ..." and whatever followed was more a statement of how convenient it would be if something were true, rather than if it were true, possible, and so on.

Nerd harder, nerds.


I had a boss who wanted to learn how to decrypt another app's data on a phone in order to access sensitive information. I told him it wouldn't work, since that defeats the purpose of encrypting that data if we don't own that app. What did he do?

Installed Android Studio on an i3 laptop and tried to use the emulator with some random tutorial from some MITM app that claimed it could do the job. Then linked me some other tools that explicitly said they couldn't do this outside of device emulation or rooting the device.



My favorite statistics quote by John Tukey seems relevant -

"The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data."


The thing about voting machines is that they can work, as long as they also provide a paper trail that determines the actual result of an election or referendum.

This means initial results will be available quickly, followed by actual results a day or two later. This takes the pressure off the counting process, which should help prevent miscounts. The machines also solve the real problem of unintentionally spoiled ballots. So I don't think the concept of voting machines is entirely incompatible with secure elections, as long as we keep the paper trail authoritative.

I have had to explain the obvious flaws in electronic voting to various family members ("why can't we just vote online with our government login" being the most common one). When blockchain bullshit started appearing in public media, people started pretending they'd finally found a problem that blockchains solve, only to be quickly shut down again.

I don't think people are aware of how many layers of protections the voting system has and how well thought-out it is. Every year naive politicians try to call for modernisation, and every year they find out that paper voting is actually the best option we have been able to come up with.


> However, I don't think the concept of voting machines is entirely incompatible as long as we keep the paper trail authoritative.

The problem is impossible to solve if you want to respect the following requirements:

- No one can know who you voted for

- Anyone can oversee the process and understand why fraud can't happen

There is simply no way to prove voter anonymity with an electronic machine to someone who lacks the technical knowledge to audit the system.


The ones I saw seemed fine? You fill out a paper ballot, and you submit it to the worker at the end who scans it and drops the ballot in the box in one motion.

You have a paper trail, it's anonymous, and it's pretty easy to understand.

I guess if you modified the firmware on the scanner to disclose the vote counters continuously, and the worker at the end knew me by sight (the worker I showed my ID to is on the other end of the room), you could find my vote. But you could also hide a camera in the booth, which is easier.


> But you could also hide a camera in the booth, which is easier.

It is also easier to check whether there is a camera in the booth.


Is it? Cameras can be really small.


You can see it this way: if you can time people voting, you can hide the camera anywhere in the voting room. Checking the booth is always going to be easier than checking the room.

There is also the issue of voters selling their vote. It's pretty easy for them to wear a tracker that tells you when they were in the booth. On the other hand, with paper ballots, the buyer has no way to check the vote, since marked ballots are null.


The first problem is a non-issue. You don't need the voting machine to do the authentication if you let a human control the ballots that get stamped/marked/whatever.

The second problem also isn't a problem, because the machine doesn't need to be right. The paper ballot is authoritative; the machine count is just an indication. The ballots are still counted by hand afterwards using the normal process. That means that as long as citizens are able to review the manual counting, they don't need to know or care how the voting machine works.

And yes, that does probably negate 90% of the advantages of voting machines.


> The first problem is a non-issue. You don't need the voting machine to do the authentication if you let a human control the ballots that get stamped/marked/whatever.

The machine can register what vote was cast at 13:42. From there, the whole idea of anonymity disappears.

> The second problem also isn't a problem

The impossibility to have the second criterion is a problem because it prevents the first criterion from being verifiable by anyone.


> This means initial results will be available quickly, followed by actual results a day or two later

Actual results from paper ballots in French elections are in around an hour or two after voting closes. Yes, it relies on volunteers counting and on people watching, but it's much more effective than anything that would involve machines. You can literally watch your ballot box all day if you're suspicious.


France's voting system is bulletproof when it comes to vote integrity. However, it has one significant drawback: it is hard to ask more complex questions than voting for x/y/z or yes/no questions.

One could say that it is a good thing (since it is hard to have a good public debate on many topics at once, voting on many topics at once means you collect people's preconceived opinions rather than people's informed judgement), but many US states ask many questions at once when people vote, so they would have to significantly reduce the share of direct democracy in their system.


That's literally how Dominion voting machines work.

And hand recounts are only for verification, you shouldn't rely on them because they're far less accurate.


The errors in hand recounts are rare because there are lots of mutually suspicious people looking at each ballot.

And when they do happen, they are uncorrelated. A single software error can easily change the result of the election; a thousand human errors cannot.


> And hand recounts are only for verification, you shouldn't rely on them because they're far less accurate.

Voting machines can be altered. Go to any big hacker conference and you can learn how to hack the common models in thirty seconds. Humans may be fallible, but I'll always trust those fallible humans over anything automated.


And hence - use random hand counted samples for verification.

But nobody has proven that the Dominion machines changed a single thing; in fact, the hand recount in Georgia showed as much. Which is why the "election fraud" claims then moved on to more outrageous claims.


Blockchains with zk proofs fix this tho.


You should go to Brazil and try your luck there:

https://arstechnica.com/tech-policy/2018/06/in-a-blow-to-e-v...

From what I gather they banned any paper trail verification, putting 100% trust in the digital registry of the votes.

Must be the first unhackable system in history. /s


Besides, criticizing the system is currently de facto a crime that can land you in jail.


There is obviously no such thing as an unhackable system, but when people mention Brazilian elections and its voting systems, it's incredible how everyone assumes it's a system developed/reviewed/audited by inexperienced people.


> resistance to voter coercion and vote selling

Thanks for mentioning that issue, it's something that I don't hear mentioned enough in online/distance voting debates (maybe it just means that I'm not involved enough, but anyway good to hear this mentioned). It's so critical and at the same time fairly orthogonal to all the encryption / zero-knowledge proofs / quantum resistance and other cool math that nerds love to nerd about.


There are downsides to being too against those things, because it'd ban at-home mail-in voting, and having that is worth a lot of downsides because it lets everyone actually look up what they're voting on.

Not to mention, if everyone has to travel to vote, you're mostly going to get retirees and people with a lot of free time voting.


I do consider at-home mail-in voting as a bad risk - locally we have a solution that involves separate 'voting stations' (just as any other voting station, officials + observers from opposing parties monitoring) visiting the people who for various reasons are unable to come to vote, and collecting a secret ballot on-site from e.g. bed-ridden sick people; and special voting stations for people who can't leave - e.g. hospitals, prisons, army bases. Of course, that won't help if large numbers of people aren't coming to vote because e.g. the lines are too long or they have to work and can't get to vote, but then you should fix these problems directly.

It is important that people are able to vote without being controlled by their family members or employers, so any unconstrained remote voting should be an exception that's minimized as much as possible.


> Of course, that won't help if large numbers of people aren't coming to vote because e.g. the lines are too long or they have to work and can't get to vote, but then you should fix these problems directly.

The issues are:

1. young people don't vote because they don't care, so it should help to make it easier for them.

2. once you're in the voting booth, it's too hard to remember who you decided to vote for when there's tens of choices and ballot props, like in California.

Mail-in voting also helps prevent the issue of local governments trying to sneak stuff past the voters by having offseason elections for it. Machine politics has died off in most places, but it's still strong in New York for instance. If everyone gets a mail ballot they'll notice.


Indeed. I supported the absentee voting push during the pandemic, but one thing a lot of my friends don't understand is my opposition to absentee voting being normalized going forward.


At-home mail-in voting is also where the biggest fraud risk is. One of the very rare UK electoral commission cases involved illegal registration and fraudulent postal ballots: https://en.wikipedia.org/wiki/Erlam_v_Rahman


Which is solvable by having enough voting stations spread out, and making voting day a national holiday or holding it on a Sunday.


While I wouldn't object, it's not necessary. Voting in the UK is always on Thursdays IIRC, but there's lots of polling stations and they open early and close late. Queues are usually only a few minutes. However postal and proxy votes are still available


note: there's e-voting where you go vote at a voting booth and enter your ballot in a machine (with the advantage that the votes can be tallied faster) and there's e-voting where you use your browser or an app on your phone to vote remotely.

People tend to conflate these two things but they are quite different. Each one has their own set of problems/challenges. On top of the voting infrastructure itself, you also have to think about how to prevent people from being denied the right to vote, how to prevent issuing two votes, etc., etc. Voting infrastructure and logistics (whether electronic or not) can be very complex.


I remember that time, it was the only time in my life I was ever concerned enough to call into Ireland's most popular radio show "Joe Duffy", to raise my concerns about the rollout... They described me on air as a "computer expert against e-voting" ;-) I think I was a CompSci student at the time, but close enough...

As I recall a lot of ordinary folks were for it originally, until the pitfalls were raised.


You talked to Joe!

The Joe Duffy show is quite strange, because I've heard people of all ages and all demographics calling into it. It really is the way the nation talks to itself. Or, at least, it was, when I was listening ... fifteen years or so ago.


Yep, this was circa 2002...


The verifiability is the biggest one. One could conceivably develop a voting system that had perfect accountability, secrecy, and integrity, but would be far, far more difficult to work out by hand. A zero-knowledge rollup is not something your average poll worker could work out on scratch paper. Paper ballots counted by hand can and do achieve all the things we desire in election systems.


First off, where I've voted they switched from punched cards to scanning paper ballots. It's all typically run by three middle-aged women with a printed book of who's registered to vote. You just tell the first one your name and they find your entry and you sign next to it. The next one hands you a scantron sheet. You mark it up. Then the third one feeds your sheets into the machine as you watch. The machine records your votes and drops the sheets into a bin in the machine. That's a system I trust, and it has the positive of civic engagement. And fundamentally it's not broken.

You now also have the option of mailing in a ballot. That's easier to corrupt I'm sure. But at least it's still paper.

All the fully computerized systems seem really really sketchy to me. Especially in low trust cultures where underhanded stuff is normalized.


Couldn't the Scantron display one thing and record another? If the paper ballots aren't manually recounted, would anyone ever know?


The big thing is you can do an audit with original documents created by the voters themselves. Which is the problem with all the 100% electronic systems: you can't, because the voter doesn't create a master copy of their vote.


> A zero knowledge roll up is not something your average poll worker could work out on scratch paper.

There's no need to do calculations by hand. An electronic voting system only needs to be verifiable by the use of public data. Everybody walks around with a computer.

If you manage to create a system that does that, well, the entire world is quite interested in it.


> Paper ballots counted by hand can and can achieve all the things we desire in election systems.

Sometimes, it's about efficiency as well. I've been a ballot worker here in Germany for the 14 years since I was eligible.

There are simple elections: electing a mayor, for example. You have a ton of DIN A4 paper sheets, but you simply sort them into heaps, count the heaps, and you're done. These don't really benefit from computerizing them, other than speed, but even for a larger polling station, you're done in an hour or two.

Then there are more complex elections, like the German Bundestag, where you have two votes on one DIN A4 sheet: the "Erststimme", where you vote for the directly elected representative of your district, is just as easy to count as the mayoral elections. But then you have to go through all the ballot papers again to count the "Zweitstimme", where you vote for the party lists that make up the other half of the Bundestag. While many people vote the same in both, which lets you short-circuit during the first phase of sorting, IME around half the people vote differently (say, they vote for the SPD candidate with their first vote because he's the one most likely to beat the CDU conservative, but with their second vote they choose the Greens or the Left), so it's still a sizable amount of paper you have to touch twice.

And then there are the horrors: the European Parliament election [1] or regional/city elections [2], where each ballot paper can reach almost 1 m² in size making them very difficult to handle, and you have a ton of ways to distribute your votes across parties ("Panachage" [3] and others), so for the (again) about half of people actually using the complex distribution you have to painstakingly check and count votes. Having these computerized would eliminate so much trouble, particularly as there's always 5-10% of voters who mess up their maths, rendering sections or their entire vote void. And counting these can last for days of very mentally intensive work.

On top of that come specialized voting schemes such as ranked choice and its countless variants, where some are very difficult to execute without having a computer that can run through vote combinations as a batch job.

[1] https://www.pnp.de/archiv/1/europawahl-altoettinger-druckere...

[2] https://www.sueddeutsche.de/muenchen/kommunalwahl-muenchen-w...

[3] https://en.wikipedia.org/wiki/Panachage

[4] https://en.wikipedia.org/wiki/Ranked_voting


Paper ballots do not mean absolutely zero computers are involved. I do not know about Germany, but in the United States we have voting machines that print paper ballots, and those are submitted to a vote-counting machine, but you can verify your vote before and after printing. Voting machines are important for accessibility reasons. Not everyone can see or even read English, after all.

The vote-counting machines are then double-checked by hand. A blockchain-based solution would take significantly longer to verify by hand without a computer, and the number of people who could do the cryptographic calculations in a reasonable time frame on scratch paper would mean we would essentially be trusting the results of the entire election to a few people. The average person can count and easily understand that the winner is whoever gets the most votes.

I would like to see these machines open sourced and confirmed to never be running on the internet, though.


I agree that computerized voting can make some voting schemes viable, and I feel sorry for all the hard work needed to handle a big city's worth of those 164x60 cm ballots, if I got the German right.

However, about that 1 m² ballot: maybe the problem is in the rules, especially if they end up with 5-10% of the voters making mistakes and voiding their votes. Something different would result in fewer mistakes and lower costs (printing, distribution, labor, storage, etc.)


I'd say generally the German vote counting system is efficient enough but perhaps sometimes understaffed (I guess more people could be drafted if that really helps). At the same time, because it is quite manual and involves a fair bit of people with disseminated knowledge of local results, it would be quite tough to (wholesale) manipulate without people noticing.

Maybe some level of inefficiency is actually not a bad thing.


Is this inefficiency just another incarnation of proof of work?

Anyway, I think that the inefficiency caused by distribution across many unconnected agents (regions, committees, people) is key to achieving security in a voting system, so yes, it's not only not bad but inevitable.


Not proof of work, but rather the only way things could have been done way back when, plus the hurdles for change being high.


Could you elaborate on how this could be proof of work?


I feel compelled to mention the still very relevant xkcd on voting software:

https://xkcd.com/2030


I've come to the conclusion that the only way to convince these people that voting machines are a bad idea is to actually engage in election manipulation and then uncover it after the election has already been accepted. Consequences are the only thing they understand.


Could one not still do voter coercion and vote selling using the existing postal vote system?

You get people to do a postal vote, and they show you what they are voting for and you pay them, take away the ballot and post it yourself (or threaten them and do the same thing)


That means you know Rop :)


How did you get that body accepted as an idea as opposed to a "kick into the long grass" solution?


I would argue that there's a fourth kind of ‘no’, when tech decides that enough is enough, and says "No, fuck you."

We've reached that point in the UK, where the government has proposed draconian legislation[1] that would allow the government to force companies to create backdoors into encrypted messaging services.

As a result, Whatsapp[2] and Signal[3] have said that they will pull out of the UK, and Apple[4] has said it will remove Facetime and iMessage from the UK if the legislation passes.

1: https://www.eff.org/deeplinks/2023/07/uk-government-very-clo...

2: https://www.theguardian.com/technology/2023/mar/09/whatsapp-...

3: https://www.bbc.co.uk/news/technology-64584001

4: https://www.bbc.co.uk/news/technology-66256081


This is explicitly the point of the article, really: the author’s third type of ‘no’ is “we literally can’t do this” — and that’s the situation in the UK. It’s not that Meta, Apple, and Signal are saying “fuck you” to the UK; it’s that they’re saying “there is literally no way for us to comply with this legislation, so our only legal option is to leave your market.”

I agree that there’s a sort of implied, under-the-breath muttering of “…you morons”, but if there were a way for the messaging companies to comply they’d just…comply. Angrily and noisily, perhaps, but even in its current pathetic state the UK is too big a market to ignore.


The author's third type of "no" refers to the technical feasibility of compliance. OP, on the other hand, is referring to the refusal of the idea itself. In the case of encryption, the technical side of things is insignificant compared to the Orwellian nightmare these sorts of laws intend to create. It would still be a terrible idea even if the technical drawbacks were absent, and one doesn't even have to be a nerd to see that.


> “there is literally no way for us to comply with this legislation, so our only legal option is to leave your market.”

as much as I love my encrypted chats, neither of those sentences is true.

Devil's advocate would say:

1: Tech companies can obviously share the keys with the government, like they do in a group chat; it would simply mean "GVT has joined the chat" [1]. It could also work the same way wiretaps have always worked: under a very strict legal framework and behind a colossal amount of bureaucracy to authorize their use.

2: Tech companies are not leaving those markets, they are simply buying time. They don't want to do the work, because it's work they will not profit from, but there's nothing on the technical side preventing them from doing it.

[1] have you ever seen this on an Android device? https://i.imgur.com/XUxiUUr.jpeg


No. The problem is that as a messenger company you have to comply with different markets at the same time. If the biggest market (the EU) would punish you for breaching privacy and a small market (the UK) wants to punish you for the polar opposite, you can either:

- develop a separate app for that small market and break that promise (and have the headache of figuring out just how to treat communications that cross the border between those two markets)

- choose the bigger market, retreat from the smaller one, and let the small market decide if they really want their special deviating regulation when it now means "those politicians took your messenger away" and there is no EU bureaucrat to blame for it.

Notice how this doesn't even require any particularly strong political stance by the messenger organisation? The latter just makes more sense from the standpoint of an organization that cares about its use of resources.


> No. The problem is that as a messenger company you have to comply with different markets at the same time.

Well, then you could technically say they can't keep the keys private, since some places force them to share. It's definitely a "can but won't" scenario.


> (and have the headache of figuring out just how to treat communications that cross the border of those two markets)

You could also ban the ability for UK citizens to have chats with EU citizens, which I imagine some of the kookier UK conservatives would love.


You could do that, but then you cut a whole mode of interaction between e.g. UK parents and their kids who study or live on the continent.

Either way this is a measure that (like many conservative talking points) sounds good on paper ("law and order"), but once it becomes reality it won't win you any prizes, except negative ones.


The response to your devil's advocate argument is: giving you the keys is not actually a solution, because now every foreign government is racing to break, steal or buy those keys, and not only can we not guarantee that it won't happen, but we can't even discover if it happens, or when. We can build a secret entrance, but we cannot guard it!


Why can’t they know when someone uses the secret keys?

Perhaps the messages would be individually encrypted and the keys would need to be used in order to retrieve the message encryption keys. And to do this, they would need to provide an explicit reason and only get the limited info that the warrants etc. would support and the reasons would be stated in every case.


The point is that today, the key isn't in Google's or Amazon's or Meta's servers, but on the phones of people. That means that you literally don't have the key if you don't have the phone. And governments don't want that, they want the keys in order to eavesdrop but without being noticed (and stealing the phone would get you noticed).

So your only option to comply with this is to remove the phone-only key storage option and move all of the key into your servers, which is what we talk about when we mean "breaking end-to-end encryption".

The issue is that to comply with the rules, you have to secure that server so only the good guys can get in, and only if the warrant is legit, but also to allow fast access for time-sensitive cases such as terrorism and secret cases such as NSA investigations. You also have to make sure that there's absolutely no way for people to access that server if they don't have the approval.

Oh, and also that server / these servers contain the keys to read every message from every citizen of your country (including politicians), which is probably worth as much as your GDP.

So you need to build the equivalent of a safe containing one trillion dollars that can't be accessed for any reason except all of the reasons mentioned above. Except that this theoretical trillion dollars is made of special dollars: if you mess up and let people in without anyone noticing they got in, they can "steal" the trillion dollars and start spending them, and nobody would notice that they're being spent. And just about every country on earth would love to "borrow" your trillion dollars, especially if you can't ever realistically prove they did it.

Easy, right?


Has there ever been a public key sign-countersign encrypted tap method?

I.e. Authorized tap requestors have keys (law enforcement, intelligence) and sign a request (including timestamp), storing a copy for audit.

The approval system (courts, FISA) validates that request, countersigns if they approve (including timestamp), storing a copy for audit.

The system owners (messaging services, etc.) then validate both signatures and provide the requested tap information, creating a tap record (including content scope and timestamp), storing a copy for audit.

Ideally, then all audit logs get publicly published, albeit redacted as needed for case purposes.

Part of the central issue is deciding "Who should be responsible for security?" Imho, if governments want to mandate a scheme like this, it sure as shit shouldn't be the tech companies. The government should have to manage its own keys, or deal with consequences of leaking them (while allowing the tech companies to retain independent records of individual requests).

As much as it pains me to say this... this wouldn't be the worst use case for a blockchain...
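To make that flow concrete, here is a minimal sketch of the sign/countersign/audit chain, assuming PyNaCl for Ed25519 signatures. Every name and field below is illustrative, not any real agency's API; the point is just that the provider only produces anything after verifying both signatures, and the audit entry is created in the same code path.

    import json, time
    from nacl.signing import SigningKey

    # Each party holds its own long-term signing key.
    requestor_key = SigningKey.generate()   # e.g. law enforcement
    court_key = SigningKey.generate()       # e.g. the approval body
    provider_log = []                       # the provider's audit record

    # 1. The requestor signs a timestamped request and keeps a copy.
    request = json.dumps({"target": "example-account", "scope": "metadata",
                          "ts": time.time()}).encode()
    signed_request = requestor_key.sign(request)

    # 2. The court verifies the request, then countersigns it.
    requestor_key.verify_key.verify(signed_request)   # raises if forged
    countersigned = court_key.sign(signed_request.signature + signed_request.message)

    # 3. The provider verifies both signatures before handing anything over,
    #    and appends an audit record it can later publish (redacted if needed).
    court_key.verify_key.verify(countersigned)
    requestor_key.verify_key.verify(signed_request)
    provider_log.append({"request": request.decode(), "fulfilled_at": time.time()})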



Yes! Exactly like what you've apparently thought about and worked on for a long time. Neat!

>> To decrypt it, multiple parties need to come together and combine their keys, all the while creating an audit log of why they are accessing this or that portion.

To me, this is the technical solution that best mirrors the ideals of the pre-technical reality.

And I consider myself an encryption absolutist! But I think the powers arrayed against it are too strong (and in some areas, too morally correct) to fully resist.

Which devolves to creating a compromise, and hopefully one better than "Government has no keys, any of the time" or "Government has all keys, all the time."
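As a toy illustration of the "combine their keys" part (not necessarily the parent's actual scheme): a simple n-of-n XOR split, where no single share reveals anything and all shares are needed to recover the key. A real deployment would use a proper threshold scheme such as Shamir's secret sharing, plus the audit log described above.

    import os

    def xor_all(parts):
        out = bytes(len(parts[0]))
        for p in parts:
            out = bytes(a ^ b for a, b in zip(out, p))
        return out

    def split(secret, n):
        # n-1 random shares, plus one share that makes the XOR come out to `secret`
        shares = [os.urandom(len(secret)) for _ in range(n - 1)]
        shares.append(bytes(a ^ b for a, b in zip(secret, xor_all(shares))))
        return shares

    key = os.urandom(32)           # stand-in for the decryption key
    shares = split(key, 3)         # e.g. held by provider, court, and oversight body
    assert xor_all(shares) == key  # only all three together recover it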


So instead of stealing a single key, the FSB has to steal three?


The client-side devices / cameras / whatever would send the encrypted copies off-prem, to be decrypted in the case of proper due process and authorization. But it would require interactively querying a distributed database that is managed by agencies or networks representing civilian interests, and these agencies would rate-limit the querying and disclose every query, who did it and why.

We need more transparency in our governments and security agencies (including FSB, CIA). Start with transparency on why they need certain data. More here:

https://community.qbix.com/t/transparency-in-government/234/...


Yes. In addition to two of those keys being attributable to the federal government.

Which, at least in the US DoD's case, already manages the world's largest PKI system.

The key difference with the UK scheme would be (1) the tech company would retain the final decryption key & (2) any use of that decryption key would be required (technically and legally) to generate a public audit record (albeit optionally obfuscated if the court order so requires it).


And what happens when the NSA or the FSB or some other equivalent just breaks into where the keys are stored, or beats it out of an employee, and bypasses the entire logging mechanism?

Your security guard having a clipboard where everyone signs in at the gate doesn't matter if someone dug a hole under the fence.


You mean when the {other nation's foreign intelligence agency} penetrates {nation's intelligence agency} and {nation's court system}?

And still creates a logging trail because the log system is intrinsically linked to fulfilling a request?


"Intrinsically linked" doesn't exist. Encryption is math, math you can do on a piece of paper (in theory). Anything you set up to log the fact that people did that math is always going to be meaningless if people take the numbers and do the math away from your logging system.

Now, you can say "but you can't ever access the numbers, just order the computer to do the operation". And also "To order the operation, you need 2FA and a signature from a judge and the president". And, of course, "The numbers needed for decrypting are split between three different servers, each with its own security system, and they can't be forced to talk to each other without the president's signature being added to a public log". That's all well and good, but consider this: I install a listener on the RAM of each of the three servers. I wait until it does a totally legit, totally approved thing that gets logged. I now have the numbers copied somewhere. I do the decrypting for everything else away from the servers.

Sounds like a difficult operation? You're talking about three numbers worth a trillion dollars if they ever get out. Spy missions have been done that were harder to pull off for less benefit.

You just thought of [technical solution] to prevent listening through the RAM? Great, you just solved one _very obvious_ part of the attack surface. Now to address the ten thousand other parts identified by your threat model, and I really hope you did a perfect job designing that threat model, because one blind spot = all of the keys are out forever. Also, no pressure, but your team of 10 or 100 or even 1000 people working on that threat model is immediately going to be pitted against teams of the same size from every government ever, so I hope your team has the best and most amazing engineers we'll ever see in the world. And that's not even considering the human aspect, because, well, one mole during the deployment, one developer paid enough by an adversary to make an "accidental" typo that leaves a security hole, one piece of open-source software getting supply-chain attacked during deployment, and your threat model is moot.


So many arguments against this boil down to 'Anything less than perfection isn't perfect.'

That's true.

But it's also missing the benefits of a less-than-perfect but better-than-worst-case system.

By your argument, TLS shouldn't exist.

And yet, it does, is widely deployed, and has generally improved the wire-security of the internet as a whole. Even while having organizational and threat surface flaws.

I agree with you that no government entity should have decryption keys in their possession.

However, I disagree that there should be no way for them to force decryption.

There's technical space between those two statements that preserves user privacy while also allowing the legal systems of our society to function in a post-widespread personal encryption age.


That's completely missing the point. This is not about perfection, this is about the threat level.

Decryption is always going to be technically possible. A government can always get possession of a phone, invest a lot of time and skill to get the key out of it, and then use that. This is what happened in that one famous Apple case, and this is what is always going to happen when people use E2E encryption. The point I made in my other posts was that once you get the key, you have the key, and that doesn't change just because the key is on the phone. That's your threat model when you use E2E encryption.

TLS works the same way. The encryption keys are ephemeral, but they're temporarily stored on your computer and on the server you're communicating with. If you want to attack a TLS connection (and you can!) you need to obtain the key from either the server or the client, and that's your threat model when you use TLS.

This is a completely fine and acceptable threat model as long as the keys are stored in a disparate sea of targets, either on hundreds of millions of possible client/server machines for TLS, or on each person's phone (each one with a different model, from a different maker, and using different apps) for E2E. The thing is, in such a distributed model, nobody can realistically get every key out of every phone at once. This makes every single attack targeted at a couple of high-profile targets, and therefore the impact of successful attacks is way, wayyyy lower.

The issue arises when you decide to forbid end-to-end encryption, and instead mandate a global way to decrypt everything without needing access to the phone itself. This changes the threat model in a way that makes it unsustainable.

Again, and I know I keep repeating that vault analogy, but it's a great way to explain attack surfaces and threat models: It's fine if everyone has a vault at home with their life savings in gold inside, because nobody can realistically rob every vault from everyone at once. It's still fine if every city has a vault where people store their gold, because while a few robberies might happen, it's possible to have high enough security to make robbing that vault not worth it. It starts being a bad idea to ask everyone to put their gold into a single, unique central vault that "only the government" has access to, because the money you need to spend to protect that vault is going to be prohibitive (and no way the government isn't going to skimp on that at some point). And finally, it's an awful idea to do that with magical gold that you can steal by touching it with a finger and teleporting out with it, because all of that gold is going to disappear so fast you'd better not blink, and losing that combined pile of gold is going to impact every citizen ever.

It's a matter of threat modeling: the moment there's a way to access absolutely everything from a single entry point with possibly avoidable consequences for the attacker, then that entry point becomes so enticing that you can't protect it. You just can't. No amount of effort, money, and technical know-how is going to protect that target.


> TLS works the same way.

TLS does not use ephemeral keys, from a practical live-connection perspective, because the root of trust is established via chaining up to a trusted root key.

Ergo, there are a set of root keys that, if compromised, topple the entire house of cards by enabling masquerading as the endpoint and proxying requests to it.

And that's exactly the problem you're griping about with regards to a tap system. One key to rule them all.


Hacking the root certificates of TLS doesn't allow you to read every TLS-encrypted conversation ever, thankfully. It just allows you to set up a MITM attack that looks legit. And sure, that is bad, but it's not "immediately makes everything readable" bad.

That's why I call TLS keys "ephemeral" under this threat model.

The goal of anti-E2E legislation isn't to be able to MITM a conversation - again, government agencies can already set that up with the current protocols fairly easily. The goal of the legislation is to make it so that, "with the correct keys that only the good guys have", you can decrypt any past message you want that was already sent using the messaging system, without needing access to either device.

If the governments only settled with an "active tap system" that works like a MITM for e2e encrypted channels, we wouldn't be having this discussion or we wouldn't be talking about new regulations. Because again, that is already possible, and governments are already doing it.


That's why I put the live caveat. Granted, decryption of previously recorded conversations and decryption of new conversations are two different threat models.

Out of curiosity, can MITM of new connections be set up fairly easily with current protocols? (let's say TLS / web cert PKI and Telegram)

For the TLS case, they'd need to forge a cert for the other end and serve it to a targeted user. Anything broader would risk being picked up by cert transparency logs. That limits the attack to targeted, small-scale operations and requires control of key internet routing infrastructure. Not ideal, but at least we're limiting mass continuous surveillance.

For Telegram, the initiation is via DH [0] and rekeyed every 100 messages or calendar week, whichever comes first, with interactive key visualization on the initial key exchange [1]. That seems a lot harder to break.

[0] https://core.telegram.org/api/end-to-end

[1] https://core.telegram.org/api/end-to-end/pfs#key-visualizati...


And not just TLS and certificate authorities but also DNSSEC. Still, it is pretty worrying to have one CA like letsencrypt be behind so many sites, or seven people behind DNSSEC:

https://www.icann.org/en/blogs/details/the-problem-with-the-...

But here is how they protect it:

https://www.iana.org/dnssec/ceremonies

On the other hand, data is routinely stored in centralized databases and they are constantly hacked:

https://qbix.com/blog/2023/06/12/no-way-to-prevent-this-says...


The issue is that whatever "audit" or "protection" method you create, whatever technology you use to ensure only the "good guys" get the information and the "bad guys" can't, it's only layers added on top of the real issue:

The final key is always going to be a single number. Once the key is out, it's out. There's nothing you can do about it being out, and no way to know it's out unless your audit system somehow caught it beforehand.

And that key (or these keys, which doesn't change much between "one number" and "two billion numbers" in terms of difficulty of stealing or storing them) is going to be worth trillions of dollars.

Again, the bank vault thing is an apt analogy (up to a point): You can add all of the security "around" the vault, guard rounds, advanced infrared sensors, reinforced concrete with weaved kevlar in it, etc... But if someone ever gets the dollar bills in their hands, then they got the bills. And if they somehow manage to bypass the security systems and not get noticed as they go in for the steal, you have no way to know who they are or that they did it.

Now, that is completely fine for a standard bank vault: after all, you need to physically send someone in, it's pretty rare for people to actually want in the vault so security can be pretty slow and involved, it doesn't have that much "money" inside (I'm pretty sure no bank vault in the world contains more than a handful of millions at any given time), and above all it's "physical" stuff inside: you'd immediately see if it's gone, it's not like someone who got in the vault can "magically" copy the bank notes and leave with the money while leaving the vault seemingly intact.

It's less fine for a "server" vault, where not only do you store everything so it's worth trillions, but people need to access it all the time because "investigations" and "warrants", and in a fast way because "terrorism", and if there's a breach or a mole or anything like that then people can copy all of the data inside and leave the server seemingly intact.

I think believing that there's a technical solution is misunderstanding the problem, and anyone pretending they "solved" it is always going to minimize one risk or the other. The governments and regulators don't get that yet, because it looks like it's just a technological issue of building "the vault". But the real issue, the fact that "the vault" doesn't matter when the consequences of stealing its contents are risk-free for bad guys but so immensely impactful for citizens, is why technical solutions won't ever be enough.


I understand the analogies.

What I don't understand is, in the absence of some sort of scheme, how a justice system functions.

How would you compel production of evidence when duly authorized?


> And to do this, they would need to provide an explicit reason and only get the limited info that the warrants etc. would support and the reasons would be stated in every case.

The scenario I'm talking about isn't overly-broad warrants, etc. Technology can't prevent that. I'm talking about just the tech implementation.

Fine: we have a private keypair for every message, and every message is additionally encrypted with the public key of a government per-message keypair.

How are these per-message keypairs generated? If from a central server, then that becomes a massive weakpoint in the system for multiple reasons: it could be attacked to prevent new keypairs from being generated, it could be hacked to extract private keys, it could be modified to generate keypairs that an adversary can easily break, it could be modified to also send private keys to adversaries, etc., etc.

If they're generated on-client, and the secret key is sent to some central repository, then the client or the device the client is running on could be compromised; the private keys could be intercepted en route; and the central repository could _still_ be compromised, since it can't be airgapped if it has to receive these keys.

In the case of a warrant, how is each key actually fetched? I don't mean the legal process, I mean at some point someone has to push a button and decrypt a message. How do we protect that process? Besides the fact that even air-gapped systems can be vulnerable to a sufficiently motivated and well-funded adversary, at some point some human being has to have access to this system, and that human being probably has family members. How vulnerable are they to being beaten with rubber hoses, or receiving their spouse's fingers in the mail?

If you're going to build a system that can expose everyone's private communications, it better be incredibly close to fool-proof, or it better not be built at all.
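For what it's worth, here is roughly what that dual wrapping looks like in code; a sketch assuming PyNaCl, with every keypair and name hypothetical. It also makes the risk explicit: whoever holds the escrow private key recovers the message key, and with it everything.

    import os
    from nacl.public import PrivateKey, SealedBox
    from nacl.secret import SecretBox

    recipient_sk = PrivateKey.generate()
    escrow_sk = PrivateKey.generate()        # the key a government would hold

    # Encrypt the message itself with a fresh symmetric key...
    message_key = os.urandom(SecretBox.KEY_SIZE)
    ciphertext = SecretBox(message_key).encrypt(b"hello")

    # ...then wrap that key twice: once for the recipient, once for escrow.
    wrapped_for_recipient = SealedBox(recipient_sk.public_key).encrypt(message_key)
    wrapped_for_escrow = SealedBox(escrow_sk.public_key).encrypt(message_key)

    # Either private key is enough to recover the message key, hence the plaintext.
    recovered = SealedBox(escrow_sk).decrypt(wrapped_for_escrow)
    assert SecretBox(recovered).decrypt(ciphertext) == b"hello"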


> because now every foreign government is racing to break, steal or buy those keys

it's much easier and much cheaper to simply steal the phone (maybe phones?) containing the keys. Or hack it (them?).

And then calmly search through the phone's backup.

That's what I would try first if I was in charge of such a task.


Yeah but he's not saying it's OK. Just that they could do it.


The tech companies design the system so that there exists no central key that could be used to decrypt every conversation; each conversation generates its own unique key. If some backdoor existed, it could never be limited to "law enforcement": any hacker could unlock every conversation. Politicians are incapable of learning this.
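A rough sketch of what "each conversation generates its own key" means in practice, assuming PyNaCl's X25519-based Box (real messengers layer ratcheting and more on top): the two endpoints derive the same conversation key from their own keypairs, so there is no central key for anyone to hand over.

    from nacl.public import PrivateKey, Box

    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Each side combines its own secret key with the other's public key;
    # the server only ever relays the public keys.
    alice_box = Box(alice_sk, bob_sk.public_key)
    bob_box = Box(bob_sk, alice_sk.public_key)
    assert alice_box.shared_key() == bob_box.shared_key()   # same per-conversation key

    ciphertext = alice_box.encrypt(b"meet at noon")
    assert bob_box.decrypt(ciphertext) == b"meet at noon"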


> The tech companies design the system so that there exists no central key that could be used to decrypt every conversation

And in fact nobody claimed that, at least not in this thread.

It's still not impossible to provide the keys for a conversation, it's not a technical limitation by any means.

Perhaps the good guys at Mullvad can provide that level of privacy, but certainly not WhatsApp; their interests practically never align with those of the users.


> under a very strict legal framework and behind a colossal amount of bureaucracy to authorize their use.

In other words we'll provide some comedy material for our "trusted agencies" to amuse themselves with, between writing their latest summary snooping system and sharing stolen nudes round the office


First of all, the justice system revolves around the rule of law. Homicides are forbidden; that doesn't mean it is hard to kill someone, it's simply prohibited by law, and people tend not to do it.

Wiretaps follow the same pattern: potentially it is very easy to listen to other people's conversations, but it is unlawful unless authorized, so people usually don't do it.

Imagine this scenario: a man only contacts the phone number of some woman when the phone of his wife is out of town, plus the man's phone can be located at the woman's house only at night when the wife is away.

What can that mean? Who knows...

That kind of data, which is equally revealing and privacy breaking, is completely legal. Why is that? Because tech corporations don't really care about what you say, but about your habits, to exploit them.

The justice system, OTOH, doesn't work in aggregates and patterns; it decides case by case, because every person is responsible for their actions and only theirs.

So the two use cases are vastly different and the tension towards complete and unbreakable secrecy is not 100% aligned with the interests of a society at large. Only a very tiny minority benefits from that.

Agencies snooping is illegal too, but they operate outside the law anyway. "Licence to Kill" is the title of a Bond movie precisely for that reason.


Not only is it not true, it’s very likely they’re already doing it.

For example, WhatsApp sells itself as fully encrypted, etc., but if you're in a group chat that's not true anymore. That information is available to WhatsApp, and they almost certainly make it available to several governments (hopefully in a judicially protected way, but we can't know that).

Further, if you backup your WhatsApp chats, that’s game over for any privacy.

The UK legislation is stupid because the UK has been run by a bunch of stupid people for at least the past decade.

Nothing about this legislation is dumber than Brexit, for example, which was a referendum that was proposed to the public in such a ridiculous manner that the next half decade was spent in divining what the referendum actually meant.


> For example, WhatsApp sells itself as fully encrypted, etc., but if you're in a group chat that's not true anymore. That information is available to WhatsApp, and they almost certainly make it available to several governments (hopefully in a judicially protected way, but we can't know that).

Source? It could well be that the sender e2ee it to each of the recipients, no? (Trivial to add the government or WhatsApp itself to the recipients, then, but that is a different claim.)


OP reads like something I have played devil's advocate for. In an earlier discussion about WA vulnerabilities, one of the reported bugs was that, as implemented, Facebook could have added themselves silently to any group chat, thus receiving plaintext copies of all messages sent in the group from that point onwards. I then extrapolated that if they so chose, they could change their plumbing enough to make all chats group chats, even when they were between two people.

To be absolutely clear, there was not - neither back then, nor since - evidence of this being the case. But the technical capability and potential for such subversion was there at the time. I have not followed the domain news enough to know whether this is still the case.

What is available to WA and thus to governments, is the traffic pattern part. Who communicates with whom, when, how large the messages approximately are, and so on. The stuff our industry and journalists at large have chosen to call metadata[tm].

I stubbornly call the whole thing what it is: traffic analysis. Old-school style.


I don't understand. In the case of a national security incident, the US gov/military would have popular apps cracked open ASAP.

We live under a global surveillance network, and somehow a gentlemen's agreement on keeping end-to-end encryption in place is the only thing keeping our chat apps private?

One would hope for a concrete privacy that doesn't depend on multi-national corps and nation-states to agree that it should be kept private.

Something else has to be going on here, because nobody could commit to keeping dangerous secrets on whatsapp, after assange and snowden.. right?


> I don't understand. In the case of a national security incident, the US gov/military would have popular apps cracked open ASAP.

Except that didn't happen. For example, there was a terrorism-related mass shooting in San Bernardino, California, and they got the shooter's phone.

Apple refused to decrypt it.

The Feds later bought a 0-day off of a foreign firm, thought to be Israeli or maybe Australian, and got in that way, but Apple stood their ground.

https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...


>In the case of a national security incident, the US gov/military would have popular apps cracked open ASAP

But it just doesn't work the way many people think, where the policeman says "open it" and the company opens it.

Imagine a bank vault containing something security critical. If the US government needs it but the key isn’t available, it will move heaven and earth to get the best vault breachers and experts of circumventing bank security, regardless of the cost.

You can think of it as a one-off job, at a high price tag, without the guarantee of success. But phones are easier to breach (more points of failure) than a bank vault.


Whatsapp can be assumed insecure because it's owned by Meta. Signal and the Apple apps are another story.

In the case of a national security incident, these companies don't cooperate with the government because they have no ability to do so. On occasions where the government has broken Apple encryption, they've done so by buying the software or services of companies who collect zero-day exploits, which Apple then fixes when they become known.

Governments hate basing a policing policy on the hope that a zero-day will exist when they need one. They much prefer a dependable backdoor that they can access which nobody else can, and that's the magic unicorn that cannot be built.


None of them are secure against the global surveillance apparatus. In the event of 9/11 2, the NSA will send goons to sit with the devs, make a special build that forwards data to a special data center if you're on a list, and then they'll go to Apple and Google to secretly force push the special build.


> We live under a global surveillance network

But we don't, right? Neither every word nor every move you make is recorded. Not to mention not globally shared, of all things.


Uhh.. London has a CCTV network that doesn't prevent theft.

My parking lots used to be tech-free and all now have poles with cameras and government signs on them.

Face camera tech in all retail outlets. Every second or third home is plastered in cameras, some beaming their data back to some US corp.

Phone tracks everything you do, to hundreds of app providers, who then sell it on to data aggregators.

Refuse covid vax? No access to public services.

My IT friends have admitted the whole point since the 80s was the spying.

Watergate and Cyberdyne (nixon) started the end of secret keeping.

It is no different to Oceangate and Teledyne (sub imploding) starting the end of physical freedom.

Snowden, Assange, basic facts about GCHQ and google offshoring domestic data in the UK so they can classify it as 'foreign spying' and legally dissect it all, when they re-import it.

Deanonymizing online activity is statistically trivial.

All the 9/11 stuff destroying liberty and adding excessive security at airport gates and body scanners that were never needed before, etc.

Euthanasia has been legalized.

Some of us, do in fact, pay attention.

Writing's on the (digital) wall. Every man will be a 'slave' to tech run by harvard-types, by the end of the 2030s.

While we're "working from home", the gov and tech companies destroyed the physical infrastructure that allowed freedom and replaced it with tyranny.

People are fleeing cities to live in the countryside. I. Wonder. Why.

Here I am remembering and telling the truth. As if that mattered.

I'm watching this in real time without the means to change my living circumstances, and it's a big old brain melter.

I'm supposedly responsible for my actions as a person, yet the gov and tech companies really are indicating they don't think I should be allowed to exercise free will.

....


Every word sent across the Internet is; every move accompanied by a mobile is; every move in an automobile with a tag is; every purchase made with a credit or debit card is.


that's a pretty low bar


While it's probably not your point, I find it greatly heartening that these companies are sending such a strong message by leaving a big and valuable market. It doesn't really matter what their reason is... it could be:

a) They value privacy and don't want to comply.

b) They can't access the messages, and thus can't comply.

Either reason is fine. Am I missing something?


c) They don't want to deal with the clusterfuck of handling communication between parties in the UK and parties in the EU. UK law would require them to remove the privacy tech in place, while EU law would require them to protect that privacy. And they'll have parties changing jurisdiction frequently, possibly mid-call.


d) It's more worthwhile to give up a small valuable market to appear like they're doing the right thing for a larger valuable market in order to keep those users.


I guess the distinction comes from the type of underlying 'no'.

These cases in the UK are a decided response to no-type-3: "we actually can’t do that".

Whereas Meta disallowing news links in Canada (another enough-is-enough response) is because of a no-type-2: "that’s a really bad idea".

Useful to have Evans' typology to distinguish those two cases.

(As a counterfactual in the UK, the decided response _could_ have been, "we actually can't do that technically, but ok we'll change the architecture, destroy our brand promise and increase the attack surface, and put the back doors in anyway.")


> but ok we'll change the architecture

That's why no-type-2 and no-type-3 don't have a strict line separating them. This is an example of both: they're probably refusing because a lot of other countries would deem their service illegal if they complied, and not really for technical reasons.



There is kind of a "boy who cried wolf" thing going on, where the tech industry has pushed back so hard on pretty much any sort of regulation and oversight that regulators automatically assume "they're saying no because they just don't like it".


A big part of the problem here is that most of the proposals to regulate tech companies actually are terribly crafted attempts to regulate something the drafters poorly understand, or naked power grabs by other industries (legacy media conglomerates being a common perpetrator).

A good heuristic is to look at which part of the industry is opposing it. If it's opposed by small businesses or individual developers or civil liberties organizations, that's a bad rule even (or especially) if huge tech conglomerates like it. But if it's the reverse -- like antitrust enforcement -- now you might be onto something necessary.


The EU has been passing great laws which at worst have meant some annoying banners so no, it’s not just bad laws.


You haven't seen all their effects yet, and you've only barely escaped some of the especially bad ideas like intrusive CSAM scanning.

I don't think people will like eg Cybersecurity Act discouraging open source usage, or AI regulations mandating whatever we happened to call AI in 2023.


GDPR has honestly been fantastic so far; the problem with the old cookie law is that it's de facto unenforced, and site owners are never fined for dark patterns like making you uncheck a 2000-item list of trackers. If they properly cracked down on it, with only compliant "yes or no" banners left as the law actually states, we'd be in a much better place.


It's very middling: Google Analytics still tracks the vast majority of the web, including some government sites. Targeted advertising is still a thing. The cost of vagueness is pretty huge.

And the issue of lawfulness of "safe harbor" is still unresolved. US companies can just transfer data to the US where the US intelligence services can spy on it, regardless of what the laws of either country say.


DPAs seem to be understaffed and overwhelmed.

On the other hand, people said that nothing would happen to the likes of Google or Facebook, as they'd just work around it. Well, that prediction aged poorly: the lawsuits have been progressing, and it takes only one DPA from one country to change things. Like how it was deemed that Facebook's ad targeting isn't a legitimate purpose, so they'll be forced to ask for consent without denying service to those that refuse.

Meta knows that GDPR spells doom for their business model, which is why they abstained from releasing Threads in the EU, as a sort of warning perhaps. But it won't work, because the EU market is too big to pull out of, which is why EU legislation has teeth.

And GDPR actually works, even if it takes some time for DPAs to resolve existing cases. And the "cookie law" works too. People complaining about banners miss the forest for the trees: banners are mostly needed when doing spyware shit, and they serve as a great warning to visitors. There are no cookie banners on Mastodon.


There's DPAs and DPAs.

A key issue is that the Ireland DPA is understaffed and overwhelmed, because that's where historically most of the global internet megacorps have registered their EU part for tax reasons, and it seems plausible that Ireland's DPA was intentionally understaffed because Ireland wants to be friendly with them, no matter how they affect German/French/etc consumers.

There seem to be motions about adjustments to the GDPR process which would allow other DPAs to take action with respect to their people's data without having to wait on the "company-local" DPA for however many years it takes. If that happens, I'd expect the situation for Google, Meta and others to change relatively rapidly (though still taking a year or more).


I think that impression you mention about the banners being mostly for spyware trash is no longer the case. If next to every site you visit does the same thing, then it feels normal rather than giving a hint something nefarious is going on. The Google and YouTube websites have banners (at least when you visit for the first time/are in private browsing or incognito mode), and lots and lots of people use those everyday.

If next to every site does it, most people will think there's nothing unusual.


Alternatives like DuckDuckGo or Brave Search don't have cookie banners. Google is far more widespread, but just because Google does something, that doesn't mean it isn't wrong. I will give Google credit for their consent dialog which has a "Reject All" option, (seemingly, from a UI perspective) in full compliance with GDPR.

The websites needing cookie banners or GDPR consent dialogs are using personal data for serving ads or selling it to the highest bidder. It's not a sentiment, but a fact visible to anyone that cares to see it. And just because it's a widespread practice doesn't make it ok.


>And just because it's a widespread practice doesn't make it ok.

You're missing the point. The fact that nearly every website you visit displays the banner means that users become desensitized to it. It's just one more thing for them to click on, next to the dialog asking permission to send notifications and whatever else. It's the same reason developers are encouraged to fix warnings: so that when a new one pops up they will notice it quickly and decide whether it's a problem. If you normally get hundreds of warnings when building, you quickly learn to ignore them, hiding any problems that might exist.


Desensitised? I think not. Every time I see one of these utterly pointless cookie pop-ups it enrages me further that my time (and everyone else's time) has been so thoroughly wasted by this pointless law which accomplishes precisely nothing.

And it's different from the crying wolf of leaving unfixed warnings. The cookie pop-up often cannot be ignored as it requires some action to dismiss in order to view the actual content that the user was looking for in the first place.


>Desensitised? I think not.

Yes, because you're not thinking about the site asking for permission to track you (which was the original intent of the law), you're annoyed about your time being wasted. Which, I agree, is a total waste of time, but it doesn't counter my point that it desensitizes you to the signal the GDPR was meant to enhance.

>The cookie pop-up often cannot be ignored as it requires some action to dismiss in order to view the actual content that the user was looking for in the first place.

The user often cannot ignore the dialog in the sense that they cannot avoid interacting with it, but eventually they ignore it in the sense that they learn to automatically dismiss it without even thinking about it, like EULAs in software installers. Thus the dialogs become pointless.


Google dragged their feet for a while, with no easy way to reject all. IIRC they had to be threatened with serious fines before they complied.


What positive effects have you seen?


> pushed back hard on pretty much any sort of regulation and oversight

I think there's a selection effect going on here: If tech and politicians agree that some policy is a good idea, then tech does it voluntarily, so no regulation is needed. The only cases where it becomes a matter of regulation are the cases where tech and politicians disagree.

For example: Remember the privacy discussion around COVID-19 exposure-tracking apps? If the exposure-tracking apps had been implemented in a naive way, they would have been incredibly invasive to privacy. But tech proactively figured out good solutions to the privacy questions, so it never became an issue. If some politician _had_ proposed regulation saying that exposure-tracking apps needed to protect privacy, then tech wouldn't have pushed back, because that's what they were already doing anyway. But because tech was already doing it, politicians didn't propose the regulation.

So, because an issue never becomes a matter of regulation unless tech pushes back on it, it ends up looking like "tech pushes back on all regulation".

Furthermore, in the cases where tech and politicians disagree, the politicians haven't always been right. For example, GDPR cookie banners are a joke. California's AB5 law is another example, as the original article mentioned.

So, I don't think "boy who cried wolf" is a fair analogy. Tech companies aren't always right, but it's not as if they're automatically opposed to all new policies; and when they do oppose politicians' proposed policies, it's sometimes for good reasons.


A lot of cookie banners intended to satisfy GDPR don't: websites have to offer a clear opt-out, and anything like "go to these other sites and opt out individually" is probably not compliant. Some big vendors pushed the bounds of this and are probably getting their asses handed to them, but as with all legal things this happens in slow motion. GDPR has some good ideas, but the implementation hasn't been perfect.

CCPA https://oag.ca.gov/privacy/ccpa in my opinion is a more straightforward law. Basically: if you want to sell someone's information you have to get consent first. It still has some of the "banner on every site" problem but at least most which are clearly for CCPA are binary ok/don't sell my data questions.


You and the parent are conflating GDPR with the "ePrivacy Directive" (cookie law).

GDPR doesn't say much about cookies. What GDPR says is that you have to have a legal basis for processing user data. For example, a DPA just ruled that Facebook doesn't have a "legitimate interest" when using the user profile for ads targeting, so they'll be forced to ask for opt-in consent without the ability to deny service to those that refuse. A legitimate interest is for things the customer expects as part of the service, e.g. an address is needed for home delivery, or maybe data needed for security (IP logging).

https://thisisunpacked.substack.com/p/the-eu-war-on-behavior...

The cookie banners are needed when websites are fingerprinting users. The website may have a legitimate interest for doing analytics, but the user still needs to be informed that they are fingerprinted. NOTE: here, too, you don't need cookie banners if it's functionality that the user expects, like a session or a shopping cart cookie.

At the risk of repeating myself: websites need cookie banners or GDPR consent dialogs mostly when doing shit that violates people's privacy.

I've read a lot of complaints against GDPR on HN and elsewhere, and I feel that they miss the forest for the trees: those banners and dialogs expose just how widespread the practice of violating people's privacy is. And I fear that a lot of the backlash coming from Silicon Valley has been from people connected to the ads industry.


> NOTE: here, too, you don't need cookie banners if it's functionality that the user expects, like a session or a shopping cart cookie.

As I point out every time this comes up, under ePrivacy the burden the site has to meet is not "expects" but "strictly necessary to satisfy a user request". And the way most sites implement shopping carts, where items will still be in your cart if you close your browser and come back the next day, isn't ok:

"a merchant could set the cookie either to persist past the end of the browser session or for a couple of hours in the future to take into account the fact that the user may accidentally close his browser and could have a reasonable expectation to recover the contents of his shopping basket when he returns to the merchant's website in the following minutes." (https://ec.europa.eu/justice/article-29/documentation/opinio..., section 2.3)

Maintaining a shopping cart across days isn't "strictly necessary" and so requires explicit consent.

(More: https://www.jefftk.com/p/why-so-many-cookie-banners)
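Concretely, the whole distinction comes down to whether the cart cookie gets an expiry. A Flask-flavoured sketch, purely illustrative:

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/cart/session")
    def session_cart():
        resp = make_response("cart kept for this browsing session only")
        resp.set_cookie("cart", "item-42")               # no expiry: session cookie
        return resp

    @app.route("/cart/persistent")
    def persistent_cart():
        resp = make_response("cart kept for a week")
        resp.set_cookie("cart", "item-42", max_age=7 * 86400)  # the contested case
        return resp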


IANAL, but as a layman, I don't think I agree with your article. But thanks for the links.

I don't think the difference between hours and days is relevant, unless the law says that there is a difference. Setting user preferences, such as the language, is a matter of accessibility; I'd be hard-pressed to think of a better example of "strictly necessary".

Setting user preferences, such as a “dark mode” toggle, wouldn't be any different from setting the browser or operating system's dark mode preference, a bit that web pages can always read.

It's important to remember that ePrivacy isn't strictly about cookies, but about all client-side data that gets sent to the server. For example, in case you're doing analytics, fingerprinting via any other means except for cookies (e.g., user agent, HTTP referrer, IP, etc.) still counts under ePrivacy. As such, you can fingerprint users via “window.matchMedia('(prefers-color-scheme: dark)')”, and if you do that, then yes, you need a cookie banner. But not for doing what the user agent asked for.

For analytics, indeed, you need a cookie banner. And while I understand that service providers want analytics to improve their service, it's not something the user expects. My personal problem is that the entire web ended up using Google Analytics, and such data ends up being shared with third parties, which is why it's good that it is opt-in.

I do agree that businesses should consult lawyers (^^)b


Remembering user preferences are very different; I didn't bring them up and don't disagree there.

My claim is that many things users might expect to be retained are not automatically ok to retain, and that shopping carts as typically implemented are one of these.


Yeah. Cookies qua cookies are at best something obvious and measurable, but don't necessarily correlate with the company doing anything nefarious with your information. What's worse, it has always been the client's choice to keep cookies. You can, e.g., browse in a private tab on Chrome and they'll get automatically deleted.

The fact that cookies are A) only loosely correlated with undesired behavior and B) already optional makes it absurd that every site I visit should waste 10 seconds of my time with a banner asking me to consent to their use of cookies. Especially because the only way for them to remember that preference is... to give you a cookie!

Fortunately Brave has an option to automatically skip those banners, which works on most sites.


> I think there's a selection effect going on here [...] So, I don't think "boy who cried wolf" is a fair analogy. Tech companies aren't always right, but it's not as if they're automatically opposed to all new policies; and when they do oppose politicians' proposed policies, it's sometimes for good reasons.

I think you are missing another, equally massive, source of information politicians are exposed to: lobbying. We see Microsoft complaining about mergers, but politicians are exposed to non-stop lobbying.


> GDPR cookie banners are a joke

Many sites actually don't require cookie notifications, because strictly necessary cookies^ are GDPR compliant, yet they still display the notification. In theory, if cookie notifications were omitted wherever possible, then sites that track more than needed would stick out immediately.

^ Strictly necessary cookies are essential for websites to provide basic functions or to access particular features of it. Such features include the ability to sign in, add items to your cart in an online store, or purchase stuff on the internet.


> If tech and politicians agree that some policy is a good idea, then tech does it voluntarily, so no regulation is needed. The only cases where it becomes a matter of regulation are the cases where tech and politicians disagree.

And these cases are the utter majority.

> For example, GDPR cookie banners are a joke.

They are, but only for those services that want to squeeze their customers like they're data lemons. What you need to do to provide the customer with the service they desire is covered automatically; you need a GDPR consent banner only if you want to include a truckload of external services to track your users across the Internet.


I set up a nice little webpage that did not use any tracking at all. There was one cookie for home-brew login functionality, and even that was only linked to an extremely limited amount of data (namely the login name and the password; it didn't even mandate an email...).

By all means, this one cookie - whose use was completely voluntary - was a 'technical cookie' which I do not need consent for under the EU's cookie laws.

It took about 30 minutes after the go-live before the first madmen came around and started shouting at me that I needed a cookie banner.

The problem with that law is not only that it caused everyone to set up a cookie banner to operate as they used to, it also is that it created a class of people who are self-declared data protection vigilantes.

I do consider publishing on Gopher (or Gemini) in the future, if only to prevent zealots - who often have no technical knowledge - from accessing my services. The people I address are capable - and willing - to use other protocols.


> And these cases are the utter majority.

I think it's the opposite -- tech and politicians agree on the vast majority of things, but they're considered banal so the topic never comes up. But, you could imagine an alternate universe where tech did things differently, and then politicians wanted to regulate them. Here are some examples:

* Tech companies offer most of their services for free. You could imagine a world where Google charged for searches and Facebook charged for posting, and so "poor people being excluded from the Internet" became a political issue.

* Tech companies translate their services into a variety of languages. You could imagine a world where Google and Facebook were only available in English, and so "non-English-speakers being excluded from the Internet" became a political issue.

* Tech companies don't allow anyone to view DMs, private posts, etc. except law enforcement. You could imagine a world where Google and Facebook had a culture where it was normal for employees to snoop on other peoples' DMs, and it became a political issue.

* Conversely, you could imagine a world where tech companies refused to allow law enforcement access to peoples' DMs even with a valid warrant, and it became a political issue. (This is starting to happen.)

* Tech companies allow anyone to post by default. You could imagine a world where tech companies only allowed people to post if tech companies liked their political views (similar to how newspapers' biases affect which editorials they publish) and it became a political issue. (This is starting to happen: the left is pressuring tech companies to restrict certain right-wing content, and the right is talking about regulation to force tech not to do that.)

* Tech companies sometimes kick people off the platform for arbitrary procedural reasons, but not for personal pettiness reasons (with the notable exception of Elon Musk kicking people off Twitter). You could imagine a world where it was normal for e.g. Google to delete a journalist's GMail account if the journalist published something Google didn't like, and it becoming a political issue.

* Tech companies often contribute to open-source standards and software. For example, Google is heavily involved in defining web standards; and they made Chromium open-source, allowing rivals like Microsoft to build on it. You could imagine a world where the tech ecosystem was much more fragmented and closed-source than it is today, and it becoming a political issue.

This is what I was saying about a selection effect: You can easily think of ways that politicians want to regulate tech more, because those topics are controversial and make the news. But there are actually a ton of ways that tech _could_ be much worse than it is, but those topics never come up, so it takes some imagination to think of them.


And for what proposed regulation of the tech industry was the pushback unwarranted?


The Apple/Google marketplace rules are common-sense, and address legitimate concerns. I moderate /r/androiddev, and we get at least 10 posts per week that are basically "I make a living from Android apps and Google has banned my developer account and denied my appeals". There needs to be some kind of public check on this behavior when livelihoods are at stake, and I don't trust either company for a second when they say this will ruin their app stores.


Yeah, that's a great point. If bundling IE with Windows was an antitrust issue back in the 90s, I don't understand how totally locking down your ecosystem on mobile can possibly be allowed.

That doesn't require new regulations, though. Just need to enforce the existing law.


You have the freedom to choose between 4, oops, 2, totally locked-down ecosystems.


The trouble is 9 out of 10 of those people will have been pushing the boundary in some way and probably deserve their treatment. Open the app stores up and they will become a cesspit.


Quite a lot of complaints about GDPR were pure hysterics.


Only because the regulators have taken an extremely soft approach to enforcement.


Right to Repair, USB-C chargers, DSA, ...


The first type of 'no' is summarized as just being annoying to implement and not making a big difference, but I think it can be a lot deeper and more consequential than that.

There can be regulation that damages a company's profits but also provides benefits to public health or other positive outcomes. Deeply profitable companies will fight tooth and nail against these regulations even if they are fully aware of the damage they are causing. They will come up with as many convincing-sounding reasons to say "no" as possible in the name of the immense profits they enjoy, and use techniques like expensive lobbying, sponsoring pseudoscientific studies, running ads, playing up fears about economic damage or other negative outcomes of the policy, etc. They try to make it sound like the second or third kinds of "no" in the article and paint it as a bad idea, or impossible to do, or anything else they can to prevent the regulation. And if a certain individual at that company doesn't want to fight for their unethical profit, they'll be swiftly replaced with someone who will.

The obvious (non tech) example is something like the tobacco industry, which spent millions on manipulating public and policy opinion using misleading scientific sounding language or studies to prevent or delay regulation despite being fully aware of the many health detriments of smoking. Public health has been significantly improved as a result of smoking reduction, restrictions on where you can smoke in public spaces, age restrictions, whatever.

I think there is a lot of this currently in companies profiting off social media, oil and gas, and selling user data.


> Deeply profitable companies will fight tooth and nail against these regulations even if they are fully aware of the damage they are causing.

Unfortunately this is also very similar to one of the most insidious forms of regulation -- the mildly inefficient requirement. You have something which is absolutely not going to bankrupt the company, but it costs three times more than it's worth.

It may even provide some benefit to someone -- someone who is happy to lobby in favor of it if it means they get a third of the money that it's costing customers to require it.

But then inefficiency increases, costs go up, barriers to entry go up, and the market becomes more concentrated; the incumbents only make a weak showing of opposition because it's not going to kill them, and they actually like that it might kill some of their smaller competitors.

So rules like that accumulate, even though they're each a net negative to the world, until people can't make ends meet because everything costs so much more than people get paid. And nobody can point to one single rule as the problem because it's really ten thousand of these little inefficiencies adding up.


Data sharing is a great example. Small Biz A can't share data with Small Biz B, but Biz G can share data between Product A and Product B and their thousands of employees.


Large customers tend to limit data sharing - also in large businesses. Retail customers usually don't have that kind of power.


OTOH, Big G is also a single target should they start abusing that data - as opposed to Small Biz A and Small Biz B, which will just disappear and reopen as Small Biz C and D. Being larger has advantages, but it's not all advantages.


> Deeply profitable companies will fight tooth and nail against these regulations even if they are fully aware of the damage they are causing.

There was an article recently about health labels on food in Mexico. The food manufacturers were not allowed to print cute mascots on the packaging for certain foods aimed at children, and had to place a warning label on certain foods. In the first case, the manufacturers switched to transparent packaging, and printed the mascot on the food itself (so that it was clearly visible). In the second case, the manufacturers basically made the front and the back of the packaging the same, but put the mandatory warning only on one side (so store employees would put it on the shelves with the warning on the back). I wish they would pursue ways to make food healthier with the same energy.

ETA: https://www.schneier.com/blog/archives/2023/08/hacking-food-...

https://news.ycombinator.com/item?id=37245593


I think there is a good ole 2x2 grid here.

The socially positive nature of the industry on one axis and the degree of strong competition on the other.

So we can have, say, retail food stores (Sainsbury's and Tesco) that deliver mostly positive things (food!) in a highly competitive manner.

We can also see positive industries (water / sewage) that have terrible competitive landscapes (fundamentally monopolies/ utilities). These need to be regulated differently - ie with hands firmly clasped round the throat of all participants.

Bad industry and bad competition looks like the illegal drugs trade. (I personally think the cut-throat nature of retail stores is as literally cut-throat as we want; when the competition stops focusing on making the product better and starts focusing on killing the other store's employees, we are not seeing improved markets.)

And your example was bad industry / good competition- cigarettes are a good example here.

I think it's worth adding a third dimension to the grid - time and future shape. The retail food model is a good one, but over time we can see the effect on out-of-town car parks, urbanisation vs walkability, etc. Intervening in how stores advertise the price of milk won't help this. But neither will "nerd harder" - there is no solution to "this business model, if continued, will go the wrong way" that does not involve changing the business model - i.e. charging for car parking space or something.

Anyway, it struck me as a useful simple graph. As business models move to different parts of grid they get regulated differently, and adding time/dependencies in means we can shape the results.

But in the end I am arguing for smart proactive interventionist government.

Let governments be governments


> also provides positives to public health or other beneficial outcomes

But the people asserting these positives are also lobbying, making convincing sounding arguments, running ads, playing up fears, sponsoring pseudo-scientific studies and all the other ills you criticize. And they'll do that even if they're fully aware of the damage they're causing, or will cause with their proposals.


100% - people don't realize what kind of mastermind bullshit companies come up with to keep the profits rolling in, never mind the damage caused.


> There followed a desperate scramble to exempt over 100 professions, from doctors to truck drivers to hairdressers, before the whole thing had to be abandoned. A lot of people told the politicians about the problem, but the politicians just said “everyone always says every law will be a disaster” and ignored them. Oops.

AB 5 passed, along with its long list of exceptions, and is law in California right now. It wasn't "abandoned" in any sense. Getting facts right is important to a persuasive argument.


"The three companies, now also joined by Instacart and Postmates, funded a ballot initiative, Proposition 22, to exempt both ridesharing and delivery companies from the AB 5 requirements, while also giving drivers some new protections, including minimum wage and per-mile expense reimbursement. Proposition 22 passed in November 2020 with 59% of the vote.[8][9]"

From: https://en.wikipedia.org/wiki/California_Assembly_Bill_5_(20...

So the law has an exemption for the exact workers it was targeting.


So what? It's not illustrative of the point the post is making.

The post says, paraphrasing: "Sometimes technical people say 'No' and politicians are forced to accept the reality of that refusal"

That didn't happen here. The policy was not proven impossible or implausible, the politicians involved weren't forced to reckon with unforeseen realities; the gig economy companies simply used a different political tactic to get their desired outcome. The effort was never abandoned, even if the targets of the effort ultimately found a way to circumvent the policy makers via a ballot initiative.


Disclosure, I work for GM.

Type 1: The tradeoffs are not in my favor.

Type 2: You have not understood the tradeoffs.

Type 3: No one can evaluate the proposal.

> "When policy-makers ask for secure encryption with a back door, we do not always see that this would like be telling Ford and GM to stop their cars from crashing, and to make them run on gasoline that doesn’t burn. Well yes, that would be nice, but how? They say ‘no’? Easy - just threaten them with a fine of 25% of global revenue and they’ll build it!"

> "This would like be telling Ford and GM to stop their cars from crashing"

Easy - car doesn't go until all seatbelts are on. All seats face backwards. Helmet and HANS device for all occupants. Maximum speed is 45 mph. Cars are wrapped in giant foam pads. Cars are limited to roads mapped by the automaker. (Type 2b [I have interpreted the intention of your Type 3 proposal] - Customers would revolt)

> "Make them run on gasoline that doesn’t burn"

Easy - catalyze gasoline to hydrogen and use a fuel cell. Well not easy, but possible. (Type 3 - I can propose this, but no one can evaluate it without doing a LOT of work)

====

Personal opinion: People feel like experts have lied to them, because experts have lied to them. We can't trust experts. How should people think? Specifically when it is very expensive to test something? 'Expensive' includes all kinds of risk, not just money spent. 'Test' includes "what will this do to me?"

So some people think that there is a 100 mpg water carburetor that Shell bought the patents to, and maybe the inventor had an 'accident'. In reality (Type 2), they have not understood the tradeoffs. 100 mpg is easy - on a speed-limited motorcycle on a chosen route. Water carburation is not too hard for a motivated and handy person to use. It IS too hard to put on a general consumer's vehicle.

Jet airplanes used to inject water into the engine to get more performance on takeoff. Someone realized that it's cheaper, easier, and saves weight to just inject more fuel. The fuel doesn't fully burn, but it adds to the thrust just by being mass that goes out the end of the engine.


>“Work it out” is generally a demand to invent new mathematics, but sadly, mathematics doesn’t work like that

The article invalidates a somewhat reasonable point by saying this sort of thing. A cryptographic end-user application is not "mathematics". It's a piece of software, running on a piece of hardware, and there is no platonic, infallible security going on. This is the sort of 'no' people utter if they have ideological objections disguised as technological ones.

In reality any system, including a cryptographic one, exists on a curve. Differentiated systems for access exist. The honest criticism would be that a system with a backdoor is less secure, but it's certainly possible to enable privileged access to third parties while excluding others; it's just riskier. But risk in reality also exists with any encrypted application, because it runs with keys stored on a phone, not in some untouchable maths dimension.

How much you move between access and security is absolutely a question of policy and architecture, not some theoretically impossible thing.


This was a great essay.

I really liked his summary:

> A Californian optimist would say that we’ll age out of this. The policy class that got their staff to print their emails will age out and be replaced by the generation that grew up sending emojis, and understands that tech policy is just as nuanced, complex and full of trade-offs as healthcare, transport or housing policy. A European would ask how well California handles healthcare, transport or housing.


> Your MPs’ WhatsApp group can be secure, or it can be readable by law enforcement and the Chinese, but you cannot have encryption that can be broken only by our spies and not their spies. Pick one.

It doesn't seem technically infeasible for WhatsApp to move to a protocol where, say, every message is transmitted twice, once encrypted with the recipient's public key and once with the NSA's public key. Or for the state to ban all messaging systems that don't follow that protocol.
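
For what it's worth, here's a rough sketch of what that "encrypt once for the recipient, once for an escrow key" idea could look like - purely illustrative, not WhatsApp's actual protocol, using Python's 'cryptography' package and a hybrid scheme where only the per-message key gets wrapped twice:

    # Hypothetical escrow sketch, NOT a real or recommended design.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    escrow = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # the "NSA" key

    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

    def send(plaintext, recipient_pub, escrow_pub):
        data_key = AESGCM.generate_key(bit_length=256)   # fresh symmetric key per message
        nonce = os.urandom(12)
        return {
            "nonce": nonce,
            "ciphertext": AESGCM(data_key).encrypt(nonce, plaintext, None),
            # The same message key wrapped twice: either private key can read it.
            "for_recipient": recipient_pub.encrypt(data_key, OAEP),
            "for_escrow": escrow_pub.encrypt(data_key, OAEP),
        }

    msg = send(b"meet at noon", recipient.public_key(), escrow.public_key())

The catch, as other replies point out, is that the escrow private key becomes one enormously valuable target, and every client has to be trusted to actually do the second wrap.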

Arguments against encryption are generally philosophical rather than purely technical.


I am not a cryptographer, but the standard objection to this is that the NSA key will leak, either generally or by being stolen by a Russian/Chinese agent. And in implementation, how many keys are we talking about? USA, UK, France, Germany, Australia... Every country's law enforcement will demand a key, and how long will that remain secure?


The best argument is simply that you cannot ban E2E encryption because there are thousands of people all over the world who are able to implement it. Banning E2E just means that everybody who cares about privacy (including the “bad guys” and privacy-conscious users) will switch to a banned implementation, and everybody else will have their privacy put at risk for no reason at all.


Devil's advocate: the answer to that is that perfect is the enemy of good. Most "bad guys" are pretty dumb and won't bother using actually secure communication channels especially if messengers keep advertising that they do end-to-end encryption. And even for those who do care enough, most of them aren't all that tech-savvy and will make mistakes.

All that to say that a ban doesn't have to be 100% effective to make a meaningful difference.


> especially if messengers keep advertising that they do end-to-end encryption

That's probably a crime in the UK. It is a crime in plenty of countries.

Anyway, the most impact an anti-E2E law can have is to force people into getting some functional thing from F-Droid instead of naively getting it from the Play Store. The bar of intelligence required for that is still pretty low.


That's not a very strong objection.

Firstly, you can just rotate the key if that happens. It's one software update away.

Secondly, protecting keys isn't that hard; that's what HSMs are for. As far as I know, no secret keys have ever leaked from the NSA, not even when insiders turned against them and leaked as much as they could - and that isn't even a noteworthy achievement.


Indeed, if the Chinese demand a key under that scheme it is hard to see how the data will be kept secure against the Chinese spy agencies. And they will demand, the system is there and obviously available.

Plus, who would be stupid enough to use that protocol? It is sending bright flashing messages saying "we're reading your emails, mate!". Only people who were legally compelled to use WhatsApp would be reachable; everyone else would move to some other system.


Modern messaging protocols, including the Signal Protocol used by WhatsApp, use Diffie-Hellman key agreement for Forward Secrecy. DH requires an exchange between two active parties, who will then agree on an ephemeral session key. Ideally the session key is deleted once it is no longer being used, rendering any captured cipher texts useless.

While we could encrypt session keys under an escrow key that the authorities control, that's a very serious degradation of forward secrecy. If an authority's escrow key is ever compromised, then every session whose key was escrowed under it is also compromised. Non-negotiated keys that are re-used are also inherently more vulnerable to cryptanalysis, so it's an invitation for trouble if any cryptographic weaknesses are found in the escrow scheme. These are technical considerations.
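
To make the forward-secrecy point concrete, here's a toy sketch of ephemeral key agreement (just the general idea, using Python's 'cryptography' package - not the actual Signal Protocol, which adds ratcheting on top):

    # Toy illustration, not the Signal Protocol.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_session_key(own_private, peer_public):
        shared = own_private.exchange(peer_public)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"session").derive(shared)

    # Each side generates a fresh (ephemeral) key pair for this session only.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()

    k_alice = derive_session_key(alice, bob.public_key())
    k_bob = derive_session_key(bob, alice.public_key())
    assert k_alice == k_bob  # both ends now share a session key nobody transmitted

    # Forward secrecy: once both sides delete the ephemeral private keys and the
    # session key, recorded ciphertexts can no longer be decrypted by anyone.
    del alice, bob, k_alice, k_bob

    # Escrow would mean keeping a copy of that session key (or wrapping it under a
    # long-lived authority key) -- which is precisely the property you give up.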


WhatsApp already degraded their crypto to achieve their own political ends (they restrict forwarding in order to slow down the propagation of "rumors", which in a textbook e2e crypto scheme wouldn't be possible) [1]. So "it would be weaker" isn't a good argument, they already accepted it.

The other objections are all Type 1 (it would be inconvenient).

[1] https://faq.whatsapp.com/1053543185312573


Impossible to enforce, maybe. Include a counter on each message, incremented by one locally if forwarded. Completely insecure against malicious clients, but the threat model doesn't feature malicious clients so it works well enough.
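
Roughly like this - a hypothetical sketch of that counter, which travels inside the end-to-end encrypted payload and is only ever enforced by honest clients:

    import json

    FORWARD_LIMIT = 5  # made-up limit, for illustration

    def new_message(text):
        return {"text": text, "forwards": 0}

    def forward(msg):
        if msg["forwards"] >= FORWARD_LIMIT:
            raise ValueError("forward limit reached (honest clients refuse)")
        return {"text": msg["text"], "forwards": msg["forwards"] + 1}

    # This dict is what gets end-to-end encrypted; the server never sees the
    # counter, and a modified client could simply ignore it.
    payload = json.dumps(forward(new_message("hello"))).encode()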


You’re making the original point - you can have encryption that can include the NSA, but it would also end up including other spies too, as secrets tend to leak the more important and widely used they are. The original point was not that you can’t have a secure leak, but that the secure leak ultimately wouldn’t stay secure forever. You don’t want to build the weapons your enemies end up using against you, and in digital ecosystems it’s often trivial to do this.


AFAIK, Apple and Google have never lost a private encryption key. I see no real reason to think that won't continue forever.

And ultimately keeping private keys secure is extremely easy compared to securing an OS, the app store ecosystem, and messaging app. Finding a zero day there and exfiltrating messages from a phone seems far more likely than a key being lost. We see zero days all the time and the number of lost private keys is something close to 0. If you trust them with the entire chain adding a key to decrypt messages is not a meaningful additional risk.

This is about selling phones and not some moral stand by Apple. They're perfectly happy handing over data to the Chinese government. E2E encryption denies access to Western governments, whose legal systems provide (somewhat theoretical) protections, while data keeps being handed over to an autocratic government that runs literal concentration camps.

E2E encryption is not a meaningful increase in security and it denies society the legitimate tools it needs to enforce laws. The practical effect is that criminals get away with a lot more crime while legitimate usage is not any safer.


Even if the system behaved like PRISM - with chat apps and network operators using the client-side apps to scan for keywords on specific accounts and report back, similar to how child-porn filters might work today - that would be an end-run around E2E encryption that doesn't require transmission of every message. The risk is that the system itself might end up in a compromised state where any nation could request records of any device and cite national security as the reason. And that assumes the system is designed securely, using asymmetric encryption and unleaked keys; there's still data storage on the other end to worry about. I get it though: it's possible to dive down a rabbit hole where you continuously think up technological ways that this could happen securely and prevent attack vectors as they come up. The point I'm trying to make is that it is indeed a political problem to prevent technology from being abused. Making zero exceptions is still more technologically and politically secure than making even one exception for trusted government use, unless you trust every government.


That is a political argument though, not a mathematical one.

And a technical counterargument would be that even computer science is used to solutions that aren't perfect, but whose likelihood of failure is a function of effort, so they become practically usable once you push the likelihood of failure into the region of "would take more than the age of the observable universe", etc.

Examples include: efficient primality tests, UUIDs, asymmetric cryptography itself.
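
Back-of-the-envelope numbers for the kind of failure probabilities involved (not a security proof, just arithmetic):

    from math import exp

    # Miller-Rabin: a composite survives one random round with probability <= 1/4,
    # so k independent rounds bound the error by 4**-k.
    k = 64
    print(f"Miller-Rabin with {k} rounds: error probability <= 2^-{2 * k}")

    # Random (v4) UUIDs contain 122 random bits. Birthday bound: the chance of any
    # collision among n of them is roughly 1 - exp(-n*(n-1) / 2**123).
    n = 10**12  # a trillion UUIDs
    p = 1 - exp(-n * (n - 1) / 2.0**123)
    print(f"chance of any collision among {n:.0e} random UUIDs: ~{p:.1e}")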


I feel like it's worth linking here to the big report from a few years ago: https://dspace.mit.edu/bitstream/handle/1721.1/97690/MIT-CSA...

Probably the most relevant part from the summary is:

> Third, exceptional access would create concentrated targets that could attract bad actors. Security credentials that unlock the data would have to be retained by the platform provider, law enforcement agencies, or some other trusted third party. If law enforcement’s keys guaranteed access to everything, an attacker who gained access to these keys would enjoy the same privilege. Moreover, law enforcement’s stated need for rapid access to data would make it impractical to store keys offline or split keys among multiple keyholders, as security engineers would normally do with extremely high-value credentials. Recent attacks on the United States Government Office of Personnel Management (OPM) show how much harm can arise when many organizations rely on a single institution that itself has security vulnerabilities. In the case of OPM, numerous federal agencies lost sensitive data because OPM had insecure infrastructure. If service providers implement exceptional access requirements incorrectly, the security of all of their users will be at risk.


The third 'no' is really well shown in https://www.youtube.com/watch?v=BKorP55Aqvg.

(A video about an expert being asked to do the impossible by people who have no idea what they are asked.)


Turns out it might actually be possible?

https://www.youtube.com/watch?v=B7MIJP90biM


That's awesome.


> we can make it secure, or we can let law enforcement have access, but that means the Chinese can get in too

How is this in category 3? Give decryption keys to your own government and not the others. I get it if you think it'll require more opsec from your government than you might trust them with, or that it'll have other negative effects (see: category 2), but how is this physically impossible (category 3)?


You must have never heard of intelligence agencies, and spying in general. People vastly underappreciate the value that intelligence gives and the lengths to which they'll go just to get access to information. This is as old as humanity, for example Sun Tzu clearly wrote that useful spies are the best compensated employees in the entire government.


If you give the key to every government that demands it (USA, UK, France, Germany, Japan... ), and every agency (CIA, NSA, FBI, DHS, DEA...) then how long will that remain secure? The key will leak and then you have no security.


You only give it to the government entities your country is asking you to. Heck, you could even encrypt each user's data with their own government's key.

Edit: and key rotation is also a thing.
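
A rough sketch of what "key rotation" tends to mean in an envelope-encryption design - only the small wrapped key is re-encrypted, not the bulk data (hypothetical, using Fernet from Python's 'cryptography' package for brevity):

    from cryptography.fernet import Fernet

    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(b"user data blob")

    old_escrow = Fernet.generate_key()
    wrapped = Fernet(old_escrow).encrypt(data_key)   # escrowed copy of the data key

    # Rotation after a suspected leak: unwrap with the old escrow key, re-wrap
    # with the new one; the bulk ciphertext is untouched.
    new_escrow = Fernet.generate_key()
    wrapped = Fernet(new_escrow).encrypt(Fernet(old_escrow).decrypt(wrapped))

    assert Fernet(data_key).decrypt(ciphertext) == b"user data blob"

    # Caveat: rotation only helps going forward; anything already exfiltrated
    # under the old escrow key stays readable.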


I would assume that the issue is that the mere idea of decryption keys that can be "given" means there is no end-to-end encryption.


That can't be what they're saying, since asking for "let us access this" is already assuming no end-to-end encryption. The other governments would have nothing to do with the argument in that case.


There are different degrees of “secure”, right? Maybe you could give a key to the FBI without China getting ahold of it, maybe it’s still “secure enough”. But you can’t say it’s “just as secure” as not giving it to them. And that’s what law enforcement often asks for: Give us access without making it any less secure.


> And that’s what law enforcement often asks for: Give us access without making it any less secure.

No, I don't buy that. Maybe I'm going out on a limb here but I'm gonna say this is almost certainly a strawman caricature of what they're being asked, not what they're actually being asked. Law enforcement isn't stupid and people (especially law enforcement) understand that pretty much nothing in this world works in absolutes. They probably don't think the decrease in security is significant, but everyone (heck, even a kid) understands that the more people have access to something, the less secure it is.


I think it must be category 2, given that you’ve gotten a bunch of very strenuous type 2 objections, and no type 3 ones, in the comments they have responded to you.

Actually I think there are almost never type 3 objections. Almost every law is something of the form: “Do this, pay fines, or stop providing your service here.” Of course, the “do this” might be impossible, but there’s no mathematical contradiction in the idea that a company can be run out of business.


It's type 3 because there's also a requirement that there are NO unintended circumventions to security. That sounds like "hard to circumvent", but it's not the same.


I think the third type of “no” just sounds a lot more confusing or contradictory to the engineers and programmers who are tasked with trying to implement a solution that both fits the requirement and allows the company to continue doing business mostly as it did before.

Of course, on the other side, the full command is “change how you provide your service in this way that we’ve specified, or stop providing it.” There is no mathematical contradiction or impossibility here, they just don’t mind if they yank away your livelihood, don’t let them off the hook by assuming they are stupid (they might be, but they probably have someone clever who can feed them enough car analogies…).


Maybe stop listening to Big Tech companies and start listening to NPOs like the EFF or ISOC?


Some of these show a misunderstanding of the situation.

> Most of the Canadian tech and indeed media industries pointed out how stupid this was, and Google and Meta said that given the choice, they’d stop letting news appear rather than pay a fee they could not control and that had no economic basis. The government thought this was the first kind of ‘no’ and a bluff, but actually, it was the second kind. Oops.

This was part of the intent. It was not a mistake. The goal of the Canadian government has always been to build up Canadian media. Keeping out big foreign companies is in line with that.


Sort of a "oh no! well anyway..." sort of situation


The move away from globalization has had an unfortunate side effect: greater political corruption, where local politicians enact laws for the benefit of their local corporate buddies, and pretend (usually successfully) that it is an act of nationalism and standing up to big tech.

That’s what happened here in Canada with Bill C-18, as mentioned in the article. It’s been rather sickening to watch the government defend this bizarre law, which is little more than a shakedown for their buddies at Bell and Rogers.


This has always followed protectionism, I'd guess. Really the solution would have been to socialize the domestic losses caused by globalization (retraining workers, etc.), but there seems to be no interest or political will towards doing that.


The problem is that for a lot of places, globalisation seems to mean, paying money to US companies and harming the local tax base.

If the money is shipped overseas, how do you pay for the socialised losses?


Who are Bell and Rogers? Some sort of newspaper firm? Plus, while corruption is possible, protectionism is the less greasy term, and equally likely to be driving things.


They are the dominant telecom companies, but they are much more than that here and have had the politicians in their pockets, protecting them from competition and funneling tax-payers money to them, for years.

You might be puzzled that I mention them in relation to a bill that is supposed to be for journalism - but the official analysis of this bill confirms that the majority of the money actually goes to them. That is normal here.


I assume Bell Telephone (and its descendants) and Rogers Communications (https://en.wikipedia.org/wiki/Rogers_Communications)


BS. “Tech” as described here says no for one reason and one reason only. It will lose them money. Sometimes it will lose money over the short term. Other times over the long run.

That’s the only reason “tech” says no.

Individuals may say no for some of the reasons mentioned. But this article is basically describing companies and yes, they say yes if they think they will make money and no if they will lose money.

And this isn’t even a bad thing. Companies as a legal fiction exist for the purpose of making money.

What is a bad thing is people thinking that companies (especially public for profit ones) act on any basis other than whether they will make money or not.

Which is why this article is an f’ing disaster and naive beyond the extreme.


Back at university, one of my lecturers had a story about having to convince a company to cancel a contract they'd signed with a different lecturer, call them Bob, because the software Bob had agreed to write had provably impossible performance characteristics.

I forget the details, I think it was not quite as bad as "O(1) sorting for any length list" (not even 100% sure they actually told us the specifics) but it was something along those lines.
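
Not the details of that contract, obviously, but for a flavour of why some performance promises are provably impossible: any comparison sort has to distinguish all n! input orderings, so it needs at least log2(n!) comparisons, which grows like n*log2(n). A quick check in Python:

    from math import lgamma, log, log2

    def min_comparisons(n):
        # log2(n!) via the log-gamma function: lgamma(n+1) = ln(n!)
        return lgamma(n + 1) / log(2)

    for n in (10, 1_000, 1_000_000):
        print(n, round(min_comparisons(n)), round(n * log2(n)))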


Are Apple, Meta, and Signal saying “no” to the UK’s demand for secure-to-GCHQ backdoors in E2E purely to defend their margins? Of course not; they’re saying “the laws of mathematics trump the laws of the United Kingdom”. Apple isn’t promising to pull iMessage (a huge part of their platform lock-in!) from the UK because they want to avoid the expense of complying with this law. Even a cynic would say that they’d love to comply, if only there was a way.

But they literally can’t.


How is it that none of these are “no, because that hurts our bottom line even if it’s good for the public”?

The framing in this article suggests that corporations and societies are usually aligned in their interests. This has not been my experience.


Either it's so obvious that it wasn't mentioned, or it's blatantly not mentioned in some bizarre gaslighting gambit.


>First, and this is the default, they’re saying no because they just don’t like it.

>Second, though, the tech industry (or the doctors, or the farmers) might be saying no because this really will have very serious negative consequences that you haven’t understood.

>If the second kind of ‘no’ is ‘that’s a really bad idea’, the third kind is ‘we actually can’t do that’.

I think a big industry is plagued by the first case. On the other hand, a specific tech worker is more prone to say no because of 2 and 3.


Where's the kind of "no" that means "that doesn't align with our business model, so we'll fight it tooth and nail"?


IMO, it’s option (1) if the effect is small enough. Option (2) if there are serious knock on effects for society.


Don't worry lads, there's always "consultations".

It's always so bureaucratic and processy that we can pretend it functions like a democracy, where the will of the people is what matters, and not the will of lobbyists and elite-class politicians who've never had to deal with rent prices and economic struggles.


The encryption thing seems like a bit of subterfuge because, while using bad encryption wouldn’t do what some lawmakers want, an app update could do whatever you like that’s possible with a code change.

It’s a whole lot more convenient to pretend that you can’t think of a way to do it.


Great article! Reminds me of the “Dear CEO: software is not magic” article in ACM Queue: https://queue.acm.org/detail.cfm?id=3325792


Cookie notifications forced down our throats.


lol... the option to not create a pervasive surveillance state online was there. The big players didn't want it.


You are naive to believe this solves any issues.


Using a directional antenna to steal wifi after cracking people’s weak passwords, browsing through Tor, and not permitting javascript to be executed in my browser solve my issues.

I am an enjoyer of the cookie dialogue because I am a sadist and love the suffering it induces.

/s

There will be a lot of bad regulation before we get good regulation on the internet. Our lawmakers are still learning. If “move fast break stuff” is good enough for tech it’s good enough for the guys regulating tech.



Cheers for the links.

In my experience attempting to evade tracking by trying to mitigate these approaches will often get you a "you look like a bot" message from websites (by which they mean, "your HTTP requests do not permit us to track you").


There's another kind of 'no' - when the programmers and engineers and other technically expert workers are nearly universally against something, but the CEO- and lawyer-classes are for it. Like software patents.

Edit: Sources for the patent claim: "That same afternoon, we talked to a half dozen different software engineers. All of them hated the patent system, and half of them had patents in their names that they felt shouldn't have been granted. In polls, as many as 80 percent of software engineers say the patent system actually hinders innovation. It doesn't encourage them to come up with new ideas and create new products. It actually gets in their way." - https://www.npr.org/sections/money/2011/07/26/138576167/when...

https://opensource.com/law/11/4/poll-patents-and-innovation


I know a few engineers who are quite proud of their software patents. And many more who seem ambivalent, but are happy to file them if they get a bonus out of it. So, I don't think it's as clear-cut as you think it is.


Engineers love nipping at the hand that feeds them, and talking moral while they profit from what they oppose.


They did say 'nearly universally.' I don't think what you said contradicts that?


It's not as nearly universal as he thinks. That's his point.


Did he ask every software engineer? How did he take this poll?


It's hard to claim that SW engineers are against SW patents since ICs are responsible for submitting and helping to complete the vast, vast, vast majority of the patents filed by big tech.

As with a bunch of the sketchy-as-hell "tech" businesses, a lot of tech people have zero moral compass if it makes them an additional $.


I agree with the principle of your argument, but not with "zero moral compass". You may have a moral compass that goes against software patents in general, but it would require a very strong one to sacrifice your ability to advance your career and your family's quality of life on this altar.


There are financial and career incentives for filing patents. Filing a patent, and being against software patents in principle, are perfectly compatible. Like proponents of FOSS using proprietary software.


Okay, so they made the claim that nearly every software engineer is against it. What is the proof?


Or, even more banally, open office floor plans.


if only the entire workforce of ad tech would just say no.


You're welcome to offer a more attractive alternative, if one exists. The market is always ready.


A more attractive one would be the end of invasive tracking. According to those involved in ad tech, there is no market without the tracking, so... the alternative is no ad tech.


Strange how there are ads in magazines, on billboards, TV, and radio, without tracking. Especially since they're so much more expensive to place.


> Strange how there are ads in magazines, on billboards, TV, and radio, without tracking.

But are they?

Magazines are dying, and ads placed in them do their best to make you hop into the digital realm, where you can be tracked - think QR codes, "visit https://...", etc.; billboards likewise. TV manufacturers are forcing "smart" TVs down customers' throats, and proper radio is a thing of the past - to the point that people get away with calling web streaming "radio", as if that bore any relation to broadcasting EM waves. Entertainment is generally consumed online, and legacy media are either dying or retrofitted into mere shells of the legacy experience around the online core.

It's a subtle thing, really, that people too often miss. Yes, the leaflets are still the same dumb, analogue paper they were 30 years ago. But that QR code on them, should you scan it, is what plugs you into the surveillance economy.


> Magazines are dying

If they are, nobody can tell: https://www.statista.com/statistics/207850/total-gross-magaz...

> and ads placed in them do their best to make you hop into digital realm, where you can be tracked

So, the ads themselves don't track you. You're primarily concerned because... it encourages people to do things that could result in them being tracked?

This seems like a bit of a stretch to me. The original statement I was replying to was the lament of ad tracking still existing. Even if ad tracking didn't exist though, you would still be constantly confronted with non-tracking ads that are potentially even worse. The proof is across every highway and Nascar wrap and back-date newspaper you collect: we put ads on damn near everything. Tracking or not, people just pay to put content in places. Publishers think it's a fair deal. Unless the Free Market creates a more attractive alternative, you're more helpless than the people in hell begging for ice water.


> So, the ads themselves don't track you. You're primarily concerned because... it encourages people to do things that could result in them being tracked?

Given that the majority of the population does not understand any of this, it's effectively the same.


That's irrelevant. The existence of non-tracking ads, ever, proves it is a viable business model, absent competition from tracking ads (if e.g. regulation banned them). That magazines and TV are not competing well with websites and streaming does not affect this.


That's odd because there are tons of engineers with software patents.


Meaning that it's a necessity to have them due to the system, not that they like it.

Systemic issues usually create these dissonances, go read about it to become a bit less dull.


> Like software patents.

Considering AI folks want to steal people’s work without honouring licensing, a lot more will be in favour of patents.


Call me a cynic, but I think a patent regurgitation machine will more likely have a successfully defended patent against the other AI makers than be shut down for the patent infringement it enables.


> but you cannot have encryption that can be broken only by our spies and not their spies. Pick one.

This doesn't seem true in practice. I would look at PRISM as an example: https://en.wikipedia.org/wiki/PRISM


If you want PRISM to be evidence that you don't have to "pick one", you don't just need to demonstrate that the US could access encrypted content, you also need to demonstrate that no other nations could exploit any part of it.

That includes non-American courts making orders to use the same actual processes and procedures as PRISM, and which we might not even know about because those countries have not yet had their own version of Snowden revealing it.

One thing I wonder with "secret" courts (and court orders where you're de facto ordered not to discuss the order even with your attorney like Lavabit was):

How do you know if it's a legit court, and not an elaborate spear phish?


This is quite off-base, unfortunately. Don't read it as fact, but as a vibe, and take note it's an infohazard if you have any managerial responsibility in a tech company.

Among many, many, things, the EU was not beat back on messaging interoperability -- in fact, it's due to announce the draft list next week.

And it wasn't messaging interoperability -- it's a set of requirements for gatekeeper services with over 45 million users.

I didn't expect this from Ben, but when I think back, I haven't seen much work from him over the last 5 years. Did he drop out of industry?


Originally, the EU had one paragraph on this. That paragraph literally said that you had to let 'any' third party interconnect. That turned into a separate 50 page draft once messaging people explained that 'any' meant spam farms and Chinese intelligence agencies, and a bunch of other stuff besides.

And no, nothing happened to me ;)


> The Canadian government told Google and Meta that if a link to a newspaper story ever appears in search, or if a journalist ever posts a link to a story on Facebook, then they have to pay the newspaper for sending business to the newspaper.

This is first-class bullshit. Google and Meta made their money by co-opting third-party content, concentrating audiences into their own portals, and selling advertising against them - often through algorithmic dark patterns. This is a disreputable business model that’s as old as iframes, but somehow it’s acceptable for these gigacompanies to do it.

To characterise this as “they have to pay the newspaper for sending business to the newspaper” is just buying into the kind of self-interested “no” that the article purports to discuss. They systematically took these audiences away from the content providers, and then started to charge a premium to the providers for placement in the algorithms. Meta has been the most egregious of these but Google has been moving in this direction for years.

These companies have more than enough money to pay for their content, but that would break their protection racket business model.


Or: newspapers opted into the Google + Facebook ecosystem because it was beneficial for them to do so, customers choose to use Google and Facebook because they provide tangible user benefits, and competition is just a click away yet most newspapers fail to deliver a competitive value prop to visit/subscribe directly compared to the traffic they get from Google/Facebook. The reality is that the newspapers derive great value from Google/Facebook traffic (lets them get more ad money + subscribers) whereas the value of the newspapers to Google/Facebook is marginal as they are just more content in a feed/aggregator. There's no racket going on here just incentives/products/reality all around. If anything the Canadian government's asks are the real racket/shakedown.

I recommend reading this article from Stratechery if you want a more thorough breakdown of the incentives from when Australia tried to do something similar: https://stratechery.com/2020/australias-news-media-bargainin...


> and competition is just a click away yet most newspapers fail to deliver a competitive value prop

There are many points where big tech abuses its monopolistic advantage, making competition unfair.


I don't disagree there are some instances of this in other areas, but in the case of links to news websites I don't think there's a meaningful argument against customers choosing which URL to visit as it relates to competition.


FB + Google have a monopoly on link discovery; customers can't open a news URL unless they visit Google or FB, watch all their ads, and give them data.


That's bad behaviour against users, not bad behaviour against newspapers.


For newspapers, a useful data point would be to check how many users click through to the URL after reading the title + snippet on Google or FB.


Huh? What does prevent Joe User from typing in https://www.nytimes.com?


The problem is with discoverability, i.e., knowing that you want to type "nytimes.com" instead of "theonion.com", or even "amazon.com".

Most people start at a search engine with keywords because that's how we've trained them for the past 25 years.


Figuring out the exact "nytimes.com" address without using Google - unless he somehow has all 100M web sites in his memory.


Buy a newspaper - the URL is right up there in the masthead. Alternatively there is guessing the domain, which is how we found our stuff back in the day.


Fortunately, these days if you guess wrong, the browser will just open the results of a Google search for the string you just typed in (what used to be) the address bar.


Sure, you can even have friends and exchange paper letters talking about news, but 95% of people use Google, and that's why it is called a monopoly.


There are legitimate criticisms to be made about those practices, but which exact anti-competitive practices are employed by Google/Bing against newspapers, in particular?


I guess taking title + text snippets from original authors, and monetizing them without consent.

Whether it is anti-competitive is up to legislation, which looks like it may change.


The title is almost certainly fair use, but I feel that if the newspaper wants to shoot itself in the head by not sharing the text snippet, search engines and Facebook should oblige them.


fair use or not it is up to legislation to decide.


The law is essentially a tax on links. His characterization of it is accurate and is the same as any in depth analysis I’ve seen. If you want a really good, independent analysis read Michael Geist, who is a law professor at Ottawa.

Sounds like you are letting your hatred of meta and Google warp your analysis of this bill (which is what the government is counting on).


Or, my understanding of the absolutely critical importance of media in a democracy is making me angry because we are killing it in favour of a couple of huge corporations who couldn’t give a shit about government, and often work actively to undermine it.


If you are correct, then those newspaper businesses will now enjoy an economic upturn, since Google and Meta no longer allow links to them on their platforms. But I think we'll see it hurt the Canadian businesses instead. At the very least, it will hit the small independent companies very hard, since it was the primary method readers found them in the first place. This may have the unfortunate side effect of reducing the number of players who can meaningfully contribute to public discourse in Canada.


> If you are correct, then those newspaper businesses will now enjoy an economic upturn, since Google and Meta no longer allow links to them on their platforms.

How does that follow? My position is that Google and meta have monopolised the audience. If links to news are no longer published then obviously the traffic is going to go down.

> But I think we'll see it hurt the Canadian businesses instead.

Obviously - that’s exactly why they’re doing it.


> My position is that Google and meta have monopolised the audience.

Meta have said that people don't come to them for news - [0]. Do you have numbers to counter?

What prevents Canada's government from creating a news portal and then promoting it everywhere, including on Google and Meta, telling people to go to that website for news? People who are looking for news will surely visit that site, since they now spend less time on FB. Call it news.ca. Of course big businesses will benefit. The smaller ones will inevitably suffer.

> Obviously - that’s exactly why they’re doing it.

Seems to be the government's intention too. They've been very well aware of it.

[0] - https://about.fb.com/news/2023/06/changes-to-news-availabili...


> Meta have said that people don't come to them for news - [0]. Do you have numbers to counter?

I think the question is does Meta have any numbers to this effect? Their quote is literally: "In contrast, we know the people using our platforms don’t come to us for news."

Meta seems to want it both ways as they state elsewhere: "Though news article links make up less than 3% of what Canadians see in Feed, we estimate that we sent Canada News Page Index-registered publishers more than 1.9 billion clicks in the last 12 months ending in April 2022." So it's both a tiny part for Meta, but also a big part of the Canadian news ecosystem.


The self-serving question "do users go to Facebook for news?" is asinine. Of course, most people don't. But if they get their news while they're using Facebook, then they have less incentive to seek out other news sources.

The same is true for all types of content, but reliable journalism is a key pillar of democracy, and we are watching the many different ways that democracy gets ruined when we don't take it seriously.


There's a lot of rhetoric, so let's get really specific about how these laws are supposed to work.

You do a search. You get ten blue links. One of them is to a newspaper website. Google has to pay that website - but not any of the other 9 websites.

You see a cool web page. You post a link on Facebook for your friends and say 'hey, read this!" If it's a link to a newspaper, but not to any other kind of website, Facebook has to pay them.

Google and Facebook are not 'co-opting their content.' You need to be really clear about this - we are talking about links. In Meta's case it might be a link posted by the newspaper itself.

Let's extend the principle here: someone posted a link to my website on Hackernews - so Hackernews has to pay me. Really? They post it on Reddit and Reddit has ads, so I get a percentage of the ad revenue. Really??


All of this is arguably true, but conveniently forgets Google News and FB’s algorithmic feed, among other products.

There is so very much wrong with the way search works - its opacity, the enormous reach of Google advertising, the way Facebook manipulates sentiment to sell advertising.

This law may be clumsy, but lazy rhetoric is still bullshit.


Google News is tiny and irrelevant - that's not what the law is about.

Again - do you really think that if you do a web search, and one of the results goes to a newspaper, then the newspaper should get paid? And none of the other links?


> Google News is tiny and irrelevant - that's not what the law is about.

According to SimilarWeb, Google News ranks #114 globally (all websites), and #7 in the News category.

https://www.similarweb.com/website/news.google.com/#overview

It is much larger than WAPO, and not far behind NYT. Which makes it significantly larger than any news provider anywhere else in the world.

So maybe I'm weird but that hardly seems "tiny and irrelevant".

> do you really think that if you do a web search, and one of the results goes to a newspaper, then the newspaper should get paid? And none of the other links?

From the context, I doubt that you're arguing that everyone should get paid, so I'm not sure what your point is, because I don't see why they shouldn't.

If you do a web search and one of the results goes to a newspaper, and you click the result, then the newspaper can monetise that. I don't have a problem with that.

If you do a web search and it presents enough information that the user can answer their question without clicking, well, that's an interesting problem. The answer would not exist if not for the provider, yet the search engine makes money and the provider of the result does not. This is particularly true when the search engine pops up a news box as a result.

So if a search result provides useful information to a user, and the search engine makes money from providing that answer despite it being sourced from elsewhere - why shouldn't the source of that information be compensated?

As I said above, this law may be clumsy - it might not even be possible - but my objection is to the lazy rhetoric surrounding the issue, which is still bullshit.


> They systematically took these audiences away from the content providers

It sounds like those audiences have an equivalent chance of organically discovering that news without centralized search engines or news feed systems? Then what's the value of those systems? If they don't provide any value, why don't those audiences just go to the news website instead of clicking a link from Google/Facebook/whatever?


> They systematically took these audiences away from the content providers

Let's not get carried away here. These newspaper businesses choose to upload their content to the open web. By making and distributing public links, they are accepting the fact that others will do the same. They didn't take anything that wasn't on the table.


Posting your content on the open web doesn't give permission for somebody to lift the content from your site and display it next to their own ads.

(I don't know why I'm defending news here, they could have fixed it themselves years ago. )


Come on, when you say "lift your content" you mean show a headline, a picture and maybe the top line of the story with a huge link back to the original website?

90% of the time that data is published in "Open Graph" format by the website so it looks nice. The website is going out of its way to make links to it on Facebook look pretty.
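
For anyone unfamiliar, this is roughly how a link-preview scraper reads those tags - the headline, blurb and image are whatever the publisher itself chooses to advertise (stdlib-only sketch; real crawlers are obviously more involved):

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class OGParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.og = {}

        def handle_starttag(self, tag, attrs):
            if tag == "meta":
                a = dict(attrs)
                prop = a.get("property", "")
                if prop.startswith("og:"):
                    self.og[prop] = a.get("content", "")

    def preview(url):
        parser = OGParser()
        parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))
        # Typically og:title, og:description and og:image -- exactly the headline,
        # blurb and picture that show up next to a shared link.
        return parser.og

    # print(preview("https://example.com/some-article"))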


> They systematically took these audiences away and then charge a price to get the audiences to return.

How exactly does the existence of search take audiences away?

Or do you mean the 15-word clickbait headline blurb takes audiences away? If your news story isn't worth reading after reading the headline, it's probably not worth reading.


> How exactly does the existence of search take audiences away?

Oh, that one's easy. Because the old media companies used to curate which stories people see, and now the search engines do it, but do it in a way that also shows competing sources. So then legacy media outlets lose a lot of traffic -- and political influence -- to random blogs and social media.

Somewhat ominously, Google has mostly stopped showing the latter in search results. If you like conspiracy theories, this happened not long after those companies filed a lot of lawsuits and pushed for legislation against them, and once they did the number of lawsuits and proposed legislation declined.


Google thinks it's worth including in the index.


What kind of argument is that? Google's mission was literally to "organize the world's information" - of course they would include newspapers in the index, irrespective of any opinion about the articles.


The part you quoted and called bullshit is factually correct though. CAN has told Meta and Google that if they show links to news sites (which do drive business to those sites) then they have to pay those news sites. It’s not a characterization, it’s what has actually happened. How can you deny that?


The part that’s bullshit is this:

> they have to pay the newspaper for sending business to the newspaper

That’s an opinion.

Another way to explain the situation is: they are being forced to pay for content that they’ve monetised at the newspaper’s expense.

There are several other ways to view this situation, but Ben Evans has decided to push this version, which makes it sound like big tech is being somehow generous by sending traffic in the first place.

In fact, Google and Meta in particular have been pushing news producers against the wall for years.


They have to pay for showing links, and those links do send traffic to news sites. That is indisputable.

Your phrasing is just incorrect, honestly. Linking to a website is not at all taking the site’s “content”. And monetizing search results doesn’t do anything “at the newspaper’s expense”; what expense has a news site incurred by someone sharing a link to their site?


And only made possible by the fact that levels of neutrality were enforced lower in the stack. These tech giants would not have been possible if the telecoms companies had the power to leverage their platforms for advertising.

That's not to say the social media model wasn't revolutionary. It was like the emergence of the Gnathostomata. But it seems crazy that people's online identities are tied directly to the companies trying to manipulate them.


Then how come the media backtracked this exact law in my country (which actually came into effect) as soon as they saw their audience plummet?



