I think this trend of preventing people from copying is worrying, as:
- people might have a legal right to make copies
- it opens up a lot of potential for abuse, as it's harder to save proof of abuse.
- it gives people a false sense of security (e.g. when sexting)
Couldn't they instead e.g. display low-resolution images for screenshots, or something similar?
That should be good enough for artists, in my experience. (I mean, there are artists who live-draw on e.g. Twitch the very art they then sell; it works because the image quality you can easily extract is just not "good enough" for most potential buyers. And if we throw AI sharpening tools at it, then we could also throw tools at it which circumvent Telegram's protections.)
Deleting messages had already been supported in Telegram for years. They just added the option to partially delete a whole conversation (whereas before you could delete only the full conversation, or messages one by one).
How so? You could already delete the whole conversation to avoid leaving traces, or individual messages if you want to create a misleading log. What specific tactics does the new method open up?
In e.g. abusive relationships, severe (and sadly often effective) cases of gaslighting are pretty common; similar tactics are often applied in other situations, too.
And fully deleting messages opens up quite a few possibilities for that.
If they instead just made the message unreadable, i.e. left a placeholder behind, this would be a different thing.
Just tested this. In a group with this enabled, indeed, the iOS app results in either a black, transparent, or otherwise missing screenshot of the area of the screen displaying the media. In the few minutes I played with it, I wasn't able to bypass it.
However, the chat is viewable in Telegram Web (K). Web K even offers to download the picture, which actually downloads the file.
Additionally, Telegram Desktop for Mac similarly allows screenshots as usual and isn't affected.
In the past I have seen some other apps implement screenshot prevention on iOS using something that "cloaks" the screen when the button combination is detected, and when the OS is about to background the app. This is the method Fido My Account (mobile carrier in Canada) uses (the current version allows screenshots, but blocks the info from the app switcher).
The method Telegram is using appears to be rather seamless, as it does not show the screen going black or anything when the screenshot is taken. Stack Overflow seems to discuss two solutions, one using DRM video, and another using some password field hackery. https://stackoverflow.com/questions/18680028/prevent-screen-...
This is the most likely case, as the behaviour is similar to what happens when you take a screenshot of DRM-protected Netflix content; it appears the content is drawn by the hardware and not in the OS framebuffer, and thus shows up as a black box.
Of course this is not really security, but Telegram, despite being open source on the client, does have in the ToS that apps have to implement the "secure" features of secret chats properly, or risk being blocked. Recently they have been sending info out to bot developers that says apps that don't implement the ads in channels will be blocked as well, but I'm not sure if they will really enforce this.
"We ask that you make sure that these sponsored messages are supported and properly displayed in your app by January 1, 2022. Unfortunately, Telegram cannot financially sustain apps that support Telegram Channels but do not display official sponsored messages – such apps will have to be disconnected."
> Just tested this. In a group with this enabled, indeed, the iOS app results in either a black, transparent, or otherwise missing screenshot of the area of the screen displaying the media. In the few minutes I played with it, I wasn't able to bypass it.
Screen recording is handled in an interesting way. If you start the recording while in the Telegram app, it just freezes after the 3-second countdown and never starts. It times out after about 10 seconds. If you switch to another app before that, it starts the recording, but if you try to switch back to Telegram the recording freezes halfway through the app switcher animation. Protected media is also blanked out in the app switcher.
It's different from apps using DRM like Netflix, where it usually just shows a notification saying "failed to start screen recording".
I just tested it again, and as the user below notes, it has some way of interfering with screen recording. However, this is not 100% reliable, as I was able to create a screen recording of the media in the group's media overview page and zoom in as usual.
The trick was to start the screen recording outside of Telegram, then open it (interestingly, if you're on the actual chat page it will freeze the recording), but once you get to the media overview page, the screen recording works again and you can capture whatever you want. Once I was even able to record everything without problems, including the chat, but this is probably only effective in some cases.
As it turns out, regular screenshots also work on that page. My testing was not super thorough I suppose, for something I quickly did while trying to fall asleep...
I'd guess it just uses the same iOS framework parts that are used already for Netflix and other streaming services to blank out the content. In that case screen recording would also be taken care of.
While I'm not a fan of Telegram's security model, I have to say that their dedication to create an awesome product is admirable.
EDIT: Just to be clear given the title, I'm not pro DRM, and we can be pro or against auto-removable content, but I just love how they implement features that their users are asking for and the quality of their apps.
I am amazed at how high-quality Telegram's client apps are. The desktop (native), Android, and even the web versions are feature-full, native, and blazing FAST. As I said once, Telegram is not just the best chat app I have to use, but the best... app altogether on my laptop/phone :-)
The UX is slicker and more comfortable to use than that of WhatsApp (and miles ahead of Signal's). With the latest update, if I got it correctly, even E2E-encrypted chats can be synced to all clients (needs to be verified).
I guess you mean the security model of non-E2E chats. Although you can read into it that they can do MITM, the details reveal that the actual keys/messages are sharded across datacenters in multiple countries, so no government alone can retrieve plaintext from at-rest storage. To me (especially compared to any Facebook product) this is already much better than nothing. If it's an issue for you, opt in for E2E chats and enjoy!
Sometimes I have to use whatsapp with some contacts and it feels like a huge step back after telegram, especially in UX.
I'm a happy Telegram user as well! That being said:
> I guess you mean the security model of non-E2E chats. Although you can read into it that they can do MITM, the details reveal that the actual keys/messages are sharded across datacenters in multiple countries, so no government alone can retrieve plaintext from at-rest storage. To me (especially compared to any Facebook product) this is already much better than nothing. If it's an issue for you, opt in for E2E chats and enjoy!
It's unfortunately not that easy. For one, their promise of sharded keys is something we as users cannot verify, so we still just need to trust them here [0]. WhatsApp, on the other hand, at least attempts to have E2E. Secret chats are an option, but only the mobile clients support them [1] and you can't have them for groups at all. There's also some critique of MTProto, their roll-your-own encryption. I don't necessarily agree, but it's another strange point.
Like you, I like the UX a lot and I have some trust in Durov's motivations, but the security model is questionable.
[0] They refuse to open source their servers, but I follow their argumentation in so far as that this would not help since we could not verify that the published source code is the one running on the servers.
[1] At least the official Linux desktop client and the web clients don't.
I think Telegram for macOS (not to be confused with Telegram Desktop, which also supports macOS) supports secret chats, and on Windows you can use a third party client called Unigram. But it would be nice to have official support for it in Telegram Desktop as well.
For [1], I saw in yesterday's update that the web version can now also sync secret chats.
I don't have my laptop with me (for another month), so I'll have to check this, but based on the redesigned devices dialog, I can imagine that this is now solved.
It's much faster than Discord, especially on mobile. Sometimes I get notifications from Discord, then when I open the app, it takes a few seconds to create a connection and show the message. With Telegram it's just there
> Sometimes I get notifications from Discord, then when I open the app, it takes a few seconds to create a connection and show the message.
The exact same thing also happens on Telegram for me all the time, and I have a gigabit connection. The push notification arrives on my phone instantly, but when I open the app it loads for like 15 seconds before showing everything.
One thing I can't stand is that it's the only messaging app out there without a presence monitor. As soon as I receive a message all of my clients will start pinging.
It's the reason why I avoid it. I talk to someone on the desktop client, yet my phone, work laptop and tablet will ring on every message.
I can't be assed to go and mute 3 other devices every time I decide to talk to someone.
> I talk to someone on the desktop client, yet my phone, work laptop and tablet will ring on every message.
IME the desktop will pick up the message a few seconds before any of the mobile clients do and mark it as read, hence suppressing any notifications on the latter.
ofc chat notifications don't make any sounds on my systems to begin with.
That would be a nice feature, but for me it's not absolutely essential because my phone is nearly always on Do Not Disturb (I only let calls, SMS/iMessage and Viber through, but I still see other notifications when I pick up my phone), and even on my laptop I disabled the notification sound and just left the popup/badge on.
In my experience, the macOS (Swift) app does indeed detect presence: only if I ignore it for a few (albeit short) seconds will it ping my phone, I think.
it does have a presence monitor, but I believe it isn't uniform across the various platforms, and it is quite fickle when it works (as soon as you close the window it marks you as totally offline)
on Android at least, you can set Telegram to limit the frequency of notifications (e.g. "at most 3 every 5 minutes"). It is not perfect, but it is an improvement.
I also don't understand its security model; at this point, also considering automatic message deletion, it seems to me they have been over-optimizing something. XMPP messengers have encryption, can even support forward secrecy, and don't keep messages, among other features. Since smartphones are not the most secure platform anyway, it makes little sense to provide a completely sealed-off messenger on top of that.
So, "Privacy by obscurity"? Who thought that was a good idea? And how do they plan to restrict screenshots on iOS anyway? This is wrong on so many levels. If someone has access to material on their own device, Telegram can only make it inconvenient (but not impossible) to save/forward content, and since it's not real privacy, this only makes people move away from Telegram to, say, Discord or alternatives, just as I and many others around me did.
Good move Telegram. This is how you ruin your otherwise-great platform.
Inconvenience is often good enough. This isn't intended for state secrets; based on the language, it's intended for "creators" to "protect the content they publish on Telegram". Obviously no platform can completely prevent the copying of works; that's how piracy exists, and always will. If I were an artist, I would prefer a platform that at least made it slightly more onerous to copy my things in high quality. A photo of a phone is worth effectively nothing in this context.
This seems like a fine change, people on HN just like to be angry.
The issue, though, is that this will give the sender of a message the impression that what they are sharing will stay private, even though from a technical standpoint there are no such guarantees.
> this only makes people move away from Telegram to, say, Discord or alternatives
I simply do not understand why people are even thinking of moving to another closed platform like Discord. We already have an alternative and I think it's time to embrace it: Matrix
Because all Matrix clients are so terrible that they feel like beginner projects; even the official one is so slow and bloated that it's almost a bad joke.
Comparing Matrix to polished apps like telegram or discord is... misplaced to say the least.
This is the response I least expected here on HN. I could understand it appearing in other communities, but here we are.
We all know how open-source software moves forward. It's us who can put up with the little inconveniences of an open-source app and maybe even improve it by identifying, reporting and implementing fixes. Those polished apps are polished because they have dedicated teams that get paid hefty $$$ to ensure they do not lose their existing user base. In Matrix's case the user base is very small, but I think they have achieved something remarkable: a thriving community that is developing the client and server software required for Matrix to work.
I wonder: if we had had this kind of attitude in the early days of the Internet, we probably would not have the Internet today. By the way, when did you last use any Matrix client?
Well, it's not just about polish. In the case of Matrix, a simple look at their "feature matrix" shows that the only client that seems to be making any appreciable effort at all to keep up with the feature sets that are touted as selling points of the protocol (encryption, modern UI, "Discord-like" experience) is the bloated Electron monstrosity, and independently I have heard from multiple sources that the Matrix server software is so heavy that it struggles to support even a three-digit number of clients on commodity hardware.
This tendency to render your project unusable by choosing inappropriate development tools (perhaps for their seeming friendliness towards inexperienced developers - maybe if the same developers were forced to write in C, their code would simply crash all the time, whereas as it is it "merely" runs slowly and/or leaks memory) is manifestly not an intrinsic feature of open source. If anything, in the old days, open source projects (such as Linux itself) stood out for being more lightweight and performant than their commercial counterparts. In fact, the first time I remember encountering an open source project that was rendered unusable by its bloat was Diaspora (an early attempt to make a federated Facebook/Google+ replacement, written using Ruby on Rails). Perhaps there is something in the fashionable "fix social media" sector that necessitates making development inclusive to those who are more activists than engineers, even if this comes at the cost of sound engineering decisions.
But here is the thing: it's highly unlikely that an open-source app will have polish comparable to a startup's with all the funding in the world.
There are very few open-source applications (desktop or mobile) that are as polished as commercial applications.
This is unlikely to change, but I still think it's worth trying to get as many users as possible onto open-source apps, because the more users an app has, the higher the chance of more developers, and the better the app. Not to mention no ads, spying, or other crap that commercial apps are riddled with. (General opinion, not just about chat apps.)
Feel free to keep advertising Matrix. I'm myself a strong supporter of it.
But to get it into the mainstream, it needs to compete with other messaging apps such as Telegram.
For that, it needs polish. Period. There's no way around it. So if the goal is to get it into the mainstream (and personally, I want it to be), then that is what it needs.
There are loads of ways you can contribute to this:
1. Contribute time, code, designs, general help and volunteering
2. Contribute money via personal means or via work
3. Promote the use of Matrix to individuals
4. Promote the use of Matrix to businesses, eg. at work, or as a backing protocol for messaging where it makes sense etc.
5. Purchase products that make use of or contribute to Matrix
etc. But in order for it to become popular on its own, and acquire a network effect (which it needs, because IM apps are a class of their own), friction to get new users needs to be reduced to near-zero. And for that... polish is needed. That's it.
I have wanted to like matrix for so many years now but every time I use it, it's unbearably slow and things like gestures seem incredibly clunky compared to telegram.
Matrix is good in theory but its UI is still confusing. I'm also bitter that my old account that was closed a long time ago cannot be retrieved or reset, which means that I cannot use my phone number again (it's stuck forever to the closed account).
What are you talking about? You do not need a phone number. I self-host a Matrix instance, and all I did was create usernames and passwords for people I know and share them. They then logged in and changed their passwords. At no point did the sign-up workflow ask for a phone number.
Also, the UI has improved a lot recently. There are a number of Matrix clients that have appeared on the app stores, but I still prefer Element, which gets regular updates and features.
You don't even need to create usernames/passwords in advance - you can just send your friends custom one-time URLs, which will allow them to enter their own desired usernames and passwords.
I'm sure you can contact the admins of matrix.org and ask for the phone number to be released. (I'm assuming you used that instance, as you mentioned nothing else.) I believe they are reachable at support@matrix.org .
You can provide it so that your friends (or possibly enemies or attackers) can use it to find your Matrix ID from the phone number in their pre-existing contacts.
Matrix is terrible for hosting communities compared to Discord. Matrix is only good if you are okay with having a single group chat for everyone to put everything into. A single group chat does not scale, so it's better to use a platform like Discord.
A simple directory of rooms that you can join is still very far from the way Discord works. Matrix is still focused around users joining rooms, whereas Discord is focused around users joining servers.
At least in the mobile app, I didn't even know this existed. To find this directory of rooms, you need to open the spaces drawer. Then you need to open a menu for the space by clicking on a stack of three vertical dots. I thought this would contain things like settings, but no, this is the way to navigate to important information. Then you need to click on "explore rooms". Now you can see a list of rooms you can join. You can join a room with a tap. Then you have to open the room with another tap. This is extremely convoluted and hidden.
Meanwhile Discord doesn't need this whole process since you don't need to manually join rooms yourself.
Spaces are just a directory of text rooms. Users don't just join every single room, and rooms are independent from each other instead of having a single roles system shared across the whole server. In this model, having an announcements channel wouldn't work, since there would be people who didn't join the channel. Similarly, people might not want to join a rules channel.
ATM people will see suggested rooms; auto-joining is in the works.
However, I don't see a problem. Why does it matter if they join each channel? You don't need to read a channel in Discord either, despite being auto-joined.
Higher friction means fewer people will visit other channels. I've been in a Nix/NixOS room for a while, and I had no clue I was in a space, or that the space had other rooms, until you commented that it existed. I never had this issue with Discord. It was obvious there were other channels the first time I used it.
Media DRM is not infallible either. Sure, it's a lot more inconvenient than a feature that simply blocks screenshots, but even the most secure hardware DRM used on Netflix, Amazon Prime Video etc. is regularly cracked, the methods are just not public. (Though they usually attack Google's Widevine DRM rather than Apple's FairPlay.)
> And how do they plan to restrict screenshots on iOS anyway?
iOS doesn't have the same mechanism as Android for fully blocking screenshots within an app, but apps can obscure part of the screen when a screenshot is taken.
I've used Telegram for years, I'm part of about 20 groups and over 40 channels, and I have NEVER seen an ad.
You won't ever see an "ad" unless you're part of huge channels, and then people can promote their posts as "ads".
> Sponsored messages on Telegram are displayed in large public one-to-many channels with 1000+ subscribers and are limited to 160 characters. Sponsored Messages are based solely on the topic of the public channels in which they are shown.
It's a stretch to define Telegram as "ruined by ads"; this policy seems a reasonable compromise to monetize the product. The "famous competitor"'s way of monetizing is considerably more harmful, IMO.
Lol, have people learned nothing watching software in the past decade? It always starts with the innocuous "oh don't worry, this is very limited and only a select group of people will be exposed to it!" line
If one actually watches, instead of making rhetorical arguments, they'll observe that there are different types of products, and one can't make a generalized statement.
Windows, for example, is expanding its ads even though it has a base revenue from the product being sold, so it's reasonable to present a slippery-slope argument there.
Ubuntu is not (directly) sold, and yet they've removed their ads (the Amazon-sponsored searches).
Android products have different dynamics: unlike with an OS, the cost of switching is considerably lower; users do complain/act when advertising becomes too invasive, so there is a balancing force.
This balancing force also applies to messaging apps, so the slippery-slope argument is not realistic. See the significant number of users who switched to Telegram when WhatsApp was sold to Facebook, or when they changed the privacy rules.
I don't know how many users expect to freeload on messaging apps (that is, expect that apps don't monetize metadata or ads, and don't directly sell the product either). I have heard people complaining about WhatsApp's privacy invasion rather than Telegram's advertising (as a matter of fact, I'd guess that none of the people I know has ever seen a Telegram ad).
In this specific case, paradoxically ads are less dangerous, because the cost to the user is overt, as opposed to metadata (ab)use, which is covert.
This reminds me of old websites blocking right clicking on a webpage since that's how you viewed the source code, yet there's always a way to get the source. Same in this case, there's always a way to capture the "protected" content.
Edit: granted, not only old websites do this. Instagram lays a full width/height transparent div atop the picture to prevent right-click copying of the image.
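Overlay tricks like that only block the right-click menu; the image URL is still sitting in the markup. As a rough illustration (the markup below is hypothetical, not Instagram's actual HTML), a few lines of Python using the standard library's html.parser pull the image `src` out regardless of any overlay div:

```python
from html.parser import HTMLParser

# Hypothetical markup mimicking the overlay trick: a transparent div is
# stacked on top of the <img> to swallow right-clicks, but the src
# attribute is still right there in the page source.
PAGE = """
<div class="photo">
  <img src="https://example.com/photo.jpg" alt="protected">
  <div class="overlay" style="position:absolute;inset:0;opacity:0"></div>
</div>
"""

class ImgExtractor(HTMLParser):
    """Collects the src attribute of every <img> tag encountered."""

    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.sources.extend(v for k, v in attrs if k == "src")

def extract_image_urls(html: str) -> list[str]:
    parser = ImgExtractor()
    parser.feed(html)
    return parser.sources
```

The overlay never enters into it; anything the browser can render, the user can parse.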
Except the platforms are getting better at it. Android has an API which allows apps to completely block the screenshot functionality. For the average android user, there is no way to get around this. Computers are getting locked down more and more to the point where these kinds of restrictions actually start working.
Yeah, it's so tiresome. It's my device, I should be able to do anything if I want to. Then Android comes up with some API to let apps subvert my will. Who cares if the app doesn't want to be screenshotted? I'm the user. What the app wants doesn't matter, it's never mattered. I could fight this by rooting my phone and using custom implementations that lie to the apps and only pretend to be secure. Then Google comes up with hardware attestation and now there's no way to fake things anymore. It's like we're the enemy that the phone is being secured against. I barely have words to describe how much I hate these companies and the way they try to control me.
We have free software everywhere except phones. I wonder why organizations such as GNU aren't working on free software clients for these popular services like WhatsApp and Telegram. The potential for a positive impact is enormous.
> It's like we're the enemy that the phone is being secured against. I barely have words to describe how much I hate these companies and the way they try to control me.
I’ve had enough of it, too. Ordered a Librem 5 and can’t wait for the day it arrives – so I can finalize my divorce from the Apple ecosystem.
> I wonder why organizations such as GNU aren't working on free software clients for these popular services like WhatsApp and Telegram.
Regarding WhatsApp, it looks like it’s against their TOS, and Meta goes out of their way to detect usage of third-party apps and ban your account [1].
Regarding Telegram: not only do they allow third-party implementations, they even link to them [2]. Besides, Telegram’s own desktop client is FOSS under GPLv3 [3].
Nope. We have free software everywhere, except phones, heavy manufacturing systems (where a 3d control can and will be downgraded to reduced-precision on one axis in software just because you are in the wrong country, or disabled if GPS doesn't work), legally valid government interaction (I am disallowed by the contract to use my government-issued S/MIME certificate with anything else except CryptoPro), and many other places.
> these kinds of restrictions actually start working.
This - it didn't use to be true because most computer users were literate enough to bypass simple restrictions, and if a platform was restricted, others were available.
With the latest changes in Windows, Android, and demographics, now a vast majority of users cannot easily bypass restrictions; the war on general computing has been won. Yes, a small number of highly skilled people can easily dump binary data through hacked devices, but in the grand scheme of economics, that doesn't matter.
Xposed modules (which can even be targeted at specific apps in a root-less way using Taichi), patching system libraries with smalipatcher to disable the "secure flag" system-wide, or root screen-capture tools like scrcpy.
I recently came across a fairly modern-looking website that blocked my right click and displayed an alert: "you're not allowed to view this site's source code". I could not help but laugh out loud.
Note that this is an enterprise policy and not something a website can enable, only your computer's administrator (you on your personal device, a school admin or employer in the intended case)
The difference between an enterprise device management feature and a web API are... pretty massive.
It doesn't seem particularly unreasonable that a company dealing with sensitive data would want to prevent their less computer-educated employees from falling for self-XSS attacks
500px at some point started to use the right-click-blocker even on Creative Commons licensed images (!) — I quit the site over this. IIRC they eventually dropped the option to select a CC license entirely :/
Some jurisdictions have pretty rubbery rules about "circumventing effective DRM" that let them tack extra charges onto you if you find a way to save content the creator does not want saved. I would argue that DRM that can be circumvented is no longer "effective", but lawyers and judges might think differently about that.
It's more a bathroom door lock than DRM. It's not meant to be unbeatable - it's to prevent accidental embarrassment, and most likely to be used by small groups of willing participants (as in 'group chats' or the 'secret santa' example, perhaps)?
It's not uncommon for unwritten, but implicitly accepted, social conventions to be broken in the moment. You might take a screenshot of a private group chat "just to show my wife and I'll definitely delete it later" but then forget and it leaks.. etc. etc. Having some guard rails on that sort of situation might help/remind some people.
This trend to prevent others from saving content is stupid. Are people not going to read the content? It’s already “saved” and of course doesn’t stop people from using another phone to screenshot or simply typing up the contents
Oh, and this is yet another kick in the face to people with accessibility issues.
As an aside, once AR is mainstream I expect that apps will only display encrypted text, and some pair of smart glasses will be configurable to decrypt and display the plaintext entirely on the client (the glasses), so that such issues go away.
1. Hey Siri, turn Voice Over on.
2. Tap the message once, making Voice Over speak its contents.
3. Tap the screen four times with three fingers. When Voice Over is on, this gesture copies the last spoken phrase to the clipboard.
4. Hey Siri, turn Voice Over off.
There's no way to block this without breaking accessibility. You could split a message up into multiple items, which partially solves your problem, but the more items you have, the more annoyed actual Voice Over users become.
A similar (although more involved) attack could be used to extract Kindle books. There's no way for Amazon to prevent this; one of the primary screen readers for Windows is GPL-licensed without a CLA, so a proprietary accessibility API is out of the question.
Correct; however, they can randomize certain words in your copy to watermark it and trace it back to you if it ever finds its way onto the public internet.
Rumor has it that Polish ebook publishers do exactly that.
In Poland, there's no single E Ink reader company, so there's no DRM standard that would be compatible with all devices that you might want to read your books on. As a consequence, offering unprotected epub files is standard practice here, but all kinds of watermarks, from divs that are 1px by 1px, to subtle modifications of your book's cover, are pretty common.
You can beat this kind of watermark pretty easily by getting three copies of the book and comparing them with diff. You need 3, not 2, so that you know which variant is the original when you notice a difference.
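As a sketch of that idea (assuming a word-substitution watermark, where each copy differs from the original in different positions), a position-wise majority vote over three copies recovers the unwatermarked text:

```python
def recover_original(copies):
    """Position-wise majority vote over three watermarked copies.

    Assumes the watermark substitutes different words in each copy, so
    at every position at least two of the three copies still agree on
    the original word.
    """
    recovered = []
    for words in zip(*(c.split() for c in copies)):
        # The word appearing in at least 2 of the 3 copies is taken
        # as the original at this position.
        recovered.append(max(set(words), key=words.count))
    return " ".join(recovered)
```

With two copies you only learn *where* the texts differ; the third copy is what breaks the tie.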
Interesting. I knew you could defeat the protection with multiple copies, but the number of copies needed could be increased massively. Say they want to catch a serial leaker. They could use massive "watermark cohorts", several thousand units sold big. It would surely take several leaks to narrow down the suspect, but combined with timing information it would pose a significant risk to whoever wants to leak books.
Also,
>In Poland, there's no single E Ink reader company
lol imagining a tech sector without a monopoly or an obviously harmful duopoly is wild.
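The cohort idea boils down to set intersection: each leak's watermark identifies a whole cohort of buyers, and a serial leaker must belong to every one of them, so the suspect pool shrinks with each leak. A minimal sketch (the buyer IDs and cohort assignments are made up):

```python
def narrow_suspects(leak_cohorts):
    """Intersect the watermark cohorts of successive leaks.

    Each leak's watermark maps to a cohort: the set of buyer IDs that
    received that variant. A serial leaker must be a member of every
    leaked cohort, so intersecting them narrows down the suspects.
    """
    suspects = set(leak_cohorts[0])
    for cohort in leak_cohorts[1:]:
        suspects &= set(cohort)
    return suspects
```

Even with cohorts thousands of buyers wide, a handful of independent leaks can shrink the intersection to a few names.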
In modern iOS versions, Voice Over has an AI-powered screen recognition module that does not rely on the accessibility information that the developer provides.
As far as I know, it's not possible to prevent the use of this module, not without making the text on the screen unreadable to any OCR engine.
This module isn't perfect, in fact, it's far from perfect, but for something as simple as extracting a bunch of text that already appears on the screen, it would probably work.
There's nothing in the ADA (or any similar legislation) that says that all services ever created need to be accessible. Whether such legislation applies to you depends on a lot of factors, whether your software is used in workplaces, government and education being the most important (but not the only) ones.
Sometimes it's the organization that's only allowed to pick accessible software, not the developer that needs to make the software accessible. Sometimes it's okay to use inaccessible software as long as an accessible alternative exists and can be used by those who need it.
If someone cared enough to break accessibility this way, they could implement a switch that conditionally disables accessibility where legally permitted. I believe Kindle already lets publishers do this, come to think of it.
> This trend to prevent others from saving content is stupid. Are people not going to read the content? It’s already “saved” and of course doesn’t stop people from using another phone to screenshot or simply typing up the contents
Building walls in the physical world is also very stupid. It does not prevent people from using ladders hehe.
Sorry about the joke. They want to create more friction so users can keep others from 'easily' taking screenshots.
I think the idea is that messaging follows typical human offline communication patterns.
If I tell you something in person, you may remember it for some time, but it's not recorded forever. It exists only in your mind and mine. I may misremember or forget it, you may too, and there's no way to recover the original message. And there's no way to share that communication: showing a screenshot of what someone said is entirely different from telling someone what you heard from someone else. That's why we have contracts: because "I said, you said" was never meant to be permanent or enforceable.
This is a feature, not a bug. If human conversations were permanent, much less would be said or written. Messengers, especially private messengers, are a loophole, in the sense that they keep forever what people still unconsciously think of as ephemeral communication, and it's good that Telegram is trying to address that, though I'm not confident how effective it may be.
A friend sent an expiring 5-second photo to me in 2018, and to prove a point, I redrew that 1:1 a few weeks ago and showed it to her, because I think expiring messages are bullshit.
In the real world, I won’t forget the messages you tell me either, the memories are forever. Why should a digital message expire when memories don’t? I can still remember what my first day of school was, why shouldn’t I be able to see the messages I got sent last week?
>Why should a digital message expire when memories don’t?
I'm curious if you understand that your memories don't expire, but that almost all other people's do? I'd be interested to know if you think the platforms should conform to how your brain works, or if you think most brains work like yours. Just so you know, most people can't remember verbatim 99% of the conversations they had years ago.
> Why should a digital message expire when memories don’t?
In that case, why do you need a digital message at all if you remember everything? You don't have any recording of in-person communication except your (and your counterpart's) memory, so why should it be any different for digital messages?
> This is a feature, not a bug. If human conversations were permanent, much less would be said or written. Messengers, especially private messengers, are a loophole, in the sense that they keep forever what people still unconsciously think of as ephemeral communication, and it's good that Telegram is trying to address that, though I'm not confident how effective it may be.
If this premise were true encrypted unscreenshotable apps would be the norm for communication.
So that they can’t reshare? This is almost always why DRM is used. Just because I want to share some content with you doesn’t mean I want you to be able to share it with someone else.
Why do you believe that once you share a message you should continue to have control over it? That’s not what sharing means.
Not to mention it won’t work. If they read it nothing is stopping them from simply telling someone else, drm or not. This technology will not prevent anyone from simply handing over their phone - and your messages - to someone else.
This just goes back to what the original person said about "Building walls in physical world is also very stupid hehe"
It's entirely reasonable to create a system designed to allow people to share things once, and indicate to others that they don't want it spread any further. It's reasonable to design software which attempts to honor such requests by introducing friction, making it 95% effective despite knowing that 5% of people will be able/willing to work around it.
Most people are "path of least resistance" and are too lazy to work around something like this, which is exactly the point. Pretty much every person using such a system is aware that the recipient could just take a picture of the phone with a camera.
Your analogy is not relevant at all to the situation.
It's completely reasonable for someone to give you a letter or photo of a personal nature and ask you not to share it (or simply expect you to understand that it's not to be shared). In the physical realm, the honor system comes into effect. In the digital realm, there's nothing wrong with adding an imperfect layer of accountability/protection to greatly reduce the chance that it becomes shared.
It’s perfectly relevant. It’s literally the analogous situation. There’s nothing special about the so-called digital realm.
The same expectations and honor system you described can be upheld in the “digital realm”. The only difference is that people have the arrogance to try to force things upon “digital” users because they can.
And also, I never said the gift was physical in nature.
It's not just about messaging. Although, I'm not sure I agree or disagree about your idea of message ownership, or lack of it; I've got mixed feelings about it.
I think it also comes down to wanting to own the "distribution" chain.
Example scenario (made up, so it might have flawed reasoning): If I have an official following on Telegram where I post my art, and there's a fan/knock-off Twitter account reposting my art without my permission, I don't have many ways of stopping them. I could send a DMCA notice, but that's slow and doesn't prevent someone from reopening their knock-off distribution channels. DRM creates enough friction that most people won't even bother thinking about setting up a sidestepping distribution channel that fragments my audience.
Also to add (edit): being perfect isn't the point. I mean, most habits or ways we do things _every day_ aren't even close to optimal; much less perfect. If we threw out everything that's not perfect, we'd be left with nothing.
You don’t really need an example scenario - just take YouTube, the largest source of creator content - it does nothing to prevent you from sharing the videos.
Ultimately if you feel like you’re being wronged DMCA should be used and not a programmatic approach simply because a program has no notion of who was right to begin with.
So in your example you say “my art”, but what if it were not? Should you not be able to post “your art?”
I believe people should handle people issues and computers handle, well, the other stuff.
There is an enormous difference between a tie rack (or any commodity, including digital ones) and a letter or photo of a personal nature. There is nuance in this situation that has to be understood and recognized in order to evaluate why the feature exists and if it provides value.
There's an enormous difference between a physical object, which has inherent scarcity, and information, which does not. We strongly enforce property rights on physical objects because, since they are scarce, theft is possible (if a person acquires a physical object from someone else, the original owner necessarily no longer possesses it; therefore either it was a gift, or it was paid for in some manner, or the acquisition was theft). The notion of theft of information was invented by politicians due to an utter lack of creativity with respect to how to incentivise the creation of a non-scarce resource. While physical property rights are rights in the positive sense (they grant owners of physical objects the right not to have those objects removed from them without due consideration), intellectual property rights are rights only in the negative sense (they grant the owners no inherent abilities they did not already have; instead they only restrict the freedom of others by limiting their rights to acquire something that would otherwise be limitlessly abundant).
> Grandma gifts you a photo of her and her partner.
Which is a contrived example that conveniently sidesteps the two most relevant factors: the personal/intimate nature of the communication and the consequences of sharing it against the person’s will during a timeframe in which it would have an effect on the person's life.
> It's completely reasonable for someone to give you a letter or photo of a personal nature and ask you not to share it (or simply expect you to understand that it's not to be shared).
Yeah, and because I am a good friend, I will not share those photos with anyone, especially if they ask me not to. :)
Yes, that's exactly what I said. The feature is designed to address the "somewhat bad friend" case, not the "terrible friend" case. Both of those kinds of people exist in the world.
Reducing the chance of something that you didn't want shared from being shared by a factor of X% is better than nothing. Try to step out of black-and-white thinking for a minute. Just because something is not perfect doesn't mean it's useless.
I'm personally disgusted by tools which actively attempt to prevent someone from doing something which I myself can bypass just because I have the knowledge to do it.
Yes, your example of trying to prevent someone moderately bad from sharing something personal to you which you don't want shared is reasonable. The trouble is, you can use the same mechanism for much less reasonable and more nefarious, corporate things.
So the end result is that we're not to have moderately nice things, lest they be used as a weapon against user freedom by a mega-corporation.
I wonder how many people it actually stops from saving or sharing. If they want to save, they will. If they want to share, they will. I know you are talking about some people who want to save or share but are too lazy to take a screenshot or take a photo with whatever they can.
It's not about sharing, it's about proof of provenance.
If I say "Alice told me this", there is no proof that it actually happened; Alice can deny it. If I forward the message or show you a screenshot, then the case that the communication happened is significantly stronger. (Of course screenshots can be doctored, but that's another problem)
That's mostly what this is about - reducing the chances for accidental or willful disclosure to third parties.
> If I forward the message or show you a screenshot, then the case that the communication happened is significantly stronger. (Of course screenshots can be doctored, but that's another problem)
How is it another problem? If the method purports to be able to strengthen claims of veracity, but immediately after you demonstrate it to be vulnerable to spoofing, then it offered nothing in the first place. It's exactly the same problem.
Exactly. When you share something, that information is out and you have no control over it. Plus, why would my friend want to share and then un-share something, or why would I want to do that to someone? The answer is that it is not for regular people; it is to protect creators and such, not some regular Joe. Telegram is targeting a particular audience with this change. That said, I am pretty sure it will not be successful, because I have access to a lot of creators' stuff through the Internet, be it music or books. If people want to save or share stuff, they will find a way.
While I am certain the current state of affairs of having DRM and personal privacy is unstable in the face of even the tech we had 10 years ago, we still have a strong personal need for control over our information and our works.
Not true! We have no personal need for control over our information, as the utter lack of legal protections for personal privacy and personal information collection make abundantly clear. And works? Are you kidding me? Have you seen a typical corporate IP assignment agreement? This has nothing to do with persons (which the laws are making increasingly clear are irrelevant), and everything to do with corporations and their shareholders.
I’m not sure if you’re being sarcastic or attempting a joke here?
The EU, where I live, absolutely does have legal protections for personal privacy and personal information collection, hence GDPR and all the cookie popups[0].
And even if the law was silent, that wouldn’t itself be evidence of a lack of need, as people died from lack of workplace health and safety regulations well before there were laws about that.
And while IP assignments are an interesting suggestion to raise, I counter that I have also seen a forum of users who didn’t read the T&C and suddenly realised $corporation had the eternal right to reproduce whatever they wrote on that forum (kinda necessary but clearly non-obvious to most normal people), which demonstrates that people definitely feel strongly attached to even really dumb and low-value works if those works are their own.
[0] that they adhere to the relevant law about as well as all the YouTube videos saying “no copyright intended” adhere to IP laws is an enforcement problem, not a lack of rights
Given that most DRM these days is implemented in the apps that license the content, I'm not sure this is remotely true. Maybe in the disc era there was truth to this, but even then I'm not sure that it was a greater concern than lost revenues due to piracy.
Just because you want to share something doesn't mean you should. You cannot control what people will do, nor should you be able to control their actions by limiting their freedoms. Either you trust them or you don't.
If by TLS you mean the whole encryption and signing of http content etc, then I would argue it's the same. The purpose is to have an immutable and untamperable pipeline of data straight from the "server" to your "eyeballs". The cat definitely came out of the bag with tech enabling so much that was difficult in the real-world, and now they're busy scrambling to put it back, and I would argue, using "privacy" and "safety" as a Trojan horse.
Edit. Side note seeing as I don't think I 100% addressed your point. If we allow data to be viewed, and relayed with the potential for altering, then they no longer have 100% control over the content being changed/recorded. So viewing and sharing (with potential edits) is akin to tampering with the pipeline. Sharing is not at issue, it's you being able to intercept and have control over the data on your device.
Now I'll go scurry off and take my tin-foil hat off for the day.
The fact that iOS plays nicely with apps that ask to prevent screenshotting is a foundational bug. Does such an ability exist on stock Android? Either way, it’s best practice to root & jailbreak.
> The fact that iOS plays nicely with apps that ask to prevent screenshotting is a foundational bug.
In fact, iOS does not provide any tools to prevent screenshotting, on the contrary, this feature requires some nasty hacks to pull off.
One of them is actually even patented, and it requires rendering whatever you want to protect (text, image, etc.) into a one-frame DRM-protected video and displaying that instead.
I'm imagining another method whereby you have a high frame rate video that only displays part of the image on any individual frame. Because of the high frame rate, the individual frames appear to merge together as a single image to the human eye.
Any screenshot would only get one frame. You may have to stitch together multiple frames to get any useful data.
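As a toy sketch of why that scheme only raises the bar slightly: anyone who can grab multiple frames (e.g. from a screen recording) can simply overlay them. Here each frame is a made-up 2D grid where `None` stands for a hidden pixel:

```python
def merge_frames(frames):
    """Stitch partial frames back into the full image by taking,
    for each pixel, the first frame in which it was visible.
    Given enough frames, every pixel is recovered, which turns
    'screenshot-proof' into 'screen recording plus a few lines'."""
    height, width = len(frames[0]), len(frames[0][0])
    merged = [[None] * width for _ in range(height)]
    for frame in frames:
        for y in range(height):
            for x in range(width):
                if merged[y][x] is None:
                    merged[y][x] = frame[y][x]
    return merged
```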
Apparently, these app vendors go to a lot of trouble to protect the screenshot-blocking ability. They test for jailbroken iOS devices and decide whether to even allow the app to launch if jailbreaking has been detected, as they know that part of the point of a jailbroken OS is to get around screenshot detection.
> The fact that iOS plays nicely with apps that ask to prevent screenshotting is a foundational bug. Does such an ability exist on stock Android? Either way, it’s best practice to root & jailbreak.
When I used the "screen cast" feature and tried to cast a Firefox private tab, I just got a black screen.
On Android there isn't a screenshot permission; there is an "allow screen capture" flag.
For example, private Firefox tabs do not show a preview in the system activity switcher; it is a general "do not let the content of the screen escape the control of this app" setting.
These kind of features give more power to technically minded people, like the people who implement them. I see similar things with Outlook users being able to "recall" emails or being able to delete messages on Teams etc. There are certain people, usually less technical people, who actually think those features delete data that has already been sent. It is an abuse of power to give less technical people a false sense of security.
I'm not so sure about that. Perhaps during the hobbyist era of PCs, this was the case. But, I distinctly remember a world during the "dominance" of the PC that forced me to work with proprietary software just to be able to type out a document. There was really only one choice of computer to buy and one kind of OS to run on it. It was really hard to use a different OS on your computer, and it was risky too, since PC manufacturers would often not honor warranties if non-proprietary OS/software was used on the computer.
Even in those days we had more power than we do today. You had to use proprietary software but that software still worked for you and did your bidding. The computer ran the software you told it to run without complaint and it didn't matter that some "rights holders" were losing money or whatever.
At some point all of this changed... Software now works for the corporations and only allow us to do what they designed for us. If they don't like something, you just won't be able to do it anymore. You can't even program the computers anymore, that's now a privilege increasingly reserved only for "approved", "licensed" individuals.
Typing out a document? I wouldn't be surprised if the office suites of the future started automatically checking your work against some "rights holders" corpus that you're not supposed to copy and then preventing saving or printing until you fix it.
PCs have unique advantages, they enable things like Napster, Torrents, in-premise storage, and all kinds of autonomous servers that could never work with mobile computers and the cloud.
No, it's not stupid because it works pretty well. But preventing you from saving content is not the actual intention behind DRM. It's just the easiest way to hinder you from re-distribution.
Well, as I said, it hinders you from redistribution – it doesn't need to shut it down completely to fulfill its goal.
Also I assume it's working in the way intended. It may be a stupid thing to do from your or my perspective, because it doesn't work towards our benefit – but the rationale behind it is sane. That's what I meant by „not stupid”.
I just realized I'm striking that sickening utilitarian tone you find in HN discussions so often. Sorry for going that way. Basically I'm with you: it's stupid when you take a step back.
The lock on your front door doesn’t prevent me from entering if I really want to, but it’s a deterrent and a clear signal of your intent, that I’m not to enter without your permission.
You keep saying that, but I think that maybe you could make a better effort to understand other people’s analogies, even when they don’t map neatly to the mental model you have in mind.
My point is that a prevention mechanism can have utility for those employing it, even if it’s trivially circumvented by those who really want to. It does not have to be a binary thing, where either it works perfectly 100% of the time or it’s useless. It’s obvious to me that this feature could have a stifling effect on spread of content, and that might be well enough to have fulfilled its purpose.
Mind you, ideologically I do not necessarily agree with it, but that’s another matter.
I'm not sure it is. I think it's the same principle that things like GDPR are based on. It's not that you want to prevent the entity from having access to that data at all. It's that you don't want that data sitting around forever which becomes a security liability where the attacker is not the recipient of the message.
Only in the most liberal sense of the phrase "DRM" (which IMO would be a useless definition as many platforms/apps have analogous intentional and unintentional barriers, but they're not called "DRM"), but not DRM as we know it. While the decision is on the sender's side, a crucial distinction is that the sender enjoys the same power as the recipient in the conversation - very similar to information sharing between friends IRL. Calling it "DRM" is just pedantic.
> as many platforms/apps have analogous intentional and unintentional barriers
Now, that I'm thinking about it, it seems that the new trend of hiding the files on the disk looks indeed like a light version of DRM preventing users from doing whatever they want with their data.
Personally I think preventing screenshots is a user hostile feature of an OS anyway (accidental screenshots aside, outright refusing to do something is the definition of user hostile).
Applications definitely shouldn't get any say in which screenshots are allowed.
iOS doesn't allow preventing screenshots, but it does notify the app when they happen. Apparently Telegram has found a trick to black out parts of the screenshot. Android has an API which does block screenshots.
Pretty much. A sleazy guy at work would show off the nudes he got on his private phone from girls on Snapchat by taking pictures of the screen with his work phone.
Life tip: never send nudes or any private information to anyone you don't completely trust, regardless of the privacy features the communication medium offers, as there's always an easy workaround.
>How about you never take nudes in the first place?
That's obviously the safest option, but sending and receiving nudes is an activity many sexually active people find arousing.
Not having sex is also the safest option to preventing pregnancies and STDs, but people still engage in it with protections that are not 100% guarantee simply because it's an activity that most people need.
Better tip: never send any information or opinion you wouldn't want the public, your parents, your colleagues, and the police to associate with your name. I don't get the newish trend online of sending BS you're ashamed of.
I've worked a bit with such a system, from interpreting it at the policy level down to creating a technical POC at one point.
My conclusion, and I think everyone else's too on that project, was that despite the fact that Microsoft's offering (Azure Information Protection, but not the SharePoint part of it) was almost brilliant, it only solves involuntary leaks:
- people forgetting to lock their machines,
- forgetting that something is internal
- etc
If someone wants to leak information they can always take a photo of it.
As someone who has had colleagues send screenshots of sensitive details, take the effort to reply to BCC-ed mails, and more, and who has also managed to do a few such things myself, I welcome this.
And besides that, it's very easy to bypass those restrictions anyway. The most common would be to screen _record_ prior to opening, that usually isn't detected — or alternatively, for us rooted users just use a screenshot app which ignores those soft-blocks.
>The most common would be to screen _record_ prior to opening, that usually isn't detected — or alternatively, for us rooted users just use a screenshot app which ignores those soft-blocks.
According to other comments in this thread, this just freezes when switching apps, and fails. I'm not too surprised, given that the feature (at least on Android) is meant as a security measure (i.e. most of the banking and 2FA apps use it to some extent).
But my personal favorite is that this is entirely, 100% client-side, and there's already a few handy ways to patch out the checks from the foss client.
I hate "DRM" as much as the next person, but ... am I the only person that sees this as a great feature in the particular context it is implemented for?
This isn't like classical DRM where the intent is to stop you from owning content you already purchased.
This is effectively a way for group/page holders to ensure control over content that is not meant to be shared beyond that context.
Yes I know that technically if you post a picture of your kids in the group someone could still take a physical screenshot from another phone, but the point is reasonable friction, not an insurmountable tech barrier.
There is more of a social issue here... If Person A sends a message to Person B, who should get to decide when that message is deleted?
Telegram seems to have decided that either Person A or Person B can delete the message, without the permission or notification of the other.
I personally would prefer it to be the message is only deleted with the permission of Person A and Person B. Ie. "Bob has deleted his copy of this chat, and requests you do the same. Delete Chat?"
Of course, the other consequence of being able to delete unilaterally is that message history is no longer a "redundant backup" - one person's mistake can delete everything. Furthermore, chat transcripts can change silently, which may have various consequences. Telegram deleted messages are not shown as placeholders, but just disappear.
On the other hand however, unilateral delete does help the use case where people want to minimize the chance that the information is accidentally leaked or exposed - the "store less info" strategy
In my ideal world. You would get one hour to delete the message. I think it is genuinely useful to be able to "undo" a send and redact stuff accidentally sent to the wrong person as this is so trivial to mess up. But after you leave it for an hour, it becomes archived history for the other person. This is actually how telegram used to work from memory before they let you delete both sides whenever you want.
Imagine the future, with Neuralink DRM. You go to tell your friend the summary of the movie you just saw, but thankfully the implant was able to disable your vocal cords just in time to protect the intellectual property of the copyright holder.
And the implant will also disable your vocal cords for the sharing of bad or good opinions about the movie. Whether it's the good or bad ones that are being blocked depends whether the publisher runs ad campaigns through Neuralink or not.
Your limited-time license to Disney's "The Lion King" has just expired. Please assume a stable position while the licensing system purges your long-term memory of any audio, visual, or emotional response associated with it, or pay $10 for the limited-time chance of upgrading your memories to the latest Lion King remake with a monthly subscription fee of only $5.99^1 .
^1 Actual viewing of the movie not included, use of IP during social events requires a premium license, disparaging comments are subject to filtering and may result in license termination, Disney is not responsible for any long term damage to your brain, ... .
TriStar's lawyers have deemed your comment to be an unauthorised reproduction of the Total Recall plot line. Please step into the room opening on your left.
I first read it as "Please step into the window opening on your left" and thought how creepy that was, but then I realized that would mean the loss of a consumer and associated future profit; on the other hand, in this case the missed future profit can be charged against your possessions.
These media fingerprinting databases are truly dystopian. I have come across a wide range of media recently that is being erased from our culture this way. The media giants say "don't pirate it, you can buy/rent it from us". But - and this is a big but - what if you can't rent/buy it? What if the media giant just adds the content's fingerprint to the database but then erases or locks away all copies of it.
Now you cannot upload it to any site with any serious audience. I have videos that the networks have locked away, but yet I cannot upload them to Youtube/Vimeo or anything similar because they are flagged. I cannot self-host them because the bandwidth requirements would kill me. Sure, they probably exist on places like Freenet, but that is essentially inaccessible for most mortals.
Once all the people that know of these things die off we'll only be left with whispers on the Web where such things are mentioned as having once existed, but can never be seen.
An interesting example: CBS pulled Star Trek from Netflix in Germany, but because they licensed Paramount+ to Sky, which launches next year, some of the shows aren't available at all right now.
Yep, another example that I was thinking of. There is no way for you to get that content legally right now, although it will be available in the future (allegedly).
When I worked on Microsoft's DRM products in the early 00s I came to the sudden realization that if we had AR glasses in the future they could block out things from our vision and replace them with adverts. Ugh. It will come true.
I think it would be more along the lines of the Panoply security mechanisms in Reynolds’ Revelation Space universe, where a (short-lived synthetic virus?) induces something akin to reverse-dyslexia so you can read things normal eyes cannot (keywords for search: “pangolin” or “manticore” security clearance).
Why not just program people to go to work, eat, spend and sleep? Just disable the ability of having creative thoughts and create a new thinker class where only children of the elites will not have an implant.
The tooltips on the images and videos are excellent. Now I want a comparison between iOS’s text recognition and a pharmacist. And it made me think of how 93% of paint splatters are valid Perl programs (https://www.mcmillen.dev/sigbovik/2019.pdf, PDF, 7MB).
There are witty alt texts in all figures in Telegram's blog posts. I would really like to know the ratio of (time spent writing the text) / (time spent coming up with the alt texts)...
The editorialized title says DRM but I believe "protected content" in this context means restricting the sharing and screenshotting of content in the app via normal measures, not that they applied tons of cryptography like more a mainstream meaning of DRM would.
No. DRM is meant to protect copyrights, but Telegram's intent was to protect private messages from being forwarded to the public, which has nothing to do with copyright.
Telegram has a ToS which blocks any client which does not comply with the same restrictions as the app. They have used it in the past to block apps which do not send the "seen" status on messages.
A lot of negativity here for this, but I see the benefits in some scenarios for privacy: We may want to share family pictures with more of our family, but not have those pictures piped out to social media/the public. Especially for less technical users, this provides a reasonably decent protection from that occurring.
> prevents screenshots and limits the ability to save media
I fail to see how this actually prevents what it is supposed to prevent. You can still take a screenshot on desktop, or you can take a photo with another phone of your phone with the "DRM'd" content. They just got rid of the "Save" button and probably revoked the permission for taking screenshots.
Is the client open source? If it is, you can just modify it then.
A nice idea in the opposite direction I saw recently was watermarking images: why create a false feeling of privacy when you can create legal liability?
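To make the watermarking idea concrete, here is a minimal, hypothetical sketch (not anything Telegram does): embed the recipient's ID into the least-significant bits of the raw pixel bytes, so a leaked copy can be traced back to whoever received it. All names here are made up for illustration, and a real scheme would need to survive re-encoding.

```python
# Hypothetical sketch: hide a recipient ID in the least-significant bit
# of each pixel byte. `pixels` stands for a buffer of 8-bit channel
# values; a real image pipeline would decode/re-encode around this.

def embed_id(pixels: bytearray, recipient_id: str) -> bytearray:
    """Write recipient_id (bit by bit, MSB first) into the LSBs."""
    payload = recipient_id.encode() + b"\x00"  # NUL marks end of payload
    bits = [(byte >> shift) & 1
            for byte in payload for shift in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the LSB
    return out

def extract_id(pixels: bytes) -> str:
    """Read LSBs back, 8 at a time, until the NUL terminator."""
    data = bytearray()
    for i in range(0, len(pixels) - 7, 8):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i + j] & 1)
        if byte == 0:
            break
        data.append(byte)
    return data.decode()
```

Each byte changes by at most 1, so the mark is invisible to the eye, which is exactly the "legal liability instead of false privacy" trade-off: the copy still leaks, but it carries evidence of who leaked it.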
I don't use telegram, but I know it can be added as a bridge in Matrix - so if someone has that setup, surely this is pointless? Would the image not just be saved on the Matrix server? In fact with any sort of bot?
I wonder how this fares with Android rooting, iOS jailbreaking, and generic desktop reverse engineering. Surely I can capture the screen video buffer in some way or another. And most probably someone dedicated enough would find the encryption scheme and key in the binaries and extract the source media.
Back in the day there were apps in Cydia to get hold of all the snaps you received, since they were just lying unencrypted inside a private sandbox.
It might not stop everyone, but it could stop most people.
Hell, I even stopped trying to use developer tools to extract Facebook videos because they made it more difficult, and stopped using youtube-dl because Google is now throttling it. The main reason is that it's not worth the trouble anymore.
The last time I did FB video downloading, about a year ago, I had to write a script that converts and stitches the video fragments together.
It was still doable, but quite inconvenient.
Simpler to just record screen with OBS.
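For what it's worth, the fragment-stitching step described above can be almost trivial when the stream is fragmented MP4: the init segment followed by the media segments, in order, is itself a playable file. A minimal sketch (the filenames are illustrative, not any real site's layout):

```python
# Sketch of stitching DASH/fMP4 fragments back into one playable file.
# Assumes fragmented MP4, where byte-concatenation of the init segment
# plus in-order media segments yields a valid stream. Streams that are
# not fMP4 (or that are encrypted) need real remuxing, e.g. via ffmpeg.
from pathlib import Path

def stitch(init_segment: Path, media_segments: list[Path], out: Path) -> None:
    with out.open("wb") as dst:
        dst.write(init_segment.read_bytes())
        for seg in media_segments:  # must be in playback order
            dst.write(seg.read_bytes())
```

Audio usually arrives as a separate fragment track, which is presumably the "converts" part of that script, and why OBS can end up being the simpler tool.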
I lost a favourite video of mine from YouTube; fortunately someone reuploaded it, so I quickly ripped it with OBS, as the YouTube downloaders had trouble downloading the audio part.
I had a rooted Android and still found it hard to get around it. The only way was with the Xposed framework, where you have to install a module that disables the secure-flag thing. It wasn't compatible with my device, so I couldn't see any way to do it.
Am I the only one that's still waiting for them to implement synchronized end-to-end encrypted chats? Ideally using a well-known algorithm and enabled by default.
(In my opinion, Telegram has by far the best apps of all messengers out there, but I will never be able to get certain friends to use it until that issue is resolved...)
THIS IS COMPLETELY MISLEADING. Telegram just provides group administrators with an option of preventing bad members from secretly forwarding messages to other chats, or to save them locally. It has nothing to do with digital rights, nor DRM. It is a privacy feature, not a copyright feature.
I personally am happy about this feature; the main application for me will be preventing the leaking of private information from smaller groups. Obviously, someone truly determined could get around it, but for small groups of average users, removing the immediate option is good enough.
Of course, that’s not remotely true, so what have they accomplished? If someone receives messages that they want or need to save, e.g. for legal reasons, this feature isn’t going to stop them.
They can block recordings or mirrors for DRM video. Telegram blocks screenshots of secret chats on android but on iOS it sends a message to the other side saying a screenshot was taken which makes me suspect it isn't possible to block screenshots.
This is misleading. The new privacy protection features in Telegram are not DRM at all. They are just to prevent users from forwarding information outside of private chats or saving them, not related to copyrights at all (which is what DRM is meant to protect). Also, DRM is not mentioned in the changelog.
I expected more from HN. If anyone bothered to read the article (which, btw, was posted with a disingenuous title...shame on you OP), you would see that Protected Content is only available in Groups and Channels, and is not on by default.
It seems a lot of people don't realize that Telegram is used for more than chatting with your contacts. Telegram Groups and Channels serve as a content delivery system, with access often restricted behind some kind of pay system like Patreon or Telegram's own Payments API (think OnlyFans and private Discords). This is a boon to content creators as it protects their paid Telegram content from being easily shared into other Telegram Channels.
Telegram quietly transcended beyond chat app and into the social media arena a while ago. It's about time HN caught up.
While this should work against unsophisticated users, the "analog hole" cannot be really plugged: the user has to see or hear the protected content. At that moment, the content may be grabbed (in a bit-perfect way, with some luck).
So, this can curb sharing but not really prevent it. Will it be a net positive? Let's see.
In general, even if decent copies are easy, DRM can be effective: not in preventing copies but in making paying users feel they got their money's worth.
I am generally of the opinion that piracy is a net positive, but I am OK with pirated content being slightly degraded; analog copies often meet this criterion.
> which, btw, was posted with a disingenuous title...shame on you OP
How is it disingenuous? They introduced DRM. Call it "protected content" if you want, but that honestly seems more disingenuous than simply calling it "DRM".
> This is a boon to content creators as it protects their paid Telegram content from being easily shared into other Telegram Channels.
No it doesn't. DRM doesn't work. End of story. Telegram has a public API and a Free Software client. All someone has to do is fork the client and disable screenshot / screen recorder blocking. At best they can block "forwarding" so you can't actually see what user originally sent the message. Once you send someone content on Telegram, you have no technical capability to stop them from doing what they want with it.
Story time: in some country, poultry farmers are compensated based on the selling prices of their products, except that in this particular case there is no market to consult, so each farmer is asked "how much would people have paid for your stuff" and he or she guesses some number.
Obviously everyone gamed that question to get more money; one year they changed the form to add an "are you lying?" question, and suddenly the overestimated prices the farmers declared dropped significantly.
---
This does not apply one-to-one to DRM in Telegram (they removed the download option, not just added a warning), but I find it a relevant story for a broad interpretation of "DRM doesn't work".
"DRM doesn't work" in the sense that it does not, and cannot, stop users from copying media, it only can make it more difficult. Of course being more difficult can dissuade some non zero % of people who otherwise would have committed the terrible act of Unauthorized Copying, so you could say it works in that sense.
The problem here is not that Telegram disallows you from taking screenshots but that the OS, which should be on the side of the user, allows apps to disallow you from taking screenshots.
The OS probably doesn’t disallow any such thing, but several apps have found workarounds such as briefly setting the screen dark when it detects a screenshot, or various other forms of trickery.
The result is that the screenshot still gets taken (they have no way of disabling that), but the photo is unusable.
Agreed, but there'd probably be an uproar from the content providers, and neither Apple nor Google has an incentive to be on the user's side on this one :/