
The concern around deepfakes is that they could be used to trick people. Fair enough, but apparently tricking people doesn’t require much, if any, believability. Throughout history and up to today, people have been tricked by the most obvious untruths, with disastrous consequences at large scale.


The end result of this is that a bunch of trolls in Russia might be out of a job soon, replaced by a server farm running in the target country pumping out similar but not identical stories.

What this might usher in is the era of cryptographically signed news articles. Not just credibility but verifiability. Blocking


> What this might usher in is the era of cryptographically signed news articles.

Actually, how about cryptographically signing videos as they get written on the recording device?

Maybe there are even ways to sign data so that the integrity can be validated on shorter segments, so that clips can be cut. Write a signature every 5 seconds covering the past 5 seconds?

Edit: This exists and the term for it is 'video authentication'.
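A minimal sketch of the per-segment idea. It uses a stdlib HMAC purely as a stand-in for a real asymmetric device signature (a camera would keep a private key in secure hardware; the key and data here are hypothetical, the chunking scheme is the point):

```python
import hashlib
import hmac

# Hypothetical symmetric key standing in for a device's signing key.
KEY = b"device-signing-key"
SEG_LEN = 5  # bytes per segment in this sketch ("5 seconds" of footage)

def sign_segments(data: bytes):
    """Sign each fixed-size segment independently so that any contiguous
    clip of segments remains verifiable after cutting. (A real scheme
    would also chain segment indices to prevent reordering.)"""
    return [(data[i:i + SEG_LEN],
             hmac.new(KEY, data[i:i + SEG_LEN], hashlib.sha256).hexdigest())
            for i in range(0, len(data), SEG_LEN)]

def verify_clip(pairs) -> bool:
    """Verify every (segment, tag) pair in a clip."""
    return all(
        hmac.compare_digest(
            hmac.new(KEY, seg, hashlib.sha256).hexdigest(), tag)
        for seg, tag in pairs)

signed = sign_segments(b"frame-data-from-the-camera-sensor")
assert verify_clip(signed)        # the full recording verifies
assert verify_clip(signed[2:5])   # a cut clip still verifies
assert not verify_clip([(b"XXXXX", signed[0][1])] + signed[1:])  # tampering detected
```

The trade-off is granularity: shorter segments allow finer cuts but cost more signing overhead per second of video.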


> Actually, how about cryptographically signing videos as they get written on the recording device?

That wouldn't prove much besides that the person sharing the video had access to the device's private key. I think the best you can do is timestamp the video by uploading a hash of it to a blockchain, but even then that only proves the video existed sometime before that instant.
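The anchoring step itself is just publishing a digest; a minimal sketch of the hash you'd put on-chain (the blockchain interaction is out of scope, and the bytes here are placeholders):

```python
import hashlib

def video_digest(data: bytes) -> str:
    """SHA-256 digest to anchor on-chain: it proves these exact bytes
    existed no later than the block's timestamp, and nothing more."""
    return hashlib.sha256(data).hexdigest()

original = b"raw video bytes"
anchored = video_digest(original)

# Later, anyone can recompute the digest over the file they were handed
# and compare it against the anchored value:
assert video_digest(original) == anchored
assert video_digest(b"raw video bytes, edited") != anchored
```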


> What this might usher in is the era of cryptographically signed news articles. Not just credibility but verifiability. Blocking.

Huh, I'd never even considered that you could do that.


For what it's worth, it's almost never the case that the lack of proof of the identity of a news article author is what causes it to be fake news. More often, it's:

- a fact distorted until it's interpreted 180 degrees from reality (Americans paying tariffs to the US government for buying Chinese goods = Trump saying "China is finally paying us!"), or

- a total untruth slipped in between valid concerns (like the fake Russian Black Lives Matter pages piggybacking off of the civil rights abuses highlighted by the real American Black Lives Matter campaigns), or just

- incitement of uncertainty in more or less solved problem domains (anti-vaxxers)

If you are interested in learning about more (failed) attempts at verified news platforms, though, try looking up Verrit and Pravduh.


Yes, but people quoted or referenced can provide their signature as proof to say they not only agree this is correct but that they also confirm it is not taken out of context or misconstrued.


I agree, you could probably attach some kind of social proof key-ring to news articles. That said, I feel like this would devolve into a "social currency for real currency" under-the-table paid sponsorship kind of deal rather quickly. We seem to have plenty of stealth ads nowadays, and it's especially disconcerting because iirc only 1 in 10 could discern them. I guess it could still be worth giving a shot in the hands of the right tinkerer.


The problem is that it doesn’t matter if someone is told something isn’t true, their beliefs aren’t changed.


I wonder if there's an extent it can be brought to that is so absurd that not even the most ignorant people can continue to buy in.


It would be nice if this were one tool, rather than two overlapping tools with some incompatibilities between them.


They can both build images, and the commands may differ, but the resulting images are all compatible. Under the hood podman uses buildah to build images. What's the specific complaint?


In that it adds complexity to have two tools vs. one. And in the wild it will add risk of mistakes and wasted time due to mixups, since they overlap and have incompatibilities. I’d rather give a team one tool; the only reason this is two tools is that they were independent projects, but they really should be one conceptually.


What you're describing is pretty much contrary to the philosophy behind podman, buildah, skopeo, etc., though, which is to have fairly narrowly scoped tools that serve a specific purpose rather than one big application that does everything.


I may not fully understand what the tools can do, but it seems overly narrowly scoped. Also, in that Unix philosophy you don’t duplicate functionality that’s slightly incompatible between tools, to the point that you need paragraphs and tables to explain when to use which one.


Buildah specializes in building OCI images. Podman allows you to pull/run/modify containers created from OCI images. These are distinctly separate tasks, and it seems a lot more straightforward to me than having a daemon (always running, as root...) that handles both tasks.

Podman does allow you to build containers, but my suspicion is it’s intended for easier transitioning from docker (you can alias docker=podman and it just works). Also the build functionality is basically an alias for “buildah bud” so it’s more of a shortcut to another application than re-implementing the same functionality.

Edit: more reading on the intended uses of each tool if you feel like understanding them better https://podman.io/blogs/2018/10/31/podman-buildah-relationsh...


I think that explanation is a little clearer; however, the repos and the article don’t make this clear, and the fact that podman also builds images makes it less crisp.

> Some of the commands between the two projects overlap significantly but in some cases have slightly different behaviors. The following table illustrates the commands with some overlap between the projects.

And this makes no sense at all if you’re purposely designing a tool.


podman uses buildah to implement "build a container like Docker" functionality... what aspect of that is difficult to understand?

That functionality probably wouldn't be necessary at all if Docker didn't pollute the common understanding of containers in the first place.


See the table of the subtle differences, why does podman create images that aren’t compatible for example? Regardless of what Docker does, if you make tools that are for a specific use case why blur the lines?


The images are compatible. I’m not sure where you’re seeing otherwise.

What is blurry to you about the purpose of either tool?


I don’t think you’re reading the article, it says:

> Each project has a separate internal representation of a container that is not shared. Because of this you cannot see Podman containers from within Buildah or vice versa.

> Mounts a Podman container. Does not work on a Buildah container.

^ This here is one of the problems; my interpretation is that the containers are not compatible.

The tool feature sets overlap with subtle differences, according to the article, and that blurs the line on what each one is for. They need to pick a direction: if you’re making a build tool and a runtime, then the build tool must only build and the runtime must only run, or just make one tool. Intentional and truthful design (meaning the words mean only what they say) limits the chaos that happens in the wild, and these tools aren’t doing that. It may seem clear to you, but the article is literally about how it’s not clear and how they overlap confusingly. So you’re going to come across a mess at some point due to this mistake; that, or they could explain their rationale for the overlap, but they don’t.


The difference is that buildah's only job in the world is to build OCI images. Podman is more about running containers, so its containers are a lot more generalized.

Buildah containers and buildah run are far different in concept from podman run. Buildah run == Dockerfile RUN. So we don't support a lot of the additional commands that are available for podman run, and we've decided to keep the format different. Podman has a large database that we felt would confuse matters when it came to podman run.

I tell people: if you just want to build with Dockerfiles, then just use podman and forget about buildah. Buildah and its library are for building OCI images and, hopefully, for embedding into other tools in addition to podman, like OpenShift Source2Image and ansible-bender, as well as for allowing people to build container images using standard bash commands rather than requiring everyone to use a Dockerfile. Podman build only supports Dockerfiles.


It sounds like you’re mixing up containers and images.


I’m just taking the article at face value, they use the word container and say they’re not compatible. So maybe the article could be better, not sure.


The format shared between the tools is an OCI image. Earlier you stated the images are incompatible, which is false. Then you switched to worrying about the internal representation of a container differing between the tools.

Why are you concerned about buildah’s internal representation of a container, unless you’re contributing to the codebase?


In all fairness, the blog is a bit confusing. I know that podman and buildah both comply with the OCI image spec, and that podman in fact calls buildah, which makes the various discussion around visibility etc. somewhat confusing to me. It may well be irrelevant, in which case perhaps there's a clearer way of explaining the relationship.


We get this question all the time, and I totally understand the frustration. In a nutshell, here's the breakdown. I will highlight this in blog entries as RHEL8 comes out and emphasizes podman, buildah and skopeo, so you will see more :-)

If you break containers down into three main jobs, with a sort of fourth meta-job:

RUN (& FIND) - podman
BUILD - buildah
SHARE - skopeo

If you think about it, that's what made docker special: the ability to FIND, RUN, BUILD, and SHARE containers easily. So that's why we have small tools that map to those jobs fairly directly.


It works quite well, but you need a host that isn’t blocked, and they tend to block AWS IPs, so you have to change the IP every now and then. For browsing, SwitchyOmega works well with ssh.


My AT&T iPhone wasn’t blocked when roaming in China; it seems they make exceptions for foreign phones. Their censors are very clever at knowing where the limits are.


Because your roaming data literally travels through an AT&T tunnel all the way back to America.


He never explains why he wants to use HTTP; it’s only about why he thinks HTTPS isn’t necessary.


HTTP is the null hypothesis, since it's simpler. Usually there is a great reason to reject this null hypothesis - it prevents security vulnerabilities. But if there is no added value, then there is no reason to do it.

Consider, why not double-wrap your stream? Put TLS on top of TLS on top of HTTP?


It's worth noting that a large number of people don't agree with you that HTTP is the null hypothesis. Instead, they think that HTTPS is a security/privacy best practice and a great part of defense in depth.

You can see this pro-HTTPS opinion all over this discussion.

As for your "consider", I personally do double-wrap many streams: I have a VPN for my browser. The VPN is great for hiding my home traffic from being spied on by my ISP. Without the VPN, HTTPS streams would reveal hostnames (SNI) and IP addresses to my ISP.


> Consider, why not double-wrap your stream? Put TLS on top of TLS on top of HTTP?

If it's the exact same implementation then that doesn't really add a second layer. If, however, I am provided the option to run HTTPS over a VPN tunnel, then I would happily do that in a heartbeat. In fact, I frequently do run my web traffic over a proxy, thereby giving it at least two layers of encryption.


Yet it’s actually not simpler for the user, since their transfer can then be tampered with, either by accident or intentionally, leaving the user with a broken download. And then what do they do? A redownload from a different mirror makes no difference.
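For what it's worth, that kind of corruption is detectable with a plain digest comparison, which is roughly what apt's signed metadata enables. A hedged Python sketch (the package bytes are placeholders; the expected hash would come from signed repo metadata, not from the mirror itself):

```python
import hashlib

# Hypothetical package bytes and the digest that would appear in
# signed repository metadata.
package = b"pretend .deb contents"
expected_sha256 = hashlib.sha256(package).hexdigest()

def download_ok(blob: bytes, expected_hex: str) -> bool:
    """Compare the downloaded bytes against the published SHA-256."""
    return hashlib.sha256(blob).hexdigest() == expected_hex

assert download_ok(package, expected_sha256)                 # intact download
assert not download_ok(b"tampered bytes", expected_sha256)   # corruption detected
```

The check tells you the download is bad, but it doesn't by itself tell you which mirror, hop, or middlebox broke it.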


The situation you described is the same thing that happens with a MITM attack with HTTPS. You would get a failed download from any mirror.

Do you have a response to my question? "Consider, why not double-wrap your stream? Put TLS on top of TLS on top of HTTP?"


It’s not the same: Comcast and other ISPs don’t tamper with HTTPS, and if they break the HTTPS connection, then it’s a clearer problem for the ISP to troubleshoot than silent corruption.

Sorry, I don’t understand what double wrapping has to do with it, or why you’d ever do that.



Because that just makes things slower for no good reason?


Sounds like an argument for rejecting HTTP+TLS single-wrap too. (For apt — not in general.)


I was being glib because I didn’t think I needed to explain fully, but here we go.

Double-encrypting something with the same technique is pretty much always a sign of cargo-cult crypto. Modern ciphers, like those used by TLS, are strong enough that there’s no reasonable way to break them applied once, and the downside is that applying them twice makes things slower than they need to be for zero added benefit.

On the other hand, TLS and PGP are very different things serving very different purposes, so nesting those makes sense. There is an added benefit from TLS, namely that you ensure that everything is protected in transit - including the HTTP protocol itself (which is currently not protected and which might be subject to manipulation as shown in this post). Plus, it provides some resistance to eavesdropping (and with eSNI + mirrors hosted on shared hosts, that resistance should improve further).


Also, the apt way to fix this would be to a) move Release.gpg out of the package path, and b) require the Release.gpg to be wrapped and signed with the previous valid key instead of being accepted blindly.


Some countries' firewalls disrupt HTTPS, which makes downloading things via HTTPS difficult.


So you create a non-default HTTP mirror for that minority, instead of making the majority insecure.


So if North Korea is subjugating some poor souls over there, the whole world must suffer along? There could be a setting, with a big warning, to disable the default HTTPS behaviour...


Which countries? I’ve only seen HTTP connections tampered with in practice, and China’s GFW blocks HTTP no different than HTTPS from what I’ve seen.


Also some companies, to allow IDS to inspect traffic without having to extract keys from clients.


It’s missing customer acquisition cost. This is pretty important, IMO, because it gives you a framework to measure it and explore ways to improve it.


If the data we need to calculate it is exposed, and there is a demand, then this is something we can add.

Good feedback, thank you.


That’s my core understanding too: the model is essential, and that not only rubs statisticians the wrong way, but the current luminaries in deep learning also get visibly irritated by it. The irony is that Judea Pearl was very successful with statistics, having invented Bayesian networks, and has now moved on.

Here he asks a very simple question and look at the body language from the panel: https://www.youtube.com/watch?v=mFYM9j8bGtg&t=50m47s


Bitbucket’s killer feature is integration with JIRA and the rest of the Atlassian stack, so I doubt it’ll have much impact.


I guess that depends on your definition of “killer”, particularly wrt Jira.


CenturyLink isn’t exactly an underfunded provider though.


The meta issue here is that they didn’t have a plan to validate the changes afterwards. And Grab’s contractors, being from India, know that streets can be unpredictable and change, since it’s assuredly the same for them too and not just a quirk of Thailand. It’s an astonishing disconnect from reality by Western standards of paying attention, but I think this is pretty normal in SE Asia and causes a lot of problems. Hopefully they’ll be able to shed these habits at some point.


> Hopefully they’ll be able to shed these habits at some point.

...or, you know, companies like Grab can host their own data and provide their own services.


Having a single spot where broadly useful data can live is sort of the point of OSM.

Grab would already be using the OSM data in bulk to host their own services. That's sort of the model that the OSM community has pursued, the openstreetmap.org website and associated services are a tech demo with no service guarantee, so you wouldn't want to rely on them much for a business.


But then the data is tied up in their proprietary service. Like, I appreciate the attempt to contribute this kind of thing back, even if it was a bit ham-fisted.


> But then the data is tied up in their proprietary service.

It doesn't need to be proprietary. They can follow the lead of OSM and publish their data so that others can use and edit it.

