
This is exactly what Golden Dome proposes to do for missile defense. It improves the math of missile defense a lot compared to past proposals, and it’s not completely crazy, but it’s still not certain whether the technical capabilities would be there even with a fully operational Starship.

These videos argue that Golden Dome (or at least missile defense generally) is currently not good enough:

the one above, and this one: https://www.youtube.com/watch?v=KdPTpRfhdWM

though how credible they are, who knows.

edit: by the "not good enough" part I mean the kill vehicle trying to lock onto the right target rather than decoys.

But I've only just started looking into this, and it's not like I have much say in it as a civilian just along for the ride.


Yeah, it’s a long and complex topic. Perun takes the same view as the people you linked, but I think there are several interlinked developments driving this, one of which is the pace of advancement in space access. Even if it won’t work today, this is realistically a 20-year project regardless of Trump’s desire to have it by 2028, and if launch costs fall another order of magnitude between now and then and you can make really small interceptors, that changes the math a lot. There are also broader game-theoretic and strategic stability issues (really strategic metastability, but that’s a whole long and complicated digression); the tl;dr is that if such a thing could be possible, any actor who accomplished it could alter the fundamental MAD equilibrium we have lived under for 80 years, and that would come with immense first-mover advantages.

The others would also build their own, I imagine, but yeah, the tech is cool despite the purpose.

Checking out Perun


It is unlikely that ID requirements for the internet would pass constitutional muster in the US. SCOTUS looks unfavorably on anything resembling a speech-licensing regime.

He's not wrong though. A Europe that has to be responsible for its own defense either has to substantially reform its economy and society, or rely on France with its ASMPA doing the geopolitical equivalent of a drunk guy waving a knife around saying "stay away!".

He's not right either, because he makes large, vague claims. If we want to discuss it, let's stick to concrete points.

The "free" Europe has, and always will, thank the US for their help during WW2.

The US hasn't had to protect us until now, because there was nothing to protect us from. The US pulled its allied NATO members into the war in Iraq, if we're tallying things up.

With the invasion of Ukraine by Russia, both the EU and the US came to its aid because a sovereign nation close to the EU was invaded. No one was forced to do so. It's a matter of adhering to principles that are, in theory, shared by the "western" countries.

Realistically, the US would have to jump to our aid if Russia's ambitions reach beyond its war in Ukraine. But that is because that's part of the deal we all made when becoming members of NATO.

And in terms of "subsidizing". That's the most outlandish claim, the US military industrial complex is so large for a reason. It's due to benefitting directly from the international government contracts, the technology it's building and selling (let's leave aside the shady and corrupt aspects of it for this topic)

The low level of military investment by European NATO members is a fair criticism, but let's not kid ourselves: the bulk of any increased spending is going, and will go, towards US military tech. In a sense, higher spending is largely about pushing more money to the US.


And if the US doesn't jump to your aid? If in 2028 Trump goes "you guys had six years to prepare and pissed it away, why should we bail you out?", what can Europe do? They could certainly try putting together a multinational European army and taking it into combat, but I doubt there is much political will for that in France or Britain, who crucially possess a lot of the air and sea power.

> And if the US doesn't jump to your aid?

Pretty banal answer: countries will "just" fight.

Even without NATO, the EU countries already have a defensive pact.

Which, as a side note, is why the dismantlement of the EU looks like an appealing proposition to both Russia and the US (for different reasons).

The US would make such a war easier, and fewer lives would be lost, given its tech and intelligence network. With or without the US, Russia would lose such a war.

The only way nukes play into it is if a shithead like Putin says "fuck it", seals himself in a bunker, and launches. But then we are all cooked, whichever country does that, since mutually assured destruction comes into play.

In terms of political and societal effects, it's a really interesting question worth pondering: how would the other NATO members retaliate if the US didn't join in their defense? That would be a big betrayal, so I hope that at the very least all US assets would be seized and US companies nationalized across the EU.


I feel like the madman with nukes approach would probably be enough - just ask North Korea if they've been invaded lately ;-)

It wasn't the nukes that kept them safe. It was artillery. But the principle of mutually assured destruction is the same.

Europe as a theme park/museum with nukes actually is kinda funny to me as an American.

My guess is that if it went to trial, Netflix would win tbh. That’s why Paramount is having to raise its bid substantially; they can’t rely on getting Trump to serve WB up on a platter.

All I can say as an American is that the Europeans did it to themselves. They have not been strategically serious countries for almost a century and have not had to compete with the rest of the world as a de facto US protectorate. Now they are discovering the consequences of spending 40% of their GDP on social programs.

This criticism may be correct on its own, but I would say it is still unbalanced because it leaves out some relevant things. America has a lot of debt and a lot of spending on social programs as well. There is debt everywhere: the federal government, states, cities, companies, and people. This will become a problem soon. It is propped up by the reserve currency status of the Dollar, but that may go away before the end of this century, or a lot sooner.

Calling Europe a de facto US protectorate is also ignoring the fact that the US has a geographical advantage of being relatively separated from hostile world powers, which let it avoid most of the effects of the world wars - and that’s really pretty recent in historical terms. Is that really something America gets credit for, or is it just luck?

Finally, the US has benefited a lot from immigration, but the most vocal American voices that attack Europe seem to ignore this reality, and are also clamoring for a shutdown of programs like F1, H1B, etc., despite half the biggest American companies being founded by immigrants or their children. If you glimpse into the future, is America any more “strategically serious” than Europe? Or is it just another has-been that turns to racism and isolationism to deal with its problems?


America has guns and is willing to use them, which is really the only thing that matters. The issue with Europe is that they were idealistic enough to actually buy the whole 'rules-based' international order nonsense, while savvy people realized that rules and firepower mean the same thing to the guy holding the gun.

Given the security situation today, repeating the peace dividend is not an option, so it would be much tougher than the early 1990s was.

It’s not Google room whatever, it’s Cloudflare room whatever. That’s why you don’t hear much about undermining encryption standards anymore, who needs that when you have SSL termination for 40% of the internet?

Some of the examples listed are using the wrong paper title for a real paper (titles can change over time), missing authors (I’ve seen this before in Google Scholar BibTeX), misstatements of venue (huh, this working paper I added to my bibliography two years ago got published, nice to know), and similar mistakes. This just tells me you hate academics and want to hurt them gratuitously.

> This just tells me you hate academics and want to hurt them gratuitously.

Well then you're being rather silly, because that is a silly conclusion to draw (and one not supported by the evidence).

A fairer conclusion would be the obvious one, which is what I meant: if you use AI to generate a bibliography, you are being academically negligent.

If you disagree with that, I would say it is you that has the problem with academia, not me.


There are plenty of pre-AI automated tools to create and manage your bibliography, so no, I don’t think using automated tools, AI or not, is negligent. I, for instance, have used GPT to reformat tables in LaTeX in ways that would be very tedious by hand, and it’s no different from using those tools that autogenerate LaTeX code for a regression output or the like.

The legal system has a word to describe software bugs --- it is called "negligence".

And as the remedy starts being applied (aka "liability"), the enthusiasm for software will start to wane.

What if anything do you think is wrong with my analogy? I doubt most people here support strict liability for bugs in code.


I don't even think GP knows what negligence is.

Generally the law allows people to make mistakes, as long as a reasonable level of care is taken to avoid them (and also you can get away with carelessness if you don't owe any duty of care to the party). The law regarding what level of care is needed to verify genAI output is probably not very well defined, but it definitely isn't going to be strict liability.

The emotionally-driven hate for AI, even in a tech-centric forum, is kinda wild to me; so many commenters seem to be off-balance in their rational thinking.


I don’t get it; tech people clearly have the most to gain from AI like Claude Code.

Computer code is highly deterministic. This allows it to be tested fairly easily. Unfortunately, code production is not the only use-case for AI.

Most things in life are not as well defined --- a matter of judgment.

AI is being applied in lots of real world cases where judgment is required to interpret results. For example, "Does this patient have cancer". And it is fairly easy to show that AI's judgment can be highly suspect. There are often legal implications for poor judgment --- i.e. medical malpractice.

Maybe you can argue that this is a mis-application of AI --- and I don't necessarily disagree --- but the point is, once the legal system makes this abundantly clear, the practical business case for AI is going to be severely reduced if humans still have to vet the results in every case.


Why do you think AI is inherently worse than humans at judging whether a patient has cancer, assuming it is given the same information as the human doctor? Is there some fundamental assumption that makes AI worse, or are you simply projecting your personal belief (trust) in human doctors? (Note that given the speed of progress of AI, and that we're talking about what the law ought to be, not what it was in the past, the past performance of AI on cancer cases does not have much relevance unless a fundamental issue with AI is identified.)

Note that whether a person has cancer is generally well-defined, although it may not be obvious at first. If you just let the patient go untreated, you'll know the answer quite definitely in a couple years.


> What if anything do you think is wrong with my analogy?

I think what is clearly wrong with your analogy is assuming that AI applies mostly to software and code production. This is actually a minor use-case for AI.

Government and businesses of all types --- doctors, lawyers, airlines, delivery companies, etc. --- are attempting to apply AI to uses and situations that can’t be tested in advance the same way "vibe" code can. And some of the adverse results have already been ruled on in court.

https://www.evidentlyai.com/blog/ai-failures-examples


Very good analogy indeed. With one modification it makes perfect sense:

> And as the remedy starts being applied (aka "liability"), the enthusiasm for sloppy and poorly tested software will start to wane.

Many of us use AI to write code these days, but the burden is still on us to design and run all the tests.


On a related note, why are release groups not putting out AV1 WEB-DLs? Most 4K stuff is h265 now, but if AV1 is supplied without re-encoding, surely that would be better?

I looked into this before, and the short answer is that release groups would be allowed to release in AV1, but the market seems to prefer H264 and H265 because of compatibility and release speed. Encoding AV1 to an archival quality takes too long, reduces playback compatibility, and doesn't save that much space.

There are also no scene rules for AV1, only for H265 [1]

[1] https://scenerules.org/html/2020_X265.html


AV1 is the king of ultra-low bitrates, but as you go higher — and not even that much higher — HEVC becomes just as good, if not better. Publicly-available AV1 encoders (still) have a tendency to over-flatten anything that is low-contrast enough, while x265 is much better at preserving visual energy.

This problem is only just now starting to get solved in SVT-AV1 with the addition of community-created psychovisual optimizations... features that x264 had over 15 years ago!


I'm surprised it took so long for CRF to dethrone 2-pass. We used to use 2-pass primarily so that files could be made to fit on CDs.
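For anyone who never compared the two, the practical difference looks roughly like this (an illustrative x264 sketch via ffmpeg; the CRF and bitrate numbers are placeholders, not recommendations):

    # CRF: pick a quality level and let the file size land where it lands
    ffmpeg -i input.mkv -c:v libx264 -preset slow -crf 20 output.mkv

    # 2-pass: pick a bitrate so the output hits a target size (e.g. fits a 700MB CD)
    ffmpeg -y -i input.mkv -c:v libx264 -b:v 1150k -pass 1 -an -f null /dev/null
    ffmpeg -i input.mkv -c:v libx264 -b:v 1150k -pass 2 output.mkv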

> Encoding AV1 to an archival quality takes too long

With the SVT-AV1 encoder you can achieve better quality in less time versus the x265 encoder. You just have to use the right presets. See the encoding results section:

https://www.spiedigitallibrary.org/conference-proceedings-of...
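Concretely, the speed/efficiency knob they're talking about is the preset (0 = slowest, 13 = fastest). A rough sketch, assuming a reasonably recent ffmpeg built with the libsvtav1 wrapper (the numbers are illustrative, not tuned):

    # A mid-range preset is much faster than x265's slower presets at similar quality
    ffmpeg -i input.mkv -c:v libsvtav1 -preset 6 -crf 30 -g 240 -c:a copy output.mkv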


Yeah, is there any good (and simple) guide for SVT-AV1 settings? I tried to convert a lot of my stuff to it, but you really need to put in a lot of time to figure out the correct settings for your media, and it becomes more difficult if your media is in mixed formats, encodings, etc.

I do a lot of AV1 encoding. Here's a couple of guides for encoding with SVT-AV1 from enthusiast encoding groups:

https://wiki.x266.mov/docs/encoders/SVT-AV1

https://jaded-encoding-thaumaturgy.github.io/JET-guide/maste...
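To give a flavor of what those guides land on, a typical starting point looks something like this (assuming a recent ffmpeg; the exact values are per-source judgment calls, which is the poster's whole complaint):

    # tune=0 selects the subjective/VQ tuning; film-grain=8 enables grain synthesis
    ffmpeg -i input.mkv -c:v libsvtav1 -preset 4 -crf 27 \
        -svtav1-params tune=0:film-grain=8 -c:a copy output.mkv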


Yeah, I’m talking about WEB-DL though, not a rip, so no encoding is necessary.

All those rules and I still can't get subtitles on many shows and movies

Player compatibility. Netflix can use AV1 and send it to the devices that support it while sending H265 to those that don't. A release group puts out AV1 and a good chunk of users start avoiding their releases because they can't figure out why it doesn't play (or plays poorly).

h.264 has near-universal device support and almost no playback issues at the expense of slightly larger file sizes. h.265 and AV1 give you 10-bit 4K, but playback on even modest laptops can become choppy or produce render artifacts. I tried all three, desperately wanting AV1 to win, but Jellyfin on a small streaming server just couldn't keep up.
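Whether playback is smooth mostly comes down to hardware decode support. On a Linux box with VA-API you can check what the GPU actually decodes with something like this (just one way to check; vainfo is VA-API-specific):

    # List hardware decode profiles and look for AV1/HEVC/H264 entries
    vainfo 2>/dev/null | grep -Ei 'av1|hevc|h264'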

Because pirates are unaffected by the patent situation with H.265.

But isn’t AV1 just better than h.265 now regardless of the patents? The only downside is limited compatibility.

Encoding my 40TB library to AV1 with software encoding without losing quality would take more than a year, if not multiple years, and consume lots of power while doing it, all to save a little bit of storage. Granted, after a year of non-stop encoding I would save a few TB of space. But I think it is cheaper to buy a new 20TB hard drive than to pay for the electricity used for the encoding.

HW support for AV1 is still behind H265. There's a lot of 5-10 year old hardware that can play H265 but not AV1. Second, there is also a split between DoVi and HDR(+). Is AV1 + DoVi a thing? Blu-rays are obviously H265. Overall, H265 is the common denominator for all UHD content.

> Blu-rays are obviously H265

Most new UHD discs, yes, but otherwise BRDs primarily use H264/AVC


I avoid AV1 downloads when possible because I don’t want to have to figure out how to disable film grain synthesis and then deal with whatever damage that causes to apparent quality on a video that was encoded with it in mind. Like, I just don’t want any encoding that supports that, if I can stay away from it.

In MPV it's just "F1 vf toggle format:film-grain=no" in the input config. And I prefer AV1 because of this, almost everything looks better without that noise.

You can also include "vf=format:film-grain=no" in the config itself to start with no film grain by default.
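Spelled out, that's one line in each config file (copied from the above; untested beyond that):

    # ~/.config/mpv/input.conf -- F1 toggles grain synthesis on/off
    F1 vf toggle format:film-grain=no

    # ~/.config/mpv/mpv.conf -- start with grain synthesis disabled by default
    vf=format:film-grain=no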


I watch almost everything in Infuse on Apple TV or in my browser, though.

What's wrong with film grain synthesis? Most film grain in modern films is "fake" anyway (The modern VFX pipeline first removes grain, then adds effects, and lastly re-adds fake grain), so instead of forcing the codec to try to compress lots of noise (and end up blurring lots of it away), we can just have the codec encode the noisless version and put the noise on after.
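Encoders expose that pipeline directly; for example, SVT-AV1 can denoise the source and carry the grain as a synthesis model instead of spending bits compressing noise (a sketch; the strength value is a guess you'd tune per source):

    # film-grain=N sets the synthesis strength; film-grain-denoise=1 strips
    # source grain before encoding so only the clean frames are compressed
    ffmpeg -i input.mkv -c:v libsvtav1 -preset 5 -crf 28 \
        -svtav1-params film-grain=10:film-grain-denoise=1 -c:a copy output.mkv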

I watch a lot of stuff from the first 110ish years of cinema. For the most recent 25, and especially 15… yeah I dunno, maybe, but easier to just avoid it.

I do sometimes end up with av1 for streaming-only stuff, but most of that looks like shit anyway, so some (more) digital smudging isn’t going to make it much worse.


Even for pre-digital era movies, you want film grain. You just want it done right (which not many places do to be fair).

The problem you see with AV1 streaming isn't the film grain synthesis; it's the bitrate. Netflix is using film grain synthesis to save bandwidth (e.g. 2-5 Mbps for 1080p, ~20 Mbps for 4K), while 4K Blu-ray is closer to 100 Mbps.

If the AV1+FGS is given anywhere close to comparable bitrate to other codecs (especially if it's encoding from a non-compressed source like a high res film scan), it will absolutely demolish a codec that doesn't have FGS on both bitrate and detail. The tech is just getting a bad rap because Netflix is aiming for minimal cost to deliver good enough rather than maximal quality.


With HEVC you just don't have the option to disable film grain because it's burned into the video stream.

I’m not looking to disable film grain, if it’s part of the source.

Does AV1 add it if it's not part of the source?

I dunno, but if there is grain in the source it may erase it (discarding information) then invent new grain (noise) later.

I'm skeptical of this (I think they avoid adding grain to the AV1 stream which they add to the other streams--of course all grain is artificial in modern times), but even if true--like, all grain is noise! It's random noise from the sensor. There's nothing magical about it.

The grain’s got randomness because the distribution and size of the grains are random, but it’s not noise; it’s the “resolution limit” (if you will) of the picture itself. The whole picture is grain. The film is grain. Displaying that is accurately displaying the picture. Erasing it for compression’s sake is tossing out information, and adding it back later is just an effect that adds noise.

I’m ok with that for things where I don’t care that much about how it looks (do I give a shit if I lose just a little detail on Happy Gilmore? Probably not) and agree that faking the grain probably gets you a closer look to the original if you’re gonna erase the grain for better compression, but if I want actual high quality for a film source then faked grain is no good, since if you’re having to fake it you definitely already sacrificed a lot of picture quality (because, again, the grain is the picture, you only get rid of it by discarding information from the picture)


If you’re watching something from the 70s, sure. I would hope synthesized grain isn’t being used in this case.

But for anything modern, the film grain was likely added during post-production. So it really is just random noise, and there’s no reason it can’t be recreated (much more efficiently) on the client-side.


Everyone is affected by that mess; did you miss the recent news about Dell and HP dropping HEVC support in hardware they had already shipped? Encoders might not care about the legal purity of the encoding process, but they do have to care about how their output is going to be decoded. I like using proper software to view my videos, but that's a rarity AFAIK.

I'm not in the scene anymore, but for my own personal encoding, at higher quality settings, AV1 (rav1e or SVT; AOM was crazy slow) doesn't significantly beat out x265 for most sources.

FGS makes a huge difference at moderately high bitrates for movies that are very grainy, but many people seem to really not want it for HQ sources (see sibling comments). With FGS off, it's hard to find any sources that benefit at bitrates that you will torrent rather than stream.


I've seen some on private sites. My guess is they are not popular enough yet. Or pirates are using specific hardware to bypass Widevine encryption (like an Nvidia Shield, burning keys periodically) that doesn't easily get the AV1 streams.

Smaller PT sites usually allow it

Bigger PT sites with strict rules do not allow it yet and are actively discussing/debating it. Netflix WEB-DLs being AV1 is definitely pushing that. The codec has to be a selectable option during upload.


I'm seeing releases pop up on Pirate Bay with AV1 this year.
