
This. Also, for phones that don't support Android virtualization, there's a user-space hack, part of Termux upstream, that allows for root-less chroots via LD_PRELOAD: https://wiki.termux.com/wiki/PRoot.

systemd won't boot with this (needs to be PID 1), but a lot of software will work just fine and there's nearly zero emulation overhead.
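For the curious, launching such a rootless chroot boils down to a single proot invocation. Here's a minimal Python wrapper sketch; the rootfs path and shell are placeholders, and the `-r`/`-0`/`-b` flags are per the proot manual:

```python
import shutil
import subprocess

def build_proot_cmd(rootfs, shell="/bin/sh", binds=("/dev", "/proc", "/sys")):
    """Build a proot invocation for a rootless chroot.

    -r sets the guest rootfs, -0 fakes uid 0 inside it,
    and each -b bind-mounts a host path into the guest.
    """
    cmd = ["proot", "-r", rootfs, "-0"]
    for path in binds:
        cmd += ["-b", path]
    cmd.append(shell)
    return cmd

if __name__ == "__main__":
    cmd = build_proot_cmd("./debian-rootfs")  # hypothetical unpacked rootfs
    if shutil.which("proot"):
        subprocess.run(cmd)
    else:
        print(" ".join(cmd))
```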


I don't think it uses LD_PRELOAD, it uses ptrace to intercept system calls (hence the name). Unfortunately this does have performance overhead, although I've never bothered to measure it. Actually that would be an interesting thing to benchmark.

My bad, I must have confused it with something else. Yes, it uses ptrace; there definitely is some overhead around system calls, but that still should be better than running atop a full-scale CPU emulator. That being said, I haven't benchmarked it myself, just remember it being pretty snappy.
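A quick-and-dirty way to measure that overhead, if anyone wants to try: time a tight loop of stat() calls natively, then again under proot, and compare. A rough sketch:

```python
import os
import time

def syscall_benchmark(n=100_000, path="/"):
    """Time n stat() calls; each one is a real syscall, so running this
    natively vs. under proot shows the ptrace interception overhead."""
    start = time.perf_counter()
    for _ in range(n):
        os.stat(path)
    elapsed = time.perf_counter() - start
    return elapsed / n * 1e6  # microseconds per call

if __name__ == "__main__":
    print(f"{syscall_benchmark():.2f} us per stat() call")
```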

Thanks for your correction!


But does it synergize paradigms?

Creating a new capability is like making a new flashlight.

Maybe the new light can shine wider, or further, and you see something you didn't realize was possible before.

You can synergize the looksmaxing while cooking if you like :)


> Runs on (your target hardware or environment)

Nice try, OpenClaw


The README mentions ARMv7-M, RISC-V, and AVR, but no actual SoCs or boards, and the source code contains unconditional inline assembly for Arm. Similarly, there are measurements of context switch time on RISC-V, while the scheduler is one big stub that doesn't even enter a task, only returns from itself using Arm-specific assembly [0]. The examples rely on this scheduler never returning, so there's no way any of them can run [1]. The bootloader is also a stub [2]. Not a single exception vector table, but plenty of LLM-style comments explaining every single line.

Others (well, two people really) have also noted the lack of a linker script, start-up code, and that the project doesn't even build.

82 points at the time of writing, which is 4 hours from the post's submission. Already on the main page. The only previous activity of the author? Two other vibe-coded projects of similar quality and a few comments with broken list formatting, suggesting that they were never even reviewed by a human prior to posting.

Does anybody read past the headline these days? Had my hopes higher for this site.

[0] https://github.com/cmc-labo/tinyos-rtos/blob/2a47496047fdb45...

[1] https://github.com/cmc-labo/tinyos-rtos/blob/2a47496047fdb45...

[2] https://github.com/cmc-labo/tinyos-rtos/blob/2a47496047fdb45...


HN these days is filled with people saying basically "Show HN: I had an LLM shit out something I wanted, I didn't read it, but you should!".

And then a bunch of green new accounts commenting on how it's cool and they learned something. It's just a never ending attack on our attention.


HN used to provide a really high signal to noise ratio for me, but it's degrading pretty quickly. There are new accounts below saying "hey I just learned what RTOS means, thanks!"

I reflexively reload HN many times per day, but I'm wondering if I need a walled garden with some sort of curation of individuals - which sucks - to get the signal level I want.


This probably isn't just an HN problem (GH's model is broken now). It's so cheap to make software that the previous process of releasing something new associated with a person is probably outdated. AI knows what sounds impressive too. So now we're drowning in software releases and attributing software to a person is meaningless.

I wish HN upvote data was public, I feel like I could build some kind of improved algorithm that reduces the vote weight of people who upvote slop.
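As a hypothetical sketch of what that could look like (the weighting scheme and floor value are made up, not anything HN actually does):

```python
def voter_weight(slop_upvotes, total_upvotes, floor=0.1):
    """Down-weight accounts whose upvote history skews toward
    known-slop submissions. Accounts with no history get full weight."""
    if total_upvotes == 0:
        return 1.0
    slop_ratio = slop_upvotes / total_upvotes
    return max(floor, 1.0 - slop_ratio)

def weighted_score(votes):
    """votes: iterable of (slop_upvotes, total_upvotes), one per upvoter."""
    return sum(voter_weight(s, t) for s, t in votes)
```

With this, a habitual slop-upvoter's vote counts for a tenth of a clean account's, so a burst of green-account upvotes moves the score much less.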

This site is very much drowning in all the slop. It's over half of posts now I think, not just the "Show HN" posts. Those are 100% slop, as are all the non-show-hn new project announcements.

All the moderators have done is drop Show HN posts by newish accounts. It fixed nothing. I have to hope they have some ambitious plan along the lines of what you suggest.


As a rule people do not read the linked content, they come to discuss the headline.

The first indication to me this was AI was simply the 'project structure' nonsense in the README. Why AI feels this strong need to show off the project's folder structure when you're going to look at it via the repo anyway is one of life's current mysteries.


Honestly, maybe this is the problem.

A web-of-trust-like implementation of votes and flags, as suggested below, might be a solution, but I feel like it's overkill. I recently flagged a different clickbait submission, about Android Developer Verification, whose title suggested a significant update but which merely linked to the same old generic page about the anti-feature that was posted here months prior. Around 100 points too, before a mod stepped in, changed the title, and took it down.

Maybe the upvote button is just too easy to reach? I have a feeling that hiding it behind CSS :visited could make a massive difference.


Don't forget the 71 stars on github, and counting!

Oh wow, was 60 just a while ago. Guess the dead Internet theory is no longer just a theory.

It's a problem, but I really dislike the solution. Putting a website with known security issues behind Cloudflare's Turnstile is comparable to enforcing code signing—works until it doesn't, and in the meantime, helps centralize power around a single legal entity while pissing legitimate users off.

The Internet was carefully designed to withstand a nuclear war, and this approach, being adopted en masse, is slowly turning it into a shadow of its former self. And despite the us-east-1 and multiple Cloudflare outages of last year, we continue to stay blind to this or even rationalize it as a good thing, because that way if we're down, then so are our competitors...


I wouldn't call this "known security issues", it's an inherent problem with any signup or forgot password page.

Also, I doubt this is going to be pissing users off since they added Turnstile in invisible mode, and selectively to certain pages in the auth flow. Already signed in users will not be affected, even if the service is down. This is way different from sites like Reddit who use their site-wide bot protection, which creates those interstitial captcha pages.


> I wouldn't call this "known security issues", it's an inherent problem with any signup or forgot password page.

It's not inherent, though! Easy, definite fix: Reverse the communication relation. If the user has to open their mail app anyway, you could simply require them to send an email to you, instead of vice versa. This would solve the problem completely. (If spoofing the sender could be done reliably, the service wouldn't be involved in the first place.)

Now, it would slightly increase friction and lower convenience. That's why it's not done. It's inherently incompatible with dark patterns, data collection, and questionable new-user acquisition, but this too could be solved through standards and integration - without making Cloudflare a de facto infrastructure necessity!

Possible convenient, better solutions: Have the browser send this mail, either by passing a template to the mail app, integrating SMTP into the browser/addon, or instate a novel authentication protocol, which in fact may remove the human interaction completely.

As if 2FA security was the main motivation for asking for email, and/or phone anyway. Companies want user IDs, if possible UIDs, as soon as possible to increase user data value and gain marketing opportunities. I once had a "welcome mail" after typing in the address, before sending the form. Yeah...


Nothing with email can ever be an easy fix, although the idea is amusing. It is inherently the problem.

'Inherent' has an absoluteness, which I disproved. Relying on email is inherently troublesome, I agree.

But as I said, it's not about what's technically, or ethically mandated, but what's ensuring users won't get annoyed (getting bombed with mails is bad PR). Companies collect all these IDs for their (future) shareholders first and foremost. Asking for email doesn't alert people. Phone number would be more alarming, but that's still becoming the norm. They would ask for a picture of your passport too, but ... oh, wait!

Casually integrating Cloudflare into everything (incl. TLS termination lol) only makes data collection incentives greater. Let's not give in by declaring Cloudflare a fundamental necessity. Or do, but don't complain about your disowned life as cattle.


Cloudflare has a stranglehold on the internet, but its market share is much lower than the incumbent email giants'. Approximately 70-90% of all email goes through Google & Microsoft. You're trading one benevolent toll keeper for another... except those two give you no recourse should you end up on a sh*tlist or don't meet their unspecified and forever-changing criteria for being a recognised mail provider.

There is no trade tho.

So your solution would be to do nothing?

Cloudflare is an excellent solution for many things. The internet was designed to withstand a nuclear war, but it also wasn’t designed for the level of hostility that goes on on the internet these days.


But cloudflare is also just difficult, I’m on Starlink (because where I am my only other option is Hughes net), and my browser of choice is Safari. No vpn, and only boring ad blockers.

I'm routinely blocked by Cloudflare from viewing things, and occasionally I am blocked from buying things. Just this weekend, it was $100 worth of athletic wear. I just kept clicking the box and it never let me complete the purchase. After the 7th or 10th time I went and found another vendor that would actually sell to me. I was more annoyed than usual because the website already had my credit card at this point – but as this article proves, there are reasons to block an order even with a credit card.


Cloudflare is becoming a single point of failure. That is not a solution.

And these people weren't validating the email address on signup. To "reduce friction", I guess.


Cloudflare is not the solution

What is a better solution?

You have to think hard about the problem and apply individual solutions. Cloudflare didn’t work for the author anyway. Even if they had more intrusive settings enabled it would have just added captchas, which wouldn’t likely have stopped this particular attacker (and you can do on your own easily anyway).

In this case I assume the reason the attacker used the change credit card form was because the only other way to add a credit card is when signing up, which charges your card the subscription fee (a much larger amount than $1).

So the solution is don’t show the change card option to customers who don’t already have an active (valid) card on file.

A more generic solution is site wide rate limiting for anything that allows someone to charge very small amounts to a credit card.
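A sliding-window limiter keyed on IP or account would be one way to sketch that; the limits here are arbitrary placeholders:

```python
import time
from collections import defaultdict, deque

class ChargeRateLimiter:
    """Sliding-window cap on card-charging attempts per key
    (e.g. IP or account), applied site-wide."""

    def __init__(self, max_attempts=5, window_s=3600):
        self.max_attempts = max_attempts
        self.window_s = window_s
        self.attempts = defaultdict(deque)  # key -> timestamps

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts[key]
        # Drop attempts that have aged out of the window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False
        q.append(now)
        return True
```

The point isn't that this exact policy is right; it's that any endpoint able to trigger a charge shares one budget, so the attacker can't just hop between the signup and change-card forms.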

Or better yet don’t have any way to charge very small amounts to cards. Do a $150 hold instead of $1 when checking a new card

As far as cloudflare centralization goes though, you’re not going to solve this problem by appealing to individual developers to be smarter and do more work. It’s going to take regulation. It’s a resiliency and national security issue, we don’t want a single company to function as the internet gatekeeper. But I’ve said the same about Google for years.


None of your solutions seem useful in this case, especially a $150 hold. Site-wide rate limiting for payment processing? Too complicated, high-maintenance, and easy to mess up.

You can't block 100% of these attempts, but you can block a large class of them by checking basic info on the attempted card changes, like whether they all have different names and zip codes. Combine that with other (useful) mitigations. Maybe an alert when, in the past few hours or even days, 90% of card change attempts have failed for a cluster of users.
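That kind of alert could be a simple heuristic over a recent window of attempts; a hypothetical sketch (field names and thresholds invented for illustration):

```python
def card_change_alert(attempts, max_failure_rate=0.5, max_distinct=10):
    """attempts: list of dicts with 'name', 'zip', 'ok' keys from a
    recent window. Flag when failures dominate or the card-holder
    details are suspiciously diverse for one cluster of accounts."""
    if not attempts:
        return False
    failures = sum(1 for a in attempts if not a["ok"])
    distinct_ids = {(a["name"], a["zip"]) for a in attempts}
    return (failures / len(attempts) > max_failure_rate
            or len(distinct_ids) > max_distinct)
```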


>None of your solutions seem useful in this case, especially a $150 hold.

Attackers are going after small charges. That's the reason they're going after these guys in the first place.

>Site-wide rate limiting for payment processing? Too complicated, high-maintenance, and easy to mess up.

And then you give a solution that is 10x as complicated, high maintenance, and easy to mess up.

>You can't block 100% of these attempts, but you can block a large class of them by checking basic info for the attempted card changes like they all have different names and zip codes.

This is essentially a much more complex superset of rate limiting.


A $150 hold would clearly be noticed by the victim, so the attacker wouldn't even try it.

Maybe if my bank emailed me, otherwise I doubt it. Local gas stations routinely use $200 holds and I'd have to go way out of my way to see it happen.

The point isn't whether every user actually notices it; it's that enough of them do that attackers are specifically looking for the ability to do small charges. If you remove that capability, they will look elsewhere.

Yeah… no it wouldn’t. I’ve watched users have their bank accounts emptied (by accident) because they kept refreshing. A measly £150 isn’t going to register until it’s too late anyway.

There's a reason attackers exploit any site that lets them do small charges, it's because enough users will notice a larger charge.

Whether every user notices it or not, attackers are looking for the ability to do small charges, and if you remove that they'll move on.


Since they updated the flow to only ever push 1 email to unverified users, I would say that's as patched as it can realistically be before you bring in the captchas.

I fully agree with your comment. Wouldn't it be possible to just put off sending welcome emails until the user has actually engaged with the product in some way? And if an account with no engagement persists for more than, say, three months, just delete the account again under the premise of 'erroneously created'?

I had a similar issue and evaluated alternatives. Sadly, there were none that did the job well enough.

How do you suggest implementing bot prevention that works reliably? Because at this point in time, LLMs are better at solving CAPTCHAs than humans are.


And your solution is assume everyone on the internet is a good actor?

How would you solve this at scale?


Op basically said that the firewall rules and email confirmation alone would've mostly mitigated this.

But also Anubis is a good alternative to slow bots.


How about a signup flow where the user sends the first email? They send an email to signups@example.com (or to a generated unique address), and receive a one-time sign-in link in the reply. The service would have to be careful not to process spoofed emails though.

Another approach is to not ask for an email address at all, like here on HN.
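The "generated unique address" variant could bind each address to a signup attempt with an HMAC, so inbound mail can be matched without storing per-signup state. A rough sketch; the domain, key handling, and the idea of keying on a session hint are all illustrative assumptions:

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side key; persist it in real use

def signup_address(session_hint, domain="example.com"):
    """Generate a per-signup inbound address. The token ties the
    address to this signup attempt, so a reply can be matched safely."""
    token = hmac.new(SECRET, session_hint.encode(),
                     hashlib.sha256).hexdigest()[:16]
    return f"signup+{token}@{domain}"

def verify_inbound(to_address, session_hint):
    """Check that mail arriving at to_address matches the signup."""
    return hmac.compare_digest(to_address, signup_address(session_hint))
```

Spoofing the From: header buys the attacker nothing here, since the sign-in link goes back to whatever mailbox actually sent the message.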


"The user just needs to be careful not to step on a landmine. Exact steps left as an exercise to the reader".

Anybody can send email with all of the dmarc stuff; how do you "be careful" with spoofed email?


> how do you "be careful" with spoofed email?

You actually verify DKIM and SPF—you know, that “dmarc stuff”. That’s enough to tell you the mail is not spoofed.


Oh god. Tell me you've never dealt with those in real life without telling me lol

Usually the very best you can do IRL is "probably fine" or "maybe not fine" and that's just not good enough to justify blocking customers. Email is an old tech and there's a lot of variation in the wild.


That is how you get your conversion rate to drop to the floor, sadly.

Every extra field in the sign-up form already lowers the conversion rate.


It sounds appealing at first because it flips the trust model: instead of the service initiating contact, the user proves control of their email up front. That feels cleaner and arguably more robust against certain classes of abuse.

But from a UX standpoint it's a nonstarter.

You're asking users to:

- leave the site/app

- open their email client

- compose a message, or at least hit send

- wait for a reply

- then come back and continue

That's a lot of steps compared to enter email -> click link. Each additional step is a dropoff point, especially on mobile or for less technical users. Many people don't even have a traditional mail client set up anymore; they rely on webmail or app switching, which adds even more friction.

It also introduces ambiguity:

- What exactly am I supposed to send?

- Did it work?

- What if I don't get a reply?

From the service side, you're trading a simple, well-understood flow for a much more complex inbound email processing system with all the usual headaches (spoofing, parsing, delivery delays, spam filtering).

In practice, most systems optimize for minimizing user effort, even if that means accepting some level of abuse and mitigating it elsewhere. A solution that significantly increases friction, no matter how principled, just won't get adopted widely.

So while the idea is interesting from a protocol design perspective, it's hard to see it surviving contact with real users.


I think the main UX obstacle is that it is unfamiliar – no-one does signups like that currently. But the flow does not need to be quite as bad, if you use "mailto:" links. In the happy case:

- user click on the link

- their email client opens, with the To:, Subject:, Body: fields pre-filled

- user clicks "Send"

- a few seconds later a sign-in link arrives in their inbox
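Building such a pre-filled link is just mailto: with URL-encoded parameters; a hypothetical sketch:

```python
from urllib.parse import quote

def signup_mailto(to_addr, token):
    """Build a mailto: link with pre-filled To:, Subject:, and Body:,
    so the user only has to press Send in their mail client."""
    subject = quote(f"Sign me up [{token}]")
    body = quote("Send this message unchanged to create your account.")
    return f"mailto:{to_addr}?subject={subject}&body={body}"
```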


`mailto` opens the Mail application on my mac, which I never ever used. I'd be surprised if that wasn't the case for most people.

> But from a UX standpoint its a nonstarter

Disagree. The UX would be pretty similar. Click a mailto link which opens the email client with to, subject and body precomposed. Click send. Server receives mail and the web page continues/finishes the sign up process. No need for an email reply. It’s different, but it’s not crazy.


Ok, and a lot of -- maybe most -- people won't have their mailto handler set up correctly. I don't even know if I do on my current laptop and I have email old enough to vote

Mailto links are not that common these days.


> It’s different

Ignoring the fact that mailto won't work for most people (it opens my Mail app, which I never use), "different" is enough to make your conversion rate tank. It'd be unreasonable for anyone in charge of making product decisions to go with that.


Amidst all the age verification and bot spam going on, anonymous private/public key proof of identity could work: the newly signed-up service must pass a challenge from the mail server to prove the user actually intended to sign up. Though I guess that would be basically the same thing as the user's server initiating the communication. Really, just an aggressive whitelist/spam filter that only shows known senders solves it too, but as I understand it, part of the attack is having already compromised the mail service of the target. Having a third, decoupled identity provider would resolve that, but then that becomes a single point of failure…

Honestly I really like CloudFlare as a business. There's no vendor lock-in, just a genuine good product.

If they turn around later and do something evil, literally all I need to do is change the nameserver to a competitor and the users of my website won't even notice.


Then you're not using any of their services besides DNS, at which point you don't need to use Cloudflare at all.

As soon as you turn on any other service they offer, you need to actively migrate away. It's an inherent issue of services that actually provide a benefit. If you're saying "I can just migrate to any other nameserver" then you're telling me you have no use for Cloudflare in the first place. Because if you did, you couldn't just not use it anymore.

Let's say you're using their WAF. Sure, you can just change your domain's nameserver and you've migrated away. But now you no longer have a WAF. Same for their CDN. Or their load balancer. Or their object storage. Or their CAPTCHAs.


I think they also lock you into their DNS when you buy a domain from them, unlike other registrars, who allow you to change your NS freely. Sure, you can just transfer the domain elsewhere for a small price, but the point is they go the extra mile to force their NS, which I haven't seen with other registrars.

I use their DNS and also their proxy.

Both are extremely useful and good products.

I assumed this is what GP was talking about when referring to the turnstile.


Heh, the original being entirely vibed had me thinking of an interesting problem: if you used the same model to generate a specification, then reset the state and passed that specification back to it for implementation, the resulting code would by design be very close to the original. With enough luck (or engineering), you could even get the same exact files in some cases.

Does this still count as clean-room? Or what if the model wasn't the same exact one, but one trained the same way on the same input material, which Anthropic never owned?

This is going to be a decade of very interesting, and probably often hypocritical lawsuits.


Appreciate the full prompt history

Well, it ends with "can you give me back all the prompts i entered in this session", so it may be partially the actual prompt history and partially hallucination.

fwiw you can dump the actual session in a format suitable to be posted on the web with this tool: https://simonwillison.net/2025/Dec/25/claude-code-transcript...

[flagged]


They do, the whole tone and the lack of understanding of Docker, kernel threads, and everything else involved make it sound hilarious at first. But then you realize that this is all the human input that led to a working exploit in the end...

FreeBSD doesn't have Docker. It has jails, which can serve a similar purpose but are not the same in important ways.

Please at least read the context before attempting to correct me...

Here's what I'm referring to: https://github.com/califio/publications/blob/7ed77d11b21db80...


God damn, how much time am I wasting by writing full paragraphs to the Skinner box when I could just write half-formed sentences with no punctuation or grammar?

> can we demon strait somehow a unpriv non root user

"demon strait". Was this speech to text? That might explain the punctuation and grammar.


The grammar doesn't matter. It's a total waste of time. Obviously not when writing to another human, where it's a show of respect.

It's amazing what an intelligence that has infinite patience can do to understand barely comprehensible gibberish.

Now give an excuse for pushing so hard for docker on FreeBSD.

I'm not correcting you, I'm adding context for people who don't know much about freebsd.

Welcome to vibe coding. If you ever lurk around the various AI subreddits, you'll soon realize just how bad the average prompts and communication skills of most users are. Ironically, models are now being trained on these 5th-grade-level prompts and improving their success with them.

Just think about how your parents used google when you were a kid. What got better results faster?

we were taught google search query syntax by our librarian when I was in high school in 2002-ish. so...

I mean, I get it: vibe-coded software deserves vibe-coded coverage. But I would at least appreciate it if the main part of it, the animation, went at a speed that makes it possible to follow along and didn't glitch out with elements randomly disappearing in Firefox...

How is this on the front page?


It's on the front page because it looks really cool. You can complain about it being vibe coded, but it still looks good. If you ask Claude to allow the user to slow down the animation, it can do that quite easily, that's just not a problem caused by vibe coding. And I'm on FF and didn't notice anything glitching out.

A Co-Authored-By tag on the commit. It's a standard practice and the meaning is self-explanatory. This is what Claude adds by default too.

I make the commits myself, I don't let Claude commit anything.

If you accept the code generated by them nearly verbatim, absolutely.

I don't understand why people consider Claude-generated code to be their own. You authored the prompts, not the code. Somehow this was never a problem with pre-LLM codegen tools, like macro expanders, IPC glue, or type bundle generators. I don't recall anybody desperately removing the "auto-generated do not edit" comments those tools would nearly always slap at the top of each file or taking offense when someone called that code auto-generated. Back in the day we even used to publish the "real" human-written source for those, along with build scripts!


It's weird, because they should not consider it as their own, but they should take accountability from it.

Ideally, if I contribute to any codebase, what needs to be judged is the resulting code. Is it up to the project's standards ? Does the maintainer have design objections ?

What tool you use shouldn't matter, be it your IDE or your LLM.

But that also means you should be accountable for it, you shouldn't defend behind "But Claude did this poorly, not me !", I don't care (in a friendly way), just fix the code if you want to contribute.

The big caveat to this is not wanting AI-Generated code for ideological reasons, and well, if you want that you can make your contributors swear they wrote it by themselves in the PR text or whatever.

I'm not really sure how to feel about this, but I stand by my "the code is what matters" line.


Sounds a bit like the label "organic (food)" could be applied to hand-written code?

Some differences with the human source for those kinds of tools: (1) the resultant generated code was deterministic (2) it was usually possible to get access to the exact version of the tool that generated it

Since AI tools are constantly obsoleted, generate different output each run, and it is often impossible to run them locally, the input prompts are somewhat useless for everyone but the initial user.

