Hacker News | elesbao's comments

Anthropic's report misses a fundamental piece of information: was the attack started by an insider? An outsider? Can I use my Claude to feed it these prompts and hack the world without even knowing how to get other companies' source code or data? That's the main PR bs: attribute it to a Chinese group, don't explain how they got there, whether they had to authenticate to Anthropic's platform after infiltrating the victims' network, and if so, where are the logs? If not, it means they used Claude Code for free, which is another red flag.


That's IN the report. Yes, yes you can. You don't need to be an insider at Anthropic to use Anthropic's AIs.

They used a custom Claude Code rig as an "automated hacker" - pointing it at the victims, either through a known entry point or just at the exposed systems, and having it poke around for vulns.

They must have used either API keys or some "pro" subscription accounts for that - neither is hard to get for a cybercriminal. If you have access to Claude Code and can prompt engineer the AI into thinking you are doing legitimate security work, you can do the same thing they did.

How do you attribute an attack like this? You play the guessing game. You check who the targets were, what the attackers tried to accomplish, and what the usage patterns were. There are only so many hacker groups that are active during the work hours of work days in China and are primarily interested in targeting Taiwan's government systems.
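As a sketch of the "usage patterns" part, a hypothetical heuristic: bucket the activity timestamps into the suspected group's local work hours. The function name, timezone, and hour range here are all my own illustration, not anything from the report:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

def fraction_in_work_hours(timestamps_utc, tz="Asia/Shanghai",
                           start=9, end=18):
    """Fraction of events landing on weekdays between `start` and
    `end` o'clock in the given timezone."""
    hits = 0
    for ts in timestamps_utc:
        local = ts.astimezone(ZoneInfo(tz))
        if local.weekday() < 5 and start <= local.hour < end:
            hits += 1
    return hits / len(timestamps_utc)

# 02:00 UTC on Mon 2024-01-01 is 10:00 Monday in Shanghai (UTC+8)
events = [datetime(2024, 1, 1, 2, 0, tzinfo=timezone.utc),
          datetime(2024, 1, 2, 3, 30, tzinfo=timezone.utc)]
print(fraction_in_work_hours(events))  # 1.0
```

A real attribution effort would combine this with targeting, tooling, and infrastructure overlap; a clock alone proves nothing.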


Been reading a lot about this kind of hate from the other side's perspective - I burned out in my first stint as a tech manager. There are good materials on Will Larson's blog (and books) and substacks like this: https://gleicon.substack.com/p/the-burden-of-tech-managers. It was all on me for overextending on control and not using my previous experience as leverage to get the team on my side...


My dude really got angry but forgot that almost all cloud message queue offerings over HTTP work like this (minus SSE). Eventually MCP will take the same route as WebSockets, which started out clunky - the HTTP upgrade thing was standard but rarely used - and evolved. Then it will migrate to some other way of doing remote interfaces, such as gRPC, REST and so on.
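For context, the "http upgrade thing" is a one-time handshake that turns the HTTP connection into a raw bidirectional socket; the key/accept pair below is the sample pair from RFC 6455:

```
GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After the 101 response, HTTP is out of the picture and both sides speak WebSocket frames directly.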


I mean…the cloud message queues that use HTTP are not good examples of quality software. They all end up being mediocre to poor on every axis: they're not generalizable enough to be high-quality low-level components in complex data routing (e.g. SQS's design basically precludes rapid redelivery on client failure, and resists heterogeneous workloads by requiring an up-front redelivery/dead-letter timeout); simultaneously, HTTP's statelessness at the client makes extremely basic use cases flaky, since e.g. consumer acknowledgment/"pop" failures are hard to classify as server-side issues, incorrect client behavior, or partitions of the consume transaction's network link…"conceptual" partitions, because that connection doesn't actually exist, which is what leads to all these problems. Transactionality between stream operations, too, is either a hell-no or a hella-slow (requiring all the hoop-jumping mentioned in TFA for clients' operations to find the server session that "owns" the transaction's pseudo-connection) if built on top of HTTP.
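A toy in-memory model (my own sketch of the semantics, not SQS or its API) shows why an up-front visibility timeout precludes rapid redelivery: once a consumer takes a message and dies without acking, nothing can hand that message out again until the full timeout elapses, because the server cannot tell a crashed consumer from a slow one:

```python
from collections import deque

class VisibilityQueue:
    """Toy SQS-style queue: a received message is invisible for a
    fixed visibility timeout, so a crashed consumer's message cannot
    be redelivered any sooner than that."""
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self._ready = deque()
        self._inflight = {}  # msg -> time it becomes visible again

    def send(self, msg):
        self._ready.append(msg)

    def receive(self, now):
        # return expired in-flight messages to the ready queue
        for msg, deadline in list(self._inflight.items()):
            if now >= deadline:
                del self._inflight[msg]
                self._ready.append(msg)
        if not self._ready:
            return None
        msg = self._ready.popleft()
        self._inflight[msg] = now + self.visibility_timeout
        return msg

    def delete(self, msg):
        self._inflight.pop(msg, None)  # the "ack"

q = VisibilityQueue(visibility_timeout=30)
q.send("job-1")
assert q.receive(now=0) == "job-1"   # consumer takes it, then crashes
assert q.receive(now=10) is None     # no redelivery possible yet...
assert q.receive(now=31) == "job-1"  # ...only after the full timeout
```

With a stateful connection the broker would see the TCP session drop and could requeue immediately; over stateless HTTP the timeout is the only signal.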

In other words, you can’t emulate a stateful connection on top of stateless RPC—well, you can, but nobody does because it’d be slow and require complicated clients. Instead, they staple a few headers on top of RPC and assert that it’s just as good as a socket. Dear reader: it is not.

This isn’t an endorsement of AMQP 0.9 and the like or anything. The true messaging/streaming protocols have plenty of their own issues. But at least they don’t build on a completely self-sabotaged foundation.

Like, I get it. HTTP is popular and a lot of client ecosystems balk at more complex protocols. But in the case of stateful duplex communication (of which queueing is a subset), you don’t save on complexity by building on HTTP. You just move the complexity into the reliability domain rather than the implementation domain.


"lá ele" (Brazilian slang: that's his business, not mine) ( ͡° ͜ʖ ͡°)


I used to make this same argument, but apart from the few projects where I've used Solr, it is not trivial to get general-purpose search out of it. Won't even comment on ES because they already target analytics more than search. I think it is worth exploring pg and other tools, as all search cases are narrow/specific (ecommerce, graphs, domain-specific documents, etc.), especially if you need facets and filtering. Multilanguage support is also worth considering for a tool, but products usually want better recall in their original language rather than the same results across other languages.


I'd like to see that! I've been fiddling with SQLite and FTS5 to drop Algolia from my application.
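For reference, a minimal FTS5 sketch using Python's stdlib sqlite3 (assuming your SQLite build ships the FTS5 extension, which the stock Python binaries usually do; the table and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# fts5 virtual table: every column is full-text indexed
con.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
con.executemany(
    "INSERT INTO docs (title, body) VALUES (?, ?)",
    [("intro", "full text search with sqlite"),
     ("other", "dropping a hosted search service")],
)
# MATCH runs the full-text query; rank orders by BM25 relevance
rows = con.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank",
    ("search",),
).fetchall()
print(rows)  # both rows mention "search"
```

For an Algolia-style experience you'd still need to add prefix queries (`tokenize = 'trigram'` or `term*`) and highlighting via the built-in `snippet()` function, but the core index is this small.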


This post is great, as the current state of network meshes is too complex for some users. That led me to write a simple Rust daemon that runs a TLS proxy and spawns the original app locally, reverse-proxying requests, since the cost of implementing a full mesh just to have TLS across applications was too much for my team at the time. I didn't know about ONRUN, s6 and all that. Also, why not Tailscale as the mesh?


There is no ONRUN: "you can think of docker mods as a missing ONRUN hook"


Dudes should bury that thing back. The Exorcist taught us everything we needed to know about statues with hard-ons.


The one mental health app I need is Gympass - hitting the gym has been helping me cope with anxiety.


I'm generally not fond of monorepos, but reading this and immediately being targeted by this tweet https://twitter.com/nikitabier/status/1652764613962760196 made me think about how much time goes into decisions that won't impact users directly, and how hard that becomes to justify at regular companies in an environment where CEOs are jumping on the layoff bandwagon for no reason.

