This isn't surprising; the majority of programmers are using LLMs, and Claude is pretty good for coding. Penetration testing is also a pretty good fit for an agentic loop - you run a tool, read the output, and decide on your next move, rinse and repeat.
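For what it's worth, the loop itself is almost trivially simple to sketch. This is a purely hypothetical illustration of its shape; `propose_next_command` stands in for whatever model call a real harness would make, and here it just replays a canned plan so the snippet runs as-is:

```python
import subprocess

# Hypothetical sketch of the "run a tool, read the output, decide" loop.
plan = ["echo scanning target...", "echo parsing results...", "DONE"]

def propose_next_command(transcript: str) -> str:
    # A real harness would send the transcript to a model and get back a command.
    return plan.pop(0)

transcript = "Goal: enumerate the lab target\n"
for _ in range(10):  # hard cap so it can't run forever
    command = propose_next_command(transcript)
    if command == "DONE":
        break
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    transcript += f"$ {command}\n{result.stdout}{result.stderr}"  # feed output back
```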
In VS Code with GitHub Copilot, agent mode can propose bash commands to run; when you confirm, it runs them in a console and sees the output, closing the loop, so it can fix errors immediately if any come up. It tends to go off the rails pretty quickly if things start going badly wrong, but it can complete simple tasks with supervision.
Over two years ago, when this LLM stuff was pretty new, I saw a demo that put ChatGPT in a loop with Metasploit that could crack some of the easy HTB challenges automatically - I remember thinking it was the single most irresponsible use of AI I'd ever seen. While everybody else was trying to sandbox these things for safety, this project was just handing it command line access to the tools it would need to break confinement.
It seems there's actually a whole bunch of similar tools these days, marketed as "automated penetration testing," such as Cybersecurity AI[1]. I used to think the whole cyberpunk "hackers can get in anywhere if they just type hard enough" trope was stupid because with cryptography the defender always has a huge advantage, but now we're looking at a world where AI is automating attacks at scale, while the defenders are vibe coding slop they have no idea how to secure, so maybe Gibson was right all along.
[1]: https://github.com/aliasrobotics/cai
I don't see the objection to using LLMs to check for grammatical mistakes and spelling errors. That strikes me as a reactionary and dogmatic position, not a rational one.
Anyone who has done any serious writing knows that a good editor will always find a dozen or more errors in any essay of reasonable length, and very few people are willing to pay for professional proofreading services on blog posts. On the other side of the coin, readers will wince and stumble over such errors; they will not wonder at the artisanal authenticity of your post, but merely be annoyed. Wabi-sabi is an aesthetic best reserved for decor, not prose.
Yes, I agree. There's nothing wrong with using an LLM or a spell-checker to improve your writing. But I do think it's important to have the LLM point out the errors, not rewrite the text directly. This lets you discover errors but avoid the AI-speak.
The fact that you were downvoted into dark grey for this post on this forum makes me very sad. I hope it's just that this article is attracting a certain segment of the community.
I'm pretty sure my mistake was assuming people had read the article and knew the author veered wildly halfway through towards also advocating against using LLMs for proofreading and that you should "just let your mistakes stand." Obviously no one reads the article, just the headline, so they assumed I was disagreeing with that (which I was not.) Other comments that expressed the same sentiment as mine but also quoted that part did manage to get upvoted.
This is an emotionally charged subject for many, so they're operating in Hurrah/Boo mode[1]. After all, how can we defend the value of careful human thought if we don't rush blindly to the defense of every low-effort blog post with a headline that signals agreement with our side?
Virginia Woolf didn't use a keyboard with no easy access to an em-dash; she used a pen.
When we're primarily conversing via quill and ink, I'll be more likely to believe you actually decided to use an em-dash rather than AI. When typing normally, without AI-powered grammar tools, nothing automatically converts a regular dash, and you'll reach for whatever sentence construction is most accessible to your fingers.
From a rhetorical perspective, it's an extended "Yes-set" argument or persuasion sandwich. You see it a lot with cult leaders, motivational speakers, or political pundits. The problem is that you have an unpopular idea that isn't very well supported. How do you smuggle it past your audience? You use a structure like this:
* Verifiable Fact
* Obvious Truth
* Widely Held Opinion
* Your Nonsense Here
* Tautological Platitude
This gets your audience nodding along in "Yes" mode and makes you seem credible, so they tend to give you the benefit of the doubt when they hit something they aren't so sure about. Then, before they have time to really process their objection, you move on to and finish with something they can't help but agree with.
The stuff on the history of computation and cybernetics is well researched with a flashy presentation, but it's not original nor, as you pointed out, does it form a single coherent thesis. Mixing in all the biology and movie stuff just dilutes it further. It's just a grab bag of interesting things added to build credibility. Which is a shame, because it's exactly the kind of stuff that's relevant to my interests[3][4].
> "Your manuscript is both good and original; but the part that is good is not original, and the part that is original is not good." - Samuel Johnson
The author clearly has an Opinion™ about AI, but instead of supporting it they're trying to smuggle it through in a sandwich, which I think is why you have that intuitive allergic reaction to it.
I've been building up a similar list of topics that nearly every programmer will at some point be forced to learn against their will and which are not adequately covered in undergrad:
* Text file encodings, in particular Unicode, UTF-8, Mojibake
* Time: Time Zones, leap day / seconds, ISO-8601
* Locales, i18n, and local date/number formats
* IEEE 754 floats: NaN and inf, underflow, overflow, why 0.1 + 0.2 != 0.3, ±0, log1p
* Currencies, comma/dot formats, fixed-point decimal representations, and exchange rates
* Version strings, dependencies, semantic versioning, backwards compatibility
There's another list for web/REST developers, and one for data scientists, but this is the core set.
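To make the floating-point item concrete, a few lines are enough to show the classic surprises (a quick sketch, nothing exhaustive):

```python
import math

# Binary floats can't represent 0.1 exactly, so the sum drifts:
print(0.1 + 0.2)                      # 0.30000000000000004
print(0.1 + 0.2 == 0.3)               # False
print(math.isclose(0.1 + 0.2, 0.3))   # True: compare with a tolerance instead

# NaN compares unequal even to itself, and inf arithmetic has its own rules:
nan = float("nan")
print(nan == nan)                     # False
print(float("inf") - float("inf"))    # nan

# log1p keeps precision for tiny x, where log(1 + x) loses it:
x = 1e-16
print(math.log(1 + x), math.log1p(x))  # 0.0 vs 1e-16
```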
Having colleagues for whom this topic is "daily business", I really don't know what you intend to teach about it to computer science students:
It's either
- basically trivial: you use the provided exchange rate tables, which can vary from day to day; you just have to pay close attention to which day's rate you're supposed to use for a given calculation (but this is something the business people will tell you). The rest is like the unit conversion you learn in school: if the "exchange rate" between yards and inches is 36 in/yd, then 2.5 yd = 2.5 yd * 36 in/yd = 90 in. Similarly, if the f/x rate to be used is 1.1612 USD/EUR, then 2.50 EUR = 2.5 EUR * 1.1612 USD/EUR = 2.903 USD. You then just need to ask the business people whether they want the raw result or a rounded one; in the latter case, they will tell you which kind of rounding they want.
- or it is something that you rather need to become an auditor (or a similar qualification) for.
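To make the "basically trivial" branch concrete, here's a minimal sketch of that exact conversion using fixed-point decimals; the 1.1612 rate and half-up rounding are just the assumptions from the example above, not a recommendation:

```python
from decimal import Decimal, ROUND_HALF_UP

rate = Decimal("1.1612")        # USD per EUR, for whichever day applies
amount_eur = Decimal("2.50")

raw_usd = amount_eur * rate     # exact fixed-point result, no float drift
rounded_usd = raw_usd.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(raw_usd, rounded_usd)     # 2.903000 2.90
```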
I just don't want them to design a data model with a single `numeric(10,2)` column for "sale_price", or hard-code their PowerBI report to show the last five years of data using whatever the exchange rate was on the day they wrote the report. You're right - it could be covered in five minutes, but since we don't currently bother, every junior has to learn it the hard way...
In a wider scope, I've always thought there is an entire area of data processing and manipulation that is missing from CS (and CI) curricula. Not just CSV files, but XML, JSON, maybe some HL7, pivot tables, today's Excel dynamic array formulas, SQL, and some functional-style processing like LINQ over data structures. Plus tools for doing the processing, like regular expressions, grep, sed, maybe even AWK.
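A tiny, hypothetical example of the kind of everyday plumbing that falls into that gap: filtering and reshaping JSON records in a functional style and writing them back out as CSV.

```python
import csv, io, json

records = json.loads('[{"name": "Ada", "score": 91}, {"name": "Bob", "score": 48}]')

# filter / map in a functional, LINQ-ish style
passing = [r for r in records if r["score"] >= 50]
rows = [{"name": r["name"], "grade": round(r["score"] / 10)} for r in passing]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["name", "grade"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())   # name,grade / Ada,9
```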
> carl sagan called METI "deeply unwise and immature"
It's repeated ad nauseam online, but always verbatim, just those few words and never a full passage, and never with a citation. In other words, it has all the hallmarks of an apocryphal quote or a misattribution.
The reason I'm suspicious is that Sagan contributed to the Arecibo message[1], which is literally such a radio signal, and to the Voyager record[2], which is similar. He even wrote an entire sci-fi novel[3] about it.
He describes radio contact in generally positive and hopeful terms in his book Cosmos. He of course acknowledges the dangers of encountering a more technologically advanced civilization, but he goes out of his way to contrast the frightening example of the Aztecs with other more peaceful first encounters such as the Tlingit. He also argues that any significantly more advanced species that had survived millions of years would necessarily have achieved zero population growth and would likely be peaceful. You don't have to take my word for it, you can read his own words in the Encyclopedia Galactica chapter of his book on the Internet Archive[4].
So, if the quote you cited were accurate, it would represent a late-in-life and somewhat surprising change of heart from cautious optimism to "dark forest" style paranoia. Personally, I believe it's simply one of the many falsely attributed quotes floating around the Internet.
As far as I can tell the quote comes from Science 2.0's site [0], and it is frequently quoted more or less verbatim in other places like Reddit, Quora, and articles. But I can't really find the original (Carl Sagan) source.
Videos don't do well on Hacker News, but I encourage people to at least watch the first couple minutes of this one. The oscilloscope visual overlay is interesting and the editing is really good.
Also, given the topic (audio equalizers) there's no way it could have been a blog post.
I would hope that what mostly doesn't do well is useless titles - like a single word, or a pithy joke that makes sense only in retrospect. Unfortunately, there is also that guideline which discourages doing better.
I think this is interesting and partially true: humans are scary. But it's important to remember the opposite is true as well: humans are the most cooperative species out there by a wide margin.
Eusocial insects and pack animals are a distant second and third: they generally don't cooperate much past their immediate kin group. Only humans create vast networks of trade and information sharing. Only humans establish complex systems to pool risk, or undertake public works for the common good.
In fact, a big part of the reason we are so scary is that ability to coordinate action. Ask any mammoth. Ask the independent city states conquered by Alexander the Great. Ask Napoleon as he faced the coalition force at Waterloo.
We are victims of our own success: the problems of the modern world are those of coordination mechanisms so effective and powerful that they become very attractive targets for bad actors and so are under siege, at constant risk of being captured and subverted. In a word, the problem of robust governance.
Despite the challenges, it is a solvable problem: every day, through due diligence, attestations, contract law, earnest money, and other such mechanisms people who do not trust each other in the least and have every incentive to screw over the other party are able to successfully negotiate win-win deals for life altering sums of money, whether that's buying a house or selling a business. Every century sees humans design larger, more effective, more robust mechanisms of cooperation.
It's slow: it's like debugging when someone is red teaming you, trying to find every weak point to exploit. But the long term trend is the emergence of increasingly robust systems. And it suggests a strategy for AI and AGI: find a way to cooperate with it. Take everything we've learned about coordinating with other people and apply the same techniques. That's what humans are good at.
This, I think, is a more useful framing than thinking of humans as "scary."
I'm not sure if it's your intention, but this reads as a strong critique of the technolibertarian philosophy that dominates our industry. We lose something by replacing high-trust cooperative systems with ones that are mutually antagonistic. We fall into the bad square of the prisoner's dilemma by not only failing to cast out the defectors but holding them up as the highest moral good and the example to follow.
This is going to sound a little weird, but I think trust is part of the problem.
Canadians have a high-trust culture, but their stock market is historically full of scams[1], and some analysts think that's causally related. (It may just be that TSXV is a wild west, or that companies IPO on the NYSE or Nasdaq if they're legit, but it could be the trust thing. Fits my narrative, anyway.)
When I look at politics, crypto rug pulls, meme stocks with P/E ratios over 200, Aum[2] and similar cults, or many other modern problems I don't see negotiations breaking down because of a lack of trust; I see a bunch of people placing far too much trust in sketchy leaders and ideas backed by scant evidence. A little skepticism would go a long way.
That's why I emphasize robust coordination: more due diligence, more transparency, more fraud detection, more skepticism, more financial literacy, more education in general. There's a cost associated with all this, sure, but it still gets you into a situation where the interaction is a coordination game[3] and the Nash equilibrium is Pareto-efficient. Thus, we fall into the "pit of success" and naturally cooperate in our own best interests.
There's nothing wrong with empathy, altruism, or charity, but they are very far from universal. You need to base your society on a firm foundation of robust coordination, and then you can have those things afterwards, as a little treat.
For me, as for a lot of people, lack of sleep is the big one... if I build up 4+ hours of sleep debt over a week, I'm at risk. So anything you can do to make that easier to log, like integration with a sleep tracker, would be good.
Also, a plug for Oliver Sacks's Migraine which taught me a lot about migraine with aura.
This is really nice. For `torch.mul(x, y)`, it would be nice if it highlighted the entire row or column in the other matrix and result. Right now it shows only a single multiplication, which gives a misleading impression of how matrix multiply works. I wouldn't mention it, except that matrix multiplication is so important that it's worth showcasing. I've bookmarked the site and will share it at a pytorch training session I'm leading in a couple of weeks.
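For what it's worth, and assuming the visualization in question is really showing matrix multiplication rather than elementwise `torch.mul`, the distinction the row/column highlighting would capture is easy to see in a couple of lines:

```python
import torch

x = torch.tensor([[1., 2.], [3., 4.]])
y = torch.tensor([[5., 6.], [7., 8.]])

# torch.mul is elementwise: each output cell depends on one cell of x and one of y.
print(torch.mul(x, y))     # tensor([[ 5., 12.], [21., 32.]])

# torch.matmul is matrix multiplication: each output cell is the dot product of a
# full row of x with a full column of y, which is what the highlighting would show.
print(torch.matmul(x, y))  # tensor([[19., 22.], [43., 50.]])
```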