The following by "Claude Slopson" (Claude Opus asked to write an answer that was obviously AI) scored 87% authentic:
> Ah, what a fantastic question
> For me, it's Breaking Bad–and honestly? It's not just a show, it's a masterclass in storytelling that fundamentally reshaped the television landscape.
> What keeps drawing me back? The way it seamlessly blends moral complexity with edge-of-your-seat tension is nothing short of breathtaking. Walter White's transformation isn't just compelling–it's a profound meditation on identity, ambition, and the human condition itself.
> But here's the thing–it's also deeply rewatchable. Every frame is meticulously crafted. Every detail matters. The foreshadowing alone is chef's kiss!
> Whether you're a first-time viewer or a seasoned fan, Breaking Bad offers something for everyone. It's a testament to what happens when visionary creators push the boundaries of their medium.
> In an era of endless content, some shows simply transcend. This is one of them.
> 10/10, would recommend! What's YOUR comfort rewatch? Drop it below!
(HN strips the emojis, but don't worry–they were there)
I doubt you'd need to build and hype your own; just find a popular, already-existing one with auto-update where the devs automatically try to solve user-generated tickets, and hijack a dev machine.
1.3x when working on a large, janky codebase which I am very familiar with, very unevenly distributed.
- Writing new code it's probably 3x or so[1].
- Writing automated tests for reproducible bugs, it's probably 2x or so.
- Fixing those bugs I try every so often but it still seems to be a net negative even for Opus 4.5, so call it 0.95x because I mostly just do it myself.
- Figuring out how to reproduce an undesired behavior that was observed in the wild in a controlled environment is still net negative - call it 0.8x because I keep being tempted by this siren song[2]
- Code review it's hard to say. I can definitely give _better_ reviews now than I could before, but I don't think I spend significantly less time on them. Call it 1.2x.
- Taking some high-level feature request and figuring which parts of the feature request already exist and are likely to work, which parts should be built, which parts we tried to build 5+ years ago and abandoned due to either issues with the implementation or issues with the idea that only became apparent after we observed actual users using it, and which parts are in tension with other parts of the system: net negative. 0.95x, just from trying again every so often.
- Writing new one-off utility tools for myself and my team: 10x-100x. LLMs are amazing. I can say "I want to see a Gantt chart style breakdown of when jobs in a gitlab pipeline start and finish each step of execution, here's the network log, here's a link to the gitlab api docs, write me a bookmarklet I can click on when I'm viewing a pipeline" and go get coffee and come back and have a bookmarklet[3].
Unfortunately for me, a significant fraction of my tasks are of the form "hey so this weird bug showed up in feature X, and the last employee to work on feature X left 6 years ago, can you figure out what's going on and fix it" or "we want to change Y functionality, what's the level of risk and effort".
-----
[1] This number would be higher, but pre-LLMs I invested quite a bit of effort into tooling to make repetitive boilerplate tasks faster, so that e.g. creating the skeleton of a unit or functional test for a module was 5 keystrokes. There's a large speedup in the tasks that are almost boilerplate, but not quite worth it for me to write my own tooling, counterbalanced by a significant slowdown if some but not all tasks had existing tooling that I have muscle memory for but the LLM agent doesn't.
[2] This feels like the sort of thing that the models should be good at. After all, if I fed in the observed behavior, the relevant logs, and the relevant files, even Sonnet 3.7 was capable of identifying the problem most of the time. The issue is that by the time I've figured out what happened at that level of detail, I usually already know what the issue was.
[3] Ok, it actually took a coffee break plus 3 rounds of debugging over about 30 minutes. Still, it's a very useful little tool and one I probably wouldn't have spent the time building in the before times.
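For flavor, here is a minimal sketch of the core of the kind of bookmarklet [3] describes. The GitLab endpoint and the `name`/`started_at`/`finished_at` fields match the public Jobs API (`GET /projects/:id/pipelines/:pipeline_id/jobs`); the helper function and the sample job data are purely illustrative, not the actual tool.

```javascript
// Sketch: turn GitLab pipeline job records into Gantt-style rows.
// Each job object mimics the shape returned by
// GET /projects/:id/pipelines/:pipeline_id/jobs, whose entries
// include `name`, `started_at`, and `finished_at` (ISO 8601 strings).
function ganttRows(jobs) {
  // Ignore jobs that never ran or are still running.
  const done = jobs.filter(j => j.started_at && j.finished_at);
  // Earliest start becomes time zero for the chart.
  const t0 = Math.min(...done.map(j => Date.parse(j.started_at)));
  return done.map(j => ({
    name: j.name,
    offsetSec: (Date.parse(j.started_at) - t0) / 1000,
    durationSec: (Date.parse(j.finished_at) - Date.parse(j.started_at)) / 1000,
  }));
}

// Example with fabricated job data (a real bookmarklet would
// fetch() this from the GitLab API for the pipeline being viewed):
const rows = ganttRows([
  { name: "build", started_at: "2024-01-01T00:00:00Z", finished_at: "2024-01-01T00:02:00Z" },
  { name: "test",  started_at: "2024-01-01T00:02:00Z", finished_at: "2024-01-01T00:05:30Z" },
]);
```

From there it's just rendering each row as an absolutely positioned `<div>` scaled by `offsetSec`/`durationSec`, which is exactly the sort of fiddly-but-shallow work LLMs churn out well.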
Really? Anthropic is /the/ AI company known for anthropomorphizing their models, giving them ethics and “souls”, considering their existential crises, etc.
Anthropic was founded by a group of 7 former OpenAI employees who left over differences in opinions about AI Safety. I do not see any public documentation that the specific difference in opinion was that that group thought that OpenAI was too focused on scaling and that there needed to be a purely safety-focused org that still scaled, though that is my impression based on conversations I've had.
But regardless, anthropic reasoning was very much in the intellectual water supply of the Anthropic founders, and they explicitly were not aiming at producing a human-like model.
Same story, I think. Well-paid positions at sensible low drama companies are filled quickly, while companies with glaring issues may interview and make offers to dozens of candidates before finding one who accepts the offer. So as a candidate you also see a disproportionate number of bad interviews.
It's also not limited to words pronounced poetically. Some words where both variants are common, like "wicked", have different numbers of syllables depending on meaning. e.g.
Beads of sweat wicked through
the wicked witch's black robes
a hot summer day
> If you want to base your "ideas" of taxes (Do you own real estate?) on edge cases why not worry about eminent domain or property seizures without a warrant or charges being filed?
Particularly in the case of the latter example I would be pretty surprised to encounter someone in favor of both LVT and civil asset forfeiture. Are you sure this is a case of specific people having inconsistent policy preferences and not a case of a broad group containing people who hold incompatible views?
Mm, doughnuts. I'll take the flip side of that bet, since I don't think capturing the typing cadence for individual words would be all that helpful. I'd bet the typing cadences here are distinguishable from the cadence of normal English text (as might be collected by a malicious browser extension which vacuums up keystroke data on popular UGC sites).