Hacker News | relativeadv's comments

Of course it's fun. Making slop _is_ very fun. It's a low-effort, dopamine-driven way of producing things. Learning is uncomfortable. Improving things using only your brain cells can be very difficult and time-consuming.


I have learned more - not just about my daily driver languages, but about other languages I wouldn't have even cracked the seal on, as well as layers of hardware and maker skills - in the past two years than I did in the 30 years leading up to them.

I truly don't understand how anyone creative wouldn't find their productivity soar using these tools. If computers are bicycles for the mind, LLMs are powered exoskeletons with neural-controlled turret cannons.


To extend the metaphor, which provides better exercise for your body? A bicycle or a powered exoskeleton with turret cannons?


I don't bike for exercise. I bike to get where I'm going with the least amount of friction. Different tools for different jobs.

Also: I think we can agree that Ripley was getting a good workout.


The rate at which I'm learning new skills has accelerated thanks to LLMs.

Not learning anything while you use them is a choice. You can choose differently!


How are you using AI to learn? I see a lot of people say this, but simply reading AI-generated overviews or asking it questions isn't really learning.


I'm using it to build things.

Here's an example from the other day. I've always been curious about writing custom Python C extensions but I've never been brave enough to really try and do it.

I decided it would be interesting to dig into that by having Codex build a C extension for Python that exposed simple SQLite queries with a timeout.

It wrote me this: https://github.com/simonw/research/blob/main/sqlite-time-lim... - here's the shared transcript: https://chatgpt.com/s/cd_6958a2f131a081918ed810832f7437a2

I read the code it produced and ran it on my computer to see it work.

What did I learn?

- Codex can write, compile and test C extensions for Python now

- The sqlite3_progress_handler mechanism I've been hooking into for SQLite time limits in my Python code works in C too, and appears to be the recommended way to solve this

- How to use PyTuple_New(size) in C and then populate that tuple

- What the SQLite C API for running a query and then iterating through the results looks like, including the various SQLITE_INTEGER-style constants for column types

- The "goto cleanup;" pattern for cleaning up on errors, including releasing resources and calling DECREF for the Python reference counter

- That a simple Python extension can be done with ~150 lines of readable and surprisingly non-threatening C

- How to use setup.py and pyproject.toml together to configure a Python package that compiles an extension
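The progress-handler mechanism mentioned above has a direct Python-side counterpart in the stdlib `sqlite3` module. Here's a minimal sketch of using `Connection.set_progress_handler` to enforce a query time limit; the helper name `make_timeout_handler` and the deliberately slow cross-join query are my own illustrative choices, not from the project linked above:

```python
import sqlite3
import time

def make_timeout_handler(seconds):
    # Illustrative helper: returns a progress handler that tells SQLite
    # to abort the currently running query once the deadline has passed.
    deadline = time.monotonic() + seconds
    def handler():
        # A nonzero return value from the handler interrupts the query.
        return 1 if time.monotonic() > deadline else 0
    return handler

conn = sqlite3.connect(":memory:")
# Invoke the handler every 100 SQLite virtual-machine instructions.
conn.set_progress_handler(make_timeout_handler(0.05), 100)

try:
    # A deliberately slow query: counting a huge cross join of a
    # recursive CTE against itself.
    conn.execute(
        "WITH RECURSIVE c(x) AS "
        "(SELECT 1 UNION ALL SELECT x+1 FROM c LIMIT 100000) "
        "SELECT count(*) FROM c a, c b"
    ).fetchone()
    status = "completed"
except sqlite3.OperationalError:
    # An interrupted query surfaces as sqlite3.OperationalError.
    status = "aborted"
print(status)
```

The C-level `sqlite3_progress_handler` API works the same way: you register a callback, and a nonzero return value aborts the in-flight statement.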

Would I have learned more if I had spent, realistically, a couple of days figuring out enough C and CPython and SQLite and setup.py trivia to do this without LLM help? Yes. But I don't have two days to spend on this flight of curiosity, so actually I would have learned nothing.

The LLM project took me ~1 minute to prompt and then 15 minutes to consume the lessons at the end. And I can do dozens of this kind of thing a day, in between my other work!


With all due respect you were reading, not learning. It's like when people watch educational YouTube videos as entertainment, it feels like they're learning but they aren't.

It's fine to use the LLMs in the same way that people watch science YouTube content, but maybe don't frame it like it's for learning. It can be great entertainment though.


The YouTube analogy doesn't completely hold.

It's more like jumping on a Zoom screen sharing session with someone who knows what they're doing, asking for a tailored example and then bouncing as many questions as you like off them to help understand what they did.

There's an interesting relevant concept in pedagogy called the "Worked example effect", https://en.wikipedia.org/wiki/Worked-example_effect - it suggests that showing people "worked examples" can be more effective than making them solve the problem themselves.


Ok but you didn't ask any questions in the transcript you provided. Maybe that one was an outlier?

In order to learn you generally need to actually do the thing, and usually multiple times. My point is that it's easy to use an AI to shortcut that part, with a healthy dose of sycophancy to make you feel like you learned so well.


Yeah in this particular case I didn't ask any follow-up questions directly to Claude Code - I pasted a few things into Claude chat though, here's one of those conversations: https://claude.ai/share/9c404b38-efed-4789-bea1-06bca5f5d6e4


Disagree: it can be learning, as long as you build out your mental model while reading. Having educational reading material for the exact thing you're working on is amazing, at least for those with interest-driven brains.

Science YouTube is no comparison at all: while one can choose what to watch, it's a limited menu that's produced for a mass audience.

I agree though that reading LLM-produced blog posts (which many of the recent top submissions here seem to be) is boring.


So if I read a book about something, I'm not "learning it"?

Schools must be the biggest scam ever then :D


[flagged]


don't be an asshole


The OP is not talking about making slop, he's talking about using AI to write good code.


won't someone think of the shareholders?


it feels like people keep attempting this idea, largely because it's easy to build, but in practice people aren't interested in using others' prompts because the cost to create a customized skill/gpt/prompt/whatever is near zero


People want inspiration rather than off-the-shelf prompts

More like a gallery than a marketplace


Think a bit harder on your statement. You'll get there eventually.


We have the power to buy 50 lbs of fentanyl and drip it into our veins continuously for months, experiencing perfect bliss. Such great fortune; why would we resist it?


It's not uncommon for Apple and others to compare against two generations ago rather than the immediately preceding one.


Everything I referenced was about comparing to the M4. I left the comparison with the M1 out of it.


Is this effectively what Cursor did as well? I seem to remember some major pricing change of theirs in the past few months.


In a way I would say they were even worse: instead of outright saying "we've increased our prices", they "clarified their pricing".


> it's just gotten better.

Couldn't agree more. Like many, I've had my honeymoon phase with AI and have now learned what it is good for and what it is not. What it has truly been good for is satisfying the nauseating number of topics I want to learn about. I can spend $20 a month and drill down into any topic I like for as long as I like in an incredibly efficient way. What a time to be alive.


> "This is a good way to test how "AI-native" the candidate is."

yuck


In a world of AI-natives, be an AI-drunken-British-football-fan-tourist.


yuck.


Oracle is doing something petty and absurd? Are you sure?


The hell, you say!


"To shreds, you say?"


I love my Studio Displays, but they would be absolute dogwater for gaming.


Yeah, unfortunately Apple have backed themselves into a corner there with their ~220ppi Retina display standard. Nobody makes monitor-sized panels that dense which also support high refresh rates; they're all limited to 60Hz. The Apple monitors still don't support VRR either: even though the software support has been in place on macOS for a while, it only works with third-party monitors.


I have a 27” 5K display with 218 ppi running at 75Hz. It’s a newly released monitor from Viewsonic.

75Hz would be acceptable; however, macOS has funky mouse acceleration that makes gaming feel weird.


You can turn off mouse acceleration IIRC.


I should be clearer: it's not simply about mouse acceleration, but rather the overall movement curve. It is different from Windows (that's OK), and it feels strange in games if your primary platform has been Windows.


That seems fairly subjective. Specs similar to a MBP? Because mine has a Retina 120Hz display with HDR and a max brightness of 1200 nits.

That MBP display utterly crushes my other standalone monitors: a 4K@60Hz with HDR at 300 nits, and a 1080p@90Hz (SDR).

But of course, I can't wire my Linux gaming PC up to my laptop's display without some surgery.

