The biggest problem is you get conditioned to instant and constant dopamine hits, which works directly against a lot of the things one is supposed to learn in school.
Kids learn their ABCs at record speed in 1st grade. But they don't learn to concentrate, or that learning things can sometimes be challenging, or the value of perseverance and of trusting that understanding eventually comes.
So in later grades they pay for having learned the ABCs too fast through the iPad. Because they didn't learn how to learn.
The net effect of the past 5 years of iPad education in Norwegian classrooms seems to be negative, and it is not about what kids are exposed to. It is about not learning to concentrate.
However, if you move from "bespoke" to just "very small niches", I think lower production costs of software may well open up opportunities that were previously unprofitable.
Not commenting on this specific law, but I do believe the premise that children should be exposed to everything is wrong, and that the overall view on humans in this post is naive.
These days, exposing an immature brain to the raw internet is basically just handing the brain and personality over to be molded by large corporations and algorithms.
And humans have never been rational, self-contained actors that self-educate perfectly when exposed to information, converging on an objectively good and constructive worldview. Quite the opposite.
Humans develop in relation to one another, increasingly in relation to algorithms, and sometimes become messed up, and sometimes those mess-ups would have been avoidable had relations or exposure been different.
In fact I would say you as a parent are not doing your job if you are not trying to make sure a 12-year-old isn't pulled into, say, an anorexia rabbit hole.
Whether that is best done by making sure exposure doesn't happen, or through exposure and education, will depend on the child and parent (and society) in question. What worked best for a highly rational, self-reliant geek teen may simply be a disaster for another human. And what worked for an upper-class, highly educated family may not work for a poor family with parents who are alcoholics, or who work 18 hours a day to make ends meet.
And parents are not perfect -- if all parents were perfect, there also would be no alcoholics and drug addicts or poverty or war. But people are imperfect, and it's natural to make laws to mitigate at least the worst effects of that. (Again, haven't read this specific law proposal, but found the worldview of OP a bit naive.)
> These days, exposing an immature brain to the raw internet is basically just handing the brain and personality over to be molded by large corporations and algorithms.
You make the case that today's internet is unsuitable for young children.
But has this been different, ever, maybe apart from the very first days of the internet?
While access through phones has reshaped the internet fundamentally, I'd propose that it has always been dangerous. When I was 12, a single wrong click could destroy your machine, or lead to a physical bill being sent to my parents' home (which did happen), or lead to deeply disturbing pictures and videos.
So I think it's not the case that we should allow kids completely unsupervised access (as has effectively always been the norm), but it's also naive to think that we can regulate our way out of this at the state or household level (as has always been true).
When my generation "accessed the internet", there was a massive dial-up sound and the single family PC was in the living room, visible to everyone.
Even later when the computer was in my room, I still had to go look for the creepy shit, it didn't appear in my email inbox.
Kids this age browse the internet through algorithmic apps built to maximise engagement, in a corner on their bed in their room. Parental controls for most apps and operating systems are a fucking joke.
Agreed, but isn't this a parental issue? Why aren't parents moving back to a "shared pc in the living room" model?
I absolutely would not allow a kid to have an unregulated smartphone, and then further compound the problem at home by allowing them to access it privately and without interruption. Device management enrollment is trivial on iPhones.
I think there is a drastic difference between a one-off exposure to bad images, and an algorithm choosing, subtly and over time, whether to expose the Pokemon-interested child to racist Pokemon videos vs non-racist Pokemon videos on TikTok. (Or anorexic Pokemon videos, or...)
Amount of time spent and repeated exposure being the key.
The question is really what kind of human is raised, rather than raw exposure as such.
So for that reason things are IMO different than 20 years ago.
Yes, of course some people would fall into internet forum rabbit holes 20 years ago, and paper-letter pen-friend-induced rabbit holes 100 years ago. But it did help that it was like 5% of the population instead of 95% of the population spending their time there.
Regarding your last point, I don't necessarily disagree (again, I didn't check up on this law; I care more about the laws in my own country), but I think arguing against the law will go better if one does not display naivety when making the arguments.
Don't say "it will be better if all kids are exposed to everything early" (it won't), instead say "the medicine will not work and anyway the side-effects are worse than the sickness it intends to cure" (if that is the case).
Even as late as the mid-aughts the internet was mostly nerdy technical information, real people sincerely discussing various topics, and the very worst thing was a little bit of (mostly still-image) porn if you were looking for it.
Kids back then weren't targeted by a stream of continuously A/B tested algorithmic content intended to tell them what to think and shape their brains. Overwhelming evidence exists that social media (as it exists today) is bad for the mental health of young people (and probably adults, too, but at least adults have the presence of mind and lack of social pressure to delete Facebook).
> Even as late as the mid-aughts the internet was mostly nerdy technical information, real people sincerely discussing various topics, and the very worst thing was a little bit of (mostly still-image) porn if you were looking for it.
This is the naive take. In the early-to-late 2000s, you could buy drugs on the clearnet. You could discuss taking those drugs on forums and sites like Erowid.
This was the age of shock sites, gore, extreme porn, 4chan, etc.
At one point a porn actress crushing kittens to death was a meme. 2 girls 1 cup was a meme. Tubgirl was a meme. Goatse was a meme. Ogrish, LiveLeak, etc. were all open access. I once watched someone get burned to death for being a witch*.
These are all things that were one click away, your friends would send them to you for the lulz.
* I am actually glad I saw that. It showed me that those types of things were not in the distant past, civilized people can still be driven by moral panics to do horrific things. Discriminatory ideology still exists, and gone unchecked, leads to wanton violence and reprehensible things, some things that I've experienced myself, but not to that extent. It served as a potent reminder of human nature, and I've watched its template play out over and over again. The delight I saw in the faces of those who perpetrated it is the same delight you see in the faces of those engaging in today's secular witch hunts, moral panics, hate crimes, etc.
I agree, and I believe too many geeks who are now parents (including the author of the blog post) do not realize that the computers they grew up with, and in particular the Internet they grew up with, are nothing like the computers (phones) and the Internet kids have access to today.
The Grimm fairy tales (1819) are full of graphic violence, child abuse, anti-semitism, and incest. They are much more harmful than anything that I've encountered on the Internet. So why are we discussing internet harms instead of book harms? Because people are fucking stupid.
And why are we getting concerned about "sharing private information with random web sites" when that's not the solution being discussed? The solution is a simple handshake:
service: Is the person assigned this device old enough to use this service?
device: Yes/No.
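The handshake described above can be sketched as a service-side check of a single boolean signal. This is only an illustration: the `Device-Age-Ok` header name and its values are invented here, not part of the law or any real OS API.

```python
# Rough sketch of the service side of the handshake.
# The "Device-Age-Ok" header is a hypothetical signal the OS would
# attach on behalf of the account holder's parental-control settings.

def is_old_enough(request_headers: dict) -> bool:
    """Service asks; the device answers yes/no.

    No name, birthdate, or government ID crosses the wire - the
    service learns exactly one bit and nothing else.
    """
    return request_headers.get("Device-Age-Ok", "").lower() == "true"
```

The point of the comment survives in the sketch: no private information is shared with the web site, only a yes/no answer.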
> believe the premise that children should be exposed to everything is wrong
imo this is what is wrong with modern parenting. reality does not care about the child's feelings, and if they are old enough to have a screen with internet unattended, they are old enough for anything
I've seen this view applied to things like TikTok and Instagram, especially with the recent lawsuit. But then when it comes to addressing it, most people seem to flip completely and bemoan parenting and internet freedom. It just ends up in a circular pattern of "this is awful, but we shouldn't do anything about it. These companies are poisoning kids, but any attempts to rectify that are infringing on my right to the internet." Makes a lot of conversations around this topic feel entirely pointless.
The CA/CO law only requires the option to enable parental controls on an account, and as the article points out, can be worked around by a sufficiently determined child using something like a virtual machine. This is not really the government deciding how children should be raised. The parent still has the ability to choose to apply the parental controls.
It's more like the rule that minors can't buy alcohol in bars - parents can still buy alcohol at the supermarket for their children, and sufficiently determined children can find some other adult to buy it for them.
Probably by the time you know how to install a virtual machine, you can handle the unrestricted internet.
The bigger problem is it sets us on a possible path towards completely government-controlled computing devices. The fact that so many countries are pursuing ID requirements online is somewhat of a canary for this whole OS age check thing imo.
If you are not perfect, then don't have kids - if you can't take care of them and nurture them with the attention that they need and rightfully deserve.
My view is that this must be left entirely to the parents. The only time a government should be allowed to interfere is when there are child abuse or neglect cases against the parents and the children are put under child protective care.
It is, in my view, crazy and irresponsible to allow the government to override parents' decisions about what media their children can consume. It is guaranteed that this power will be abused.
The CA/CO law is literally the government writing a law that says it shall be left to the parents but the device must give the parents the options they need.
Because having one OS with parental protections that parents can install is enough to achieve the goal, the laws are obviously overreaching by mandating age controls for every OS when that's clearly not necessary. Having one Linux distribution with age controls that parents can install is much less intrusive and much more achievable than mandating that every minuscule Linux distribution developed by hobbyists in their spare time implement age controls (which is practically impossible and never going to happen). And let's not even get started on the Internet of Things...
Does it effectively outlaw general computing for minors by requiring account holders to set up accounts for minors where account holders are defined as being 18+?
I'm honestly not sure, but I could see that being the result of the law, with companies like Best Buy disallowing minors from purchasing hardware with cash for fear of liability.
for instance, the government can effectively ban you from saying something they don't want you to say by forcing all companies that may provide any substantial platform to you to enforce their speech code
that way they have enforced a ban on you by proxy
the same way, they can verify/certify people's IDs, totally or partially, when they go online, by forcing all vendors who provide the systems you use to go online to enforce it for them
I've obviously read about how bad adult literacy in the US is, but I didn't realize how many "technologists" were impacted by it. The law is short and clear and doesn't involve attestation or age verification. Yet all these "hackers" claim it does just that. Their reading comprehension and critical thinking skills seem to match the national average.
I think most people here are extrapolating the intent behind this law, the triviality with which it can be bypassed by minor account holders, and what that means for the future. Once this law is in effect, it will be ineffectual. Minors that currently don't know what VMs are, what live booting is, what keyloggers are, etc. will learn immediately once blog posts start circulating about bypass mechanisms. Parents will then go back to the legislature and say the law as written sucks, and they will demand better laws, but the only way to get better is to force all devices to authenticate with the ISP using a gov-issued ID/token to prove the account holder is not a minor. But the only way to prevent even further workarounds, like the OS lying, is to force hardware-based remote attestation. And that means the death of general computing and the death of any anonymity.
Most laws are ineffectual. Kids can't drink alcohol but they still can; theft is illegal but I still got your car keys; murder is illegal but people still die. In this one, there's no punishment for bypass, just like there's no punishment for a kid who gets alcohol. Unlike the alcohol law this one doesn't even mandate the use of the child protection features - just their existence.
You know the simple fix to your problem is to mark VMs as adult only apps, anyway.
But what happens when a nefarious actor fills the void and publishes a root-kited VM and marks it as safe for children? These restrictions breed black markets that usually cause even more harm.
> I think most people here are extrapolating the intent behind this law,
This is a revisionist fucking lie. People like you argue against the facts you have absolutely wrong. And when proven wrong you latch onto some tangential argument. But you have no integrity so you pretend it was actually about the other thing and not the thing you actually called out. You don't participate in good faith. You deserve no response in good faith.
ok, that is the argument with merit in favour of shielding kids from the internet - now let's consider what it looks like when the locus of responsibility is governments
it's true that kids are vulnerable to certain forms of content on the internet
it's also true that adults are vulnerable to certain forms of content on the internet
it's also true that governments cannot police "harmful content" on the internet effectively, or even meaningfully, if most people can easily surf the internet pseudonymously
it's also very true that what's on "social media" is very Sybil-vulnerable, and inordinately so right now with the advent of LLMs
what do you think the playbook will look like once there is some sort of tight OS level system that is enforced across the board to certify or verify information about the user?
do you think this level of coordination to push for identifying the user at all levels that is happening across the world in a matter of weeks is genuine concern for the kids alone?
My startup makes software for firefighters to use on tablets during missions, and I'm excited to see (when I get the time) if we can use this as a keyboard alternative on the device. It's a setting where avoiding "clunky" is important, and a perfect use case for speech-to-text.
Due to the sector being increasingly worried about "hybrid threats" we try to rely on the cloud as little as possible and run things either on device or with the possibility of being self-hosted/on-premise. I really like the direction your company is going in in this respect.
We'd probably need custom training -- we need Norwegian, and there's some lingo, e.g., "bravo one two" should become "B-1.2". While that can perhaps also be done with simple post-processing rules, we would also probably want such examples in training for improved recognition? Have no VC funding, but looking forward to getting some income so that we can send some of it in your direction :)
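A simple post-processing rule of the kind mentioned above could look something like this sketch. Everything here is an assumption built from the single example in the comment ("bravo one two" → "B-1.2"): the phonetic and digit tables are illustrative subsets, and the `letter-digit.digit` formatting is just a guess at the call-sign convention.

```python
import re

# Illustrative subsets of a phonetic alphabet and spoken digits.
PHONETIC = {"alpha": "A", "bravo": "B", "charlie": "C", "delta": "D"}
DIGITS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
          "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9"}

# Match "<phonetic word> <digit word> <digit word>" as whole words.
_pattern = re.compile(
    r"\b(" + "|".join(PHONETIC) + r")\s+("
    + "|".join(DIGITS) + r")\s+(" + "|".join(DIGITS) + r")\b",
    re.IGNORECASE,
)

def normalize_callsigns(text: str) -> str:
    """Rewrite spoken call signs like 'bravo one two' as 'B-1.2'."""
    def repl(m: re.Match) -> str:
        letter = PHONETIC[m.group(1).lower()]
        d1, d2 = DIGITS[m.group(2).lower()], DIGITS[m.group(3).lower()]
        return f"{letter}-{d1}.{d2}"
    return _pattern.sub(repl, text)
```

As the comment notes, rules like this handle the formatting, but you would probably still want such phrases in the training data so the recognizer hears "bravo" reliably in the first place.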
Interesting. Can we get in touch? I just sold my webapp/saas where I used NB-Whisper to transcribe Norwegian media (podcast, radio, TV) and offer alerts and search by indexing it using elasticsearch.
Edit: It was https://muninai.eu (I shut down the backend server yesterday so the functionality is disabled).
Isn't talking about "here’s how LLMs actually work" in this context a bit like saying "a human can't be relevant to X because a brain is only a set of molecules, neurons, synapses"?
Or even "this book won't have any effect on the world because it's only a collection of letters, see here, black ink on paper, that is what is IS, it can't DO anything"...
Saying an LLM is a statistical prediction engine of the next token is IMO sort of confusing what it is with the medium it is expressed in/built of.
For instance, take those small experiments that train a network on addition problems, mentioned in a sibling post. The weights end up forming an addition machine. An addition machine is what it is; that is the emergent behavior. The machine-learning weights are just the medium it is expressed in.
What's interesting about LLM is such emergent behavior. Yes, it's statistical prediction of likely next tokens, but when training weights for that it might well have a side-effect of wiring up some kind of "intelligence" (for reasonable everyday definitions of the word "intelligence", such as programming as good as a median programmer). We don't really know this yet.
It's pretty clear that the problem of solving AI is software; I don't think anyone would disagree.
But that problem is MUCH MUCH MUCH harder than people make it out to be.
For example, you can reliably train an LLM to produce accurate output for assembly code that fits into a context window. However, let's say you give it a terabyte of assembly code - it won't be able to produce correct output, as it will run out of context.
You can get around that with agentic frameworks, but all of those right now are manually coded.
So how do you train an LLM to correctly take any length of assembly code and produce the correct result? The only way is essentially to train the structure of the neurons inside it to behave like a computer, but the problem is that you can't do back-propagation with discrete 0 and 1 values unless you explicitly code in the architecture for a CPU. So obviously, error correction with inputs/outputs is not the way we get to intelligence.
It may be that the answer is pretty much a stochastic search where you spin up x instances of trillion parameter nets and make them operate in environments with some form of genetic algorithm, until you get something that behaves like a Human, and any shortcutting to this is not really possible because of essentially chaotic effects.
> For example, you can reliably train an LLM to produce accurate output for assembly code that fits into a context window. However, let's say you give it a terabyte of assembly code - it won't be able to produce correct output, as it will run out of context.
Fascinating reasoning. Should we conclude that humans are also incapable of intelligence? I don't know any human who can fit a terabyte of assembly into their context window.
Any human who would try to do this is probably a special case. A reasonable person would break it down into sub-problems and create interfaces to glue them back together...a reasonable AI might do that as well.
I can tell you from first hand experience that claude+ghidra mcp is very good at understanding firmware, labeling functions, finding buffer overflows, patching in custom functionality
On the other hand the average human has a context window of 2.5 petabytes that's streaming inference 24/7 while consuming the energy equivalent of a couple sandwiches per day. Oh and can actually remember things.
Citation desperately needed? Last I checked, humans could not hold the entirety of Wikipedia in working memory, and that's a mere 24 GB. Our GPU might handle "2.5 petabytes" but we're not writing all that to disc - in fact, most people have terrible memory of basically everything they see and do. A one-trick visual-processing pony is hardly proof of intelligence.
I think the idea is that we may not store 2.5 petabytes of facts like wikipedia.
But we do store a ton of “data” in the form of innate knowledge, memories, etc.
I don’t think human memory/intelligence maps cleanly to computer terms though.
>So obviously, error correction with inputs/outputs is not the way we get to intelligence.
This doesn't seem to follow at all, let alone obviously. Humans are able to reason through code without having to become a completely discrete computer, but probably can't reason through any length of assembly code, so why is that requirement necessary, and how have you shown LLMs can't achieve human levels of competence on this kind of task?
> but probably can't reason through any length of assembly code
Uh what? You can sit there step by step and execute assembly code, writing things down on a piece of paper and get the correct final result. The limits are things like attention span, which is separate from intelligence.
Human brains operate continuously, with multiple parts being active at once, with weight adjustment done in real time both in the style of backpropagation, and real time updates for things like "memory". How do you train an LLM to behave like that?
So humans can get pen and paper and sleep and rest, but LLMs can't get files and context resets?
Give the LLM the ability to use a tool that looks up and records instructions from/to files, instead of holding everything in the context window, and to actively manage its context (write a new context and start fresh), and I think you would find the LLM could probably do it about as reliably as a human.
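As a toy illustration of the idea: the tool names, the dispatch shape, and the JSON call format below are all invented for the sketch (real agent frameworks each define their own), but the principle is the same - the listing lives on disk and the model only ever sees a slice of it plus its own notes.

```python
import json

def read_chunk(path: str, offset: int, length: int) -> str:
    """Return a slice of the listing so the model never loads it all."""
    with open(path) as f:
        f.seek(offset)
        return f.read(length)

def append_note(path: str, note: str) -> None:
    """Let the model persist intermediate results outside its context."""
    with open(path, "a") as f:
        f.write(note + "\n")

def dispatch(tool_call: str) -> str:
    """Route a JSON tool call like {"tool": "read_chunk", "args": {...}}."""
    call = json.loads(tool_call)
    tools = {"read_chunk": read_chunk, "append_note": append_note}
    result = tools[call["tool"]](**call["args"])
    return "" if result is None else result
```

The context window then plays the role of short-term memory, with the note file acting as the "piece of paper" from the comment above.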
Context is basically "short term memory". Why do you set the bar higher for LLMs than for humans?
Couldn't you periodically re-train it on what it's already done and use the context window for more short term memory? That's kind of what humans do - we can't learn a huge amount in short time but can accumulate a lot slowly (school, experience).
A major obstacle is that they don't learn from their users, probably because of privacy. But imagine if your context window was shared with other people, and/or all your conversations were used to train it. It would get to know individuals and perhaps treat them differently, or maybe even manipulate how they interact with each other so it becomes like a giant Jeffrey Epstein.
You're putting a bunch of words in the parent commenter's mouth, and arguing against a strawman.
In this context, "here’s how LLMs actually work" is what allows someone to have an informed opinion on whether a singularity is coming or not. If you don't understand how they work, then any company trying to sell their AI, or any random person on the Internet, can easily convince you that a singularity is coming without any evidence.
This is separate from directly answering the question "is a singularity coming?"
One says "well, it was built as a bunch of pieces, so it can only do the thing the pieces can do", which is reasonably dismissed by noting that basically the only people predicting current LLM capabilities are the ones who are remarkably worried about a singularity occurring.
The other says "we can evaluate capabilities and notice that LLMs keep gaining new features at an exponential, now bordering into hyperbolic rate", like the OP link. And those people are also fairly worried about the singularity occurring.
So mainly you get people using "here's how LLMs actually work" to argue against the Singularity if-and-only-if they are also the ones arguing that LLMs can't do the things that they can provably do, today, or are otherwise making arguments that also declare humans aren't capable of intelligence / reasoning / etc..
False dichotomy. One can believe that LLMs are capable of more than their constituent parts without necessarily believing that their real-world utility is growing at a hyperbolic rate.
Fair - I meant there's two major clusters in the mainstream debate, but like all debates there's obviously a few people off in all sorts of other positions.
There is more than molecules, neurons and synapses. They are made from lower level stuff that we have no idea about (well, we do in this instance but you get the point). They are just higher level things that are useful to explain and understand some things but don't describe or capture the whole thing. For that you would need to go to lower and lower level and so far it seems they go on infinitely. Currently we are stuck at the quantum level, that doesn't mean it's the final level.
OTOH, an LLM is just a token prediction engine. It fully and completely covers it. There is no lower level secrets hidden in the design nobody understands, because it could not have been created if there was. The fact that the output can be surprising is not evidence of anything, we have always had surprising outputs like funny bugs or unexpected features. Using the word "emergence" for this is just deceitful.
This algorithm has fundamental limitations, and they have not been getting better if you look closely. For instance, you could vibe code a C compiler now, but it's 80% there - a cute trick, but not usable in the real world. Just like anything, it cannot be economically vibe coded to 100%. They are not going back and vibe coding the previous, simpler projects to 100% with "improved" models. Instead they are just vibe coding something bigger to 80%. This is not an improvement in limitations; it is actually communicating between the lines that the limitations cannot be overcome.
They're not powertools lol. Tech has plenty of powertools and we automated the crap out of our job already.
Writing code has never been the limiting factor, it's everything else that goes into it.
Like, I don't mind that there's a bunch of weekend warriors out here building shoddy gazebos and sheds with their brand new overpriced tools, incorrecting each other on the best way to do things. We had that with the bitcoin and NFT bros already.
What I do roll my eyes at is when the bros start talking about how they're totally going to build bridges and planes and it's gonna be soooo easy to get to new places, just slap down a bridge.
Uh huh. Y'all do not understand what building those actually entails lol.
But if you try some penny-saving cheap model like Sonnet [..bad things..]. [Better] pay through the nose for Opus.
After blowing $800 of my bootstrapped startup's funds on Cursor with Opus in a very productive January, I figured I had to change things up... so this month I'm jumping between Claude Code and Cursor, sometimes writing the plans and having the conversation in Cursor and dumping the implementation plan into Claude.
Opus in Cursor is just so much more responsive and easy to talk to, compared to Opus in Claude Code.
Cursor has this "Auto" mode which feels like it has very liberal limits (amortized cost, I guess) that I'm also trying to use more, but -- I don't really like to flip a coin and, if it lands heads, waste half an hour discovering the LLM made a mess, then try again while forcing the model.
Perhaps in March I'll bite the bullet and take this author's advice.
Yeah, I can’t recommend gpt-5.3-codex enough, it’s great! I’ve been using it with the new macOS app and I’m impressed. I’ve always been a Claude Code guy and I find myself using codex more and more. Opus is still much nicer explaining issues and walking me through implementations but codex is faster (even with xhigh effort) and gets the job done 95% of the time.
I was spending unholy amounts of money and tokens (subsidized cloud credits, tho) forcing Opus for everything, but I'm very happy with this new setup. I've also experimented with OpenCode and their Zen subscription to test Kimi K2.5 and similar models, and they also seem like a very good alternative for some tasks.
What I cannot stand, tho, is using Sonnet directly (it's fine as a subagent): I've found it hard to control, and it doesn't follow detailed instructions.
Out of curiosity, what’s your flow? Do you have codex write plans to markdown files? Just chat? What languages or frameworks do you use?
I’m an avid cursor user (with opus), and have been trying alternatives recently. Codex has been an immense letdown. I think I was too spoiled by cursor’s UX and internal planning prompt.
It’s incredibly slow, produces terribly verbose and over-complicated code (unless I use high or xhigh, which are even slower), and missed a lot of details. Python/django and react frontend.
For the first time I felt like I could relate to those people who say "it doesn't make them faster", because they have to keep fixing the agent's mistakes. Never felt that with Opus 4.5 and 4.6 and Cursor.
Codex cli is a very performant cli though, better than any other cli code assistant I've used.
I mean, does it matter what code it's producing? If it renders and functions, just use it. I think it's better to take the L on verbose code and optimize the really ugly bits by hand in a few minutes than be kneecapped every 5 hours by limits and constant pleas to shift to Sonnet.
I promise you you're just going to continue to light money on fire. Don't fall for this token madness, the bigger your project gets, the less capable the llm will get and the more you spend per request on average. This is literally all marketing tricks by inference providers. Save your money and code it yourself, or use very inexpensive llm methods if you must.
I think we are going to start hearing stories of people going into thousands in CC debt because they were essentially gambling with token usage thinking they would hit some startup jackpot.
Compared to the salary I lose by not taking a consulting gig for half a year, these $800 aren't all that much. (I guess depending on the definition of bootstrap, mine might not be, as I support myself with saved consulting income.)
Startup is a gamble with or without the LLM costs.
I have been coding for 20 years, I have a good feel for how much time I would have spent without LLM assistance. And if LLMs vanish from the face of the earth tomorrow, I still saved myself that time.