You don't have to; that's the reason. There were multiple successful EU alternatives that were killed or hobbled by the loads of money the US companies could muster. And Europe decided it was fine.
There isn't even a European card brand that operates across the whole continent; they just accepted using Visa and Mastercard for everything. I hope they change that.
And several European countries had their own card systems. The banks just decided that letting US companies do the work was more lucrative. It was definitely cheaper, and it was necessary if they wanted to be part of the US hegemony network and trade with Asian countries, since many of them had bad relationships with Europe due to colonialism.
The local card systems still exist in most places, but they only work if you have a card from that country. For people travelling across Europe they're useless: once you cross the border, people won't accept that card anymore and you're back to relying on Visa/Mastercard only.
Eurocheque existed for a long while in Europe: https://en.wikipedia.org/wiki/Eurocheque . But yes, trusting the US economic and political partnership, and also choosing the cheaper option, the European banks eventually decided not to do the legwork of establishing a (global) payment network and settled on the American Visa and Mastercard networks.
While I greatly dislike the hype, and believe that most of what people say isn't real or that whatever they're building is bullshit, I can definitely see the improvement in productivity, especially when working with agents.
I think the problem is that people:
* see the hype;
* try to replicate the hype;
* it fails miserably;
* they throw everything away.
I'm on call this week at my job, and one of the issues was adding a quick validation (verifying that the length of a thing was exactly 15). I could have sat down and done that myself, but instead I just spun up an agent, told it where the code was, told it how to add the change (we always put these behind feature flags; see the sketch below), read the code, prompted it to fix one thing, and boom, the PR was ready. I wrote 3 paragraphs, didn't have to sit and wait for CI or any of the other bullshit to get it done, and focused on more important stuff but still got the fix out.
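For illustration, here's a minimal sketch of what that kind of feature-flagged check might look like. The flag name, the ValidationError class, and the validate_code helper are my own placeholders, not the actual codebase:

    # Hypothetical sketch: flag name and helper names are illustrative placeholders.
    EXPECTED_LENGTH = 15

    class ValidationError(Exception):
        pass

    def validate_code(value: str, flags: dict) -> None:
        # Ship the check dark behind a feature flag, so it can be
        # enabled (or rolled back) without another deploy.
        if not flags.get("enforce-code-length", False):
            return
        if len(value) != EXPECTED_LENGTH:
            raise ValidationError(
                f"expected length {EXPECTED_LENGTH}, got {len(value)}"
            )

So validate_code("A" * 15, {"enforce-code-length": True}) passes silently, while anything shorter or longer raises. The real change also involved flag plumbing and tests, which is why the actual diff was far more than a one-line check.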
Don't believe the hype, but also don't completely discount the tools. They are an incredible help, and while they will not boost your productivity by 500%, they're amazing.
The field of "AI assisted programming" is progressing extremely rapidly.
Claude Code (arguably the most recent large change, even if it wasn't the first of its type) was released one year ago.
After watching the video I'd say that it is similar to my own reaction when opening my own code that is 2 years old. (To be clear, code I myself wrote 2 years ago, without AI.) Or even more realistically, code I wrote 6 months ago.
But it mostly reads like someone making an exaggerated claim to get a boost from a populist narrative.
I've started disregarding any AI take that is all-or-nothing. These tools are useful for certain work once you've learned their limitations. Anyone making sweeping claims about vibecoding-style use being viable at scale or making claims that they're completely useless is just picking a side and running with it.
Different outlets tilt in different directions. On HN and some other tech websites it's common to find declarations that LLMs are useless from people who tried the free models on ChatGPT (which isn't the coding model) and jumped to conclusions after the first few issues. On LinkedIn it's common to find influencers who used ChatGPT for a couple of things at work and are ready to proclaim it's going to handle everything in the future (including writing the text of their LinkedIn posts).
The most useful, accurate, and honest LLM information I've gathered comes from spaces where neither extreme prevails. You have to find people who have put in the time and are realistic about what can and cannot be accomplished. That's when you start learning the techniques for using these tools for maximum effect and where to apply them.
>> The most useful, accurate, and honest LLM information I've gathered
>> comes from spaces where neither extreme prevails
Do you have any pointers to good (public) spaces like this? Your take sounds reasonable, and so I'm curious to see that middle-ground expression and discussion.
"They're perfectly justified: the majority of hot new whatevers do turn out to be a waste of time, and eventually go away. By delaying learning VRML, I avoided having to learn it at all."
> The most useful, accurate, and honest LLM information I've gathered comes from spaces where neither extreme prevails. You have to find people who have put in the time and are realistic about what can and cannot be accomplished. That's when you start learning the techniques for using these tools for maximum effect and where to apply them.
This requires the level of professionalism that 97.56% of SWEs do not have
I use LLMs for coding in the exact opposite way as described in the video. The video says that most people start big, then the LLM fails, then they reduce the scope more and more until they're actually doing most of the work while thinking it's all the machine's work.
I use AI in two ways. With Python, I ask it to write micro functions and I do all of the general architecture. This saves a lot of time, but I could do without AI if need be.
But recently I also started making small C utilities that each do exactly one thing, and for those, the LLMs write most if not all of the code. I start very small with a tiny proof of concept and iterate over it, adding functionality here and there until I'm satisfied. I still inspect the code and suggest refactorings, or moving things into independent, reusable modules for static linking, etc.
But I'm not a C coder and I couldn't make any of these apps without AI.
Since the beginning of the year, I have made four of them. The code is probably subpar, but they all work great, never crash, and I use them every day.
I wonder what sort of training data the AI was fed. It's possible that if whatever was used most were put together into a reference cookbook, a human could do most of the work almost as fast with more normal searches of that data, in an overall more efficient way.
What about stack/buffer overflows, use-after-free, and all of the nasty memory allocation/deallocation security pitfalls? These are what I would worry about with C programs.
With respect for your priorities, which may be different from mine, it would sadden me a little to always have to work like this, because it would rob me of the chance to think deeply about the problem, perhaps notice analogies with other processes, or more general or more efficient ways to solve the problem, perhaps even why it was necessary at all. Those are the tasty bits of the work, a little something just for me, not for my employer.
Instead my productivity would be optimised in service of my employer, while I still had to work on other things, the more important work you cite. It's not like I get to finish work early and have more leisure time.
And that's not to mention, as discussed in the video, what happens if the code turns out to be buggy later. The AI gets the credit, I get the blame.
"perhaps even why it was necessary at all" not being asked anymore is what I fear as well. Stumbling over problems repeatedly gets attention to architectural defects. Papering over the faults with a non-complaining AI buries the defects even deeper, as the pain of the defects isn't felt anymore.
>the chance to think deeply about the problem, perhaps notice analogies with other processes, or more general or more efficient ways to solve the problem, perhaps even why it was necessary at all. Those are the tasty bits of the work, a little something just for me, not for my employer.
You should be aiming to use AI in a way that the work it does gives you more time to work on these things.
I can see how people could end up in an environment where management expects AI use to simply increase the speed of exactly what they do right now. That's when people expect the automobile to behave like a faster horse. I do not envy people placed in that position. I don't think that is how AI should be used, though.
I have been working on test projects using AI. These are projects where there is essentially no penalty for failure, and I can explore the bounds of what they offer. They are no panacea, and people will be writing code for a long while yet, but the bounds of their capability are certainly growing. Working on ideas with them, I have been able to think more deeply about what the code was doing and what it should do. Quite often, a lot of the deep thinking in programming is gaining a greater understanding of what the problem really is. You can benefit from using AI to ask for a quick solution simply to get a better understanding of why a naive implementation will not work. You don't need to use any of that code at all, but it can easily show you why something is not as simple as it seems at first glance.
I might post a Show HN in a bit about a test project I started over the Christmas break. It's a good example of what I mean. I did it in Claude Artifacts instead of Claude Code just to see how well I could develop something non-trivial in this manner. There have certainly been periods of frustration trying to get Claude to understand particular points, but some of those areas of confusion came from my presumptions about what the problem was and how they differed from what the problem actually was. That is exactly the insight you refer to as the tasty bits.
I think there is some adaptation needed to how you feel about the process of working on a solution. When you are stuck on a problem and are trying things that should make it work, the work can absorb you in the process. AI can diminish this, but I think some of that is precisely because it is giving you more time to think about the hard stuff, and that hard stuff is, well, hard.
I don't think that's really true. You still need to think to make what you want to make. You still have to design the program. You just do less typing.
In a sense, AI coding is like using a 3D printer. The machine outputs the final object, but you absolutely decide how it will look and how it will work.
Using AI for coding (or anything) is like being a manager. Successes get passed on to the team (with some to the manager), but the blame stops at the manager.
I had to make a small CSS change yesterday. I asked the LLM to do it, which took about 2 min. I also did it myself at the same time just to check and it took me 23 seconds.
I agree the tools are amazing; if you sit back and think about it, it's insane that you can generate code from conversation.
But so far I haven’t found the tools to be amazingly productive for me for small things like that. It’s usually faster for me to just write the thing directly than to have a conversation, and maybe iterate once or twice. Something that takes 5 minutes just turns into 15 minutes and I still need to be in the loop. If I still need to read the code anyway it’s not effective use of my time.
Now what I have found incredibly productive is LLM-assisted code completion in a read-eval-print loop. That, and helping to generate inline documentation.
The final diff is 235 lines added. I could definitely have written the code faster, but then the write/CI/run-tests loop would have made it take longer anyway; having the agent write the code and do that cycle based on what I asked it to do ended up being more productive. It obviously wasn't just an if len(thing) != 15 exception.
This isn't the usecase that's being criticized. Here you are responsible for the fix. You will validate and ship it and if something goes wrong, you will answer for it. The AI won't answer for it.
You have a pragmatic approach, but you don't have a PR unless you have inspected and tested the code. Just submitting code generated by a tool, without personally vouching for its correctness and sanity, is not engineering.
Same, it's so good for these little things. And once you start adding rules and context for the "how to add the change/feature flags", etc., you get those 3 paragraphs down. Now our head of product is able to fire off small changes instead of asking a dev and making them context switch, etc. Devs still review, but the loop is so much shorter.
Probably not 7 days a week, but a couple days a week, sure.
And of course not everyone. Maybe 10%?
Not that it matters. What do I care about the needs of some Texans? (I mean that non-pejoratively.) I mean, just because ranchers still need horses doesn't mean the rest of us have to use them.
The world will go EV, even much of the US will go EV, regardless of what some folks need.
Almost nobody is driving 200 miles to get to work. Almost everybody will move if their commute is more than half an hour; this holds throughout history, and includes hunter-gatherers deciding to move the tribe, peasants walking to their fields... There are a few people driving that far in the US, but either they are planning to move soon, or they don't expect the job to last long.
There are a lot of people driving more than 200 miles a day for work though. Many of them are in cars because their unique skills are why they need to be there (as opposed to delivery drivers who are bringing cargo).
There are also people who drive a long distance once a week. I know of a rural hospital that pays a lot of doctors to drive in on Thursday so locals don't have to go to the city. (they keep an ER, but the rest of the hospital is empty other than a few nurses the rest of the week)
Nobody is going to uncover anything from bets placed an hour before the event happens. This isn't the stock market, and trying to connect it to what happens on the stock market makes no sense.
The sport whose leader shoved his head so far up Trump's ass he was able to taste his orange make-up. All for the sake of giving him a farce of a "peace" prize.
(I'm talking about FIFA in case you are not aware)
Because these are not solutions, they're just fluff. If they were actual solutions these people would kill them because they'd diminish the power their real overlords have.
Just look at the whole circus around the hyperloop instead of just building high speed trains.
It is much harder to blame Meta, because the content is dispersed and they can always say "they decided to consume this/join this group/like this page/watch these videos", while ChatGPT is directly telling the person their mother is trying to kill them.
Not that the actual effect is any different, but for a jury the second case is much stronger.
> world where everything is perfect and made to be consumed by LLMs
I believe the parent poster was clearly and specifically talking about software documentation that was strong and LLM-consumption-friendly, not "everything".
You SHOULD be making things in a human/LLM-readable format nowadays anyway if you're in tech; it'll do you well, with AIs citing what you write and content aggregators - like search engines - giving it more preferential scores.