My favorite was Thinking, as it tried to be helpful with a response a bit like the X/Y Problem. Pro was my second favorite: terse, while still explaining why. Fast sounded like it was about to fail, and then did a change-up explaining a legitimate reason I may walk anyways. Pro + Deep Think was a bit sarcastic, actually.
Those are all stateless MVC over HTTP, which is a very different architecture from stateful MVC for long-lived UI. The latter was invented for Smalltalk by Trygve Reenskaug, and is far more relevant to front-end web.
Stateful MVC uses Publisher/Subscriber (or Observer) to keep Views and Controllers up-to-date with changing Models over time, which is irrelevant for stateless MVC over HTTP. Plus, in stateful MVC the View and Controller are often "pluggable," where a given Controller+Model may use a different View for displaying the same data differently (e.g. table vs. pie chart), or a given View+Model may use a different Controller for handling events differently (e.g. mouse+keyboard vs. game controller). In stateless MVC over HTTP, by contrast, the controller is the "owner" of the process, and won't generally be replaced.
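A minimal sketch of that classic stateful arrangement, with Observer wiring and pluggable Views/Controllers (all class and method names here are invented for illustration, not taken from any particular framework):

```typescript
// Minimal stateful MVC sketch: the Model publishes changes over time,
// and whichever View is plugged in subscribes to stay up to date.
type Listener = () => void;

class SalesModel {
  private listeners: Listener[] = [];
  private data: number[] = [];

  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }

  addSale(amount: number): void {
    this.data.push(amount);
    this.listeners.forEach((notify) => notify()); // publish to subscribers
  }

  getData(): readonly number[] {
    return this.data;
  }
}

// "Pluggable" Views: same Model, different presentations of the same data.
interface View {
  render(data: readonly number[]): void;
}

class TableView implements View {
  render(data: readonly number[]): void {
    console.log("table:", data.join(" | "));
  }
}

class ChartView implements View {
  render(data: readonly number[]): void {
    console.log("chart:", data.map((n) => "#".repeat(n)).join(" "));
  }
}

// The Controller handles input and can be swapped (e.g. for a gamepad one)
// without touching the View or the Model.
class KeyboardController {
  constructor(private model: SalesModel) {}
  onKeyPress(key: string): void {
    if (key === "a") this.model.addSale(3); // hypothetical key binding
  }
}

const model = new SalesModel();
const view: View = new TableView(); // swap in ChartView to change only the display
model.subscribe(() => view.render(model.getData()));
new KeyboardController(model).onKeyPress("a");
```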
And in the world of front-end web, stateful MVC really is mostly dead. MVVM and Component-based architectures (using the Composite pattern) have replaced it. A runtime is usually responsible for wiring up events, rather than individual controllers. Controllers don't need to be swappable because events can be given semantic meaning in components, and Views don't need to be swappable because you can instead render a sub-composite to change how the data is shown.
Is the Controller not in a coupled pair with a View? We could imagine an interface where it could be completely separate (e.g. a kiosk TUI where stuff like "press 'r' for X" is displayed), but in the vast majority of UIs the View has state, and the Controller has to depend on that state (e.g. did this keypress happen with a text field focused). Sure, this is abstracted away via the UI framework, and we usually operate on some form of event system.
But even then, I don't really see how we could have a non-coupled controller-view. In fact, I seem to remember that it was described in a similar way for Smalltalk even.
You can have decoupled Controllers from Views using React. That's the basis of the "original" Flux/Redux architecture used by React developers 10+ years ago when React was just beginning to get traction.
A Flux/Redux "Store" acts as a Model: it contains all the global state and decides exactly what gets rendered. A Flux/Redux "Dispatcher" acts as a Controller. And React "Components" (views) get their props from the Store and send events to the Dispatcher, which in turn modifies the Store and forces a redraw.
Of course they aren't "entirely decoupled," because the view still has to call the controller functions, but the same controller action can be called from multiple views, and you can still design the architecture starting from the Model, through the Controller (which properties can change under what conditions), and then design the Views (where the interactions can happen).
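A rough sketch of that shape (hand-rolled here rather than using the actual Redux API, with made-up action names and plain functions standing in for React components):

```typescript
// Flux-style sketch: the Store holds all state (Model), dispatch acts as the
// Controller, and view functions subscribe to the Store and render from it.
type State = { count: number };
type Action = { type: "increment" } | { type: "reset" };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "increment":
      return { count: state.count + 1 };
    case "reset":
      return { count: 0 };
  }
}

function createStore(initial: State) {
  let state = initial;
  const subscribers: Array<(s: State) => void> = [];
  return {
    getState: () => state,
    subscribe: (fn: (s: State) => void) => subscribers.push(fn),
    dispatch: (action: Action) => {
      state = reducer(state, action);         // controller logic decides the next state
      subscribers.forEach((fn) => fn(state)); // force a redraw of every view
    },
  };
}

const store = createStore({ count: 0 });

// Two different "views" of the same state; either one can fire the same
// controller action without knowing the other exists.
store.subscribe((s) => console.log(`counter view: ${s.count}`));
store.subscribe((s) => console.log(`debug view: ${JSON.stringify(s)}`));

store.dispatch({ type: "increment" });
```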
I was asking more in the abstract. Web UI frameworks usually sit on top of considerable abstraction (in the form of the DOM, eventing system, etc), so I'm not sure your reply exactly answers my question.
Whether application state is short-lived (e.g., request/response CRUD) or long-lived (e.g., an in-memory interactive UI) is orthogonal to MVC. MVC is a structural separation of responsibilities between model, view, and control logic. The duration of state affects implementation strategy, not the applicability of the pattern itself.
> MVC is a structural separation of responsibilities between model, view, and control logic.
Yes, but the “MVC” pattern used by various back-end web frameworks that borrowed the term a while back actually has very little to do with the original MVC of the Reenskaug era.
The original concept of MVC is based on a triangle of three modules with quite specific responsibilities and relationships. The closest equivalent on the back-end of a web application might be having a data model persisted via a database or similar, and then a web server providing a set of HTTP GET endpoints allowing queries of that model state (perhaps including some sort of WebSocket or Server-Sent Event provision to observe any changes) and a separate set of HTTP POST/PUT/PATCH endpoints allowing updates of the model state. Then on the back end, your “view” code handles any query requests, including monitoring the model state for changes and notifying any observers via WS/SSE, while your “controller” code handles any mutation requests. And then on the front end, you render your page content based on the back-end view endpoints, subscribe for notifications of changes that cause you to update your rendering, and any user interactions get sent to the back-end controller endpoints.
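To make that hypothetical concrete, here is a sketch of such a split (Express and SSE are used purely as an illustration; the task model and endpoints are invented, and as noted just below, real back-end "MVC" frameworks aren't actually organized this way):

```typescript
import express from "express";

// Shared model state, observable by the back-end "view" code.
type Task = { id: number; title: string };
const tasks: Task[] = [];
const observers: Array<() => void> = [];
const notifyObservers = () => observers.forEach((notify) => notify());

const app = express();
app.use(express.json());

// "View" side: read-only queries, plus change notifications over SSE.
app.get("/tasks", (_req, res) => res.json(tasks));

app.get("/tasks/events", (_req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  const push = () => res.write(`data: ${JSON.stringify(tasks)}\n\n`);
  observers.push(push);
  push(); // send the current state immediately
});

// "Controller" side: mutations only; it never renders anything itself.
app.post("/tasks", (req, res) => {
  const task: Task = { id: tasks.length + 1, title: req.body.title };
  tasks.push(task);
  notifyObservers(); // the change flows back out through the view side
  res.status(201).json(task);
});

app.listen(3000);
```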
In practice, I don’t recall ever seeing an “MVC” back-end framework used anything like that. Instead, they typically have a “controller” in front of the “model” and have it manage all incoming HTTP requests, with “view” referring to the front-end code. This is fundamentally a tiered, linear relationship and it allocates responsibilities quite differently to the original, triangular MVC.
> However, it is important to ask if you want to stop investing in your own skills because of a speculative prediction made by an AI researcher or tech CEO.
I don't think these are exclusive. Almost a year ago, I wrote a blog post about this [0]. I spent the time since then both learning better software design and learning to vibe code. I've worked through Domain-Driven Design Distilled, Domain-Driven Design, Implementing Domain-Driven Design, Design Patterns, The Art of Agile Software Development, 2nd Edition, Clean Architecture, Smalltalk Best Practice Patterns, and Tidy First?. I'm a far better software engineer than I was in 2024. I've also vibe coded [1] a whole lot of software [2], some good and some bad [3].
[1]: As defined in Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge, wherein you still take responsibility for the code you deliver.
I personally found out that knowing how to use AI coding assistants productively is a skill like any other: a) it requires a significant investment of time, b) it can be quite rewarding to learn, just as any other skill, c) it might be useful now or in the future, and d) it doesn't negate the usefulness of any other skills acquired in the past, nor does it diminish the usefulness of learning new skills in the future.
Agreed, my experience and code quality with Claude Code and agentic workflows have dramatically improved since investing in learning how to properly use these tools. Ralph Wiggum-based approaches and HumanLayer's agents/commands (in their .claude/) have boosted my productivity the most. https://github.com/snwfdhmp/awesome-ralph and https://github.com/humanlayer
As much as I loved this article's relation of vibe coding to slots and their related flow states, I also think what you are stating is the exact reason these tools are not the same as slots: the skill gap is there, and it's massive.
I think there are a ton of people just pulling the lever over and over, instead of stepping back and considering how they should pull the lever. When you step back and consider this, you are for sure going to end up falling deeper into the engineering and architecture realm, ensuring that continually pulling the lever doesn't result in potential future headaches.
I think a ton of people in this community are struggling with the loss of flow state, and attempting to still somehow enter it through prompting. The game, in my view, has simply changed: it's more about generating the code and being thoughtful about what comes next. It's the rapid use of a junior to design your system, and if you overdo the rapidness, the junior will give you headaches.
> I think there are a ton of people just pulling the lever over and over, instead of stepping back and considering how they should pull the lever
There are deeper considerations, like why pull the lever, or is it even the correct lever? So many API usages amount to someone using a forklift to go to the gym (bypassing the point), to lift a cereal box (overpowered), or to do watchmaking (very much the wrong tool).
Programming languages are languages, yes. But we only use them for two reasons: they can be mapped down to a hardware ISA, and they're human-shaped. The computer doesn't care about a wrong formula as long as it can compute it. So it falls on us to ensure that the correct formula is being computed. And a lot of AI proponents are trying to get rid of that part.
On using AI assistants: I find that everything is moving so fast that I constantly feel like "I'm doing this wrong". Is the answer simply "dedicate time to experimenting"? I keep hearing "spec driven design" or "Ralph"; maybe I should learn those? Genuine thoughts and questions, btw.
More specifically regarding spec-driven development:
There's a good reason that most successful examples with those tools, like openspec, are to-do apps etc. As soon as the project grows to a 'relevant' size of complexity, maintaining specs is just as hard as whatever any other methodology offers. Also, from my brief attempts: similar to human-based coding, we actually do quite well with incomplete specs. So do agents, but they'll shrug at all the implicit things much more than humans do. So you'll see more flip-flopping on things you did not specify, and if you nail everything down hard, the specs get unwieldy - large and overly detailed.
> if you nail everything down hard, the specs get unwieldy - large and overly detailed
That's a rather short-sighted way of putting it. There's no way that the spec is anywhere near as unwieldy as the actual code, and the more details, the better. If it gets too large, work on splitting a self-contained subset of it into a separate document.
> There's no way that the spec is anywhere near as unwieldy as the actual code, and the more details, the better.
I disagree - the spec is more unwieldy, simply by the fact of using ambiguous language without even the benefit of a type checker or compiler to verify that the language has no ambiguities.
People are quick to forget that programming languages are specs. And a good technique for coding is to build up your own set of symbols (variables, structs, and functions) so that the spec becomes easier to write and edit. Writing specs in natural language is playing Russian roulette with the goals of the system, using AI as the gun.
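A toy illustration of that idea (the domain names here are invented): once the vocabulary exists as typed symbols, the top-level code reads almost like the spec itself, and the compiler checks it for ambiguity.

```typescript
// Hypothetical domain vocabulary, built up as typed symbols.
type Customer = { yearsActive: number };
type Order = { total: number; discountApplied: boolean };

const isLoyalCustomer = (c: Customer): boolean => c.yearsActive >= 3;

const applyLoyaltyDiscount = (order: Order): Order => ({
  ...order,
  total: order.total * 0.9,
  discountApplied: true,
});

// The "spec", written in those symbols: loyal customers get the discount.
function priceOrder(customer: Customer, order: Order): Order {
  return isLoyalCustomer(customer) ? applyLoyaltyDiscount(order) : order;
}
```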
Everybody feels like this, and I think nobody stays ahead of the curve for a prolonged time. There's just too many wrinkles.
But also, you don't have to upgrade every iteration. I think it's absolutely worthwhile to step off the hamster wheel every now and then, just work with your head down for a while, and come back after a few weeks. You notice that even though the world didn't stop spinning, you didn't get the whiplash of every rotation.
I don’t think Ralph is worthwhile; at least, the few times I’ve tried to set it up, I spent more time fighting to get the configuration right than if I had simply run the prompt. Coworkers had similar experiences; it’s better to set a good allowlist for Claude.
I think find what works for you, and everything else is kind of noise.
At the end of the day, it doesn’t matter if a cat is black or white so long as it catches mice.
——
I've also found that picking something and learning about it helps me with mental models for picking up other paradigms later, similar to how learning Java doesn’t actually prevent you from, say, picking up Python or JavaScript.
The addictive nature of the technology persists though. So even if we say certain skills are required to use it, it must also come with a warning label and be avoided by people with addictive personalities, substance abuse issues, etc.
I have a hypothesis about why it's addictive. I have no data to back it up other than knowing a lot of addicted people and having studied neuroscience, yet I still think there's something to it. It's definitely not fully true, though.
Addiction occurs because as humans we bond with people, but we also bond with things. It could be an activity, a subject, anything. We get addicted because we're bonded to it. Usually this happens because we're not in fertile ground to bond with what we need to bond with (usually a good group of friends).
When I look at addicted people, a lot of them bond with people that have not-so-great values (big house, fast cars, designer clothing, etc.), adopt those values themselves, and get addicted to drugs. These drugs are usually supplied by the people they bond with. However, they bond with those people in the first place because they are aimless and received little guidance in their upbringing.
I'm just saying all that to make more concrete what I mean about "good people".
Back to LLMs. A lot of us are bonding with them, even if we still perceive them as an AI. We're bonding with them because certain emotional needs are not being fulfilled. Enter a computer that will listen endlessly to you and is intellectually smarter than most humans, although it makes very, very dumb mistakes at times (like ordering 1000+ drinks when you ask for a few).
That's where we're at right now.
I've noticed I'm bonded with it.
Oh, and to some who feel this opinion is a bit strong: it is. But consider that we used to joke that "Google is your best friend" when it first came out, and long thereafter. I think there's something to this take, but it's not fully in the right direction.
> knowing how to use AI coding assistants productively is a skill like any other
No, it's different from other skills in several ways.
For one, the difficulty of this skill is largely overstated. All it requires is basic natural language reading and writing, the ability to organize work and issue clear instructions, and some relatively simple technical knowledge about managing context effectively, knowing which tool to use for which task, and other minor details. This pales in comparison with the difficulty of learning a programming language and classical programming. After all, the entire point of these tools is to lower the skill required for tasks that were previously inaccessible to many people. The fact that millions of people are now using them, with varying degrees of success for various reasons, is a testament to this.
I would argue that the results depend far more on the user's familiarity with the domain than their skill level. Domain experts know how to ask the right questions, provide useful guidance, and can tell when the output is of poor quality or inaccurate. No amount of technical expertise will help you make these judgments if you're not familiar with the domain to begin with, which can only lead to poor results.
> might be useful now or in the future
How will this skill be useful in the future? Isn't the goal of the companies producing these tools to make them accessible to as many people as possible? If the technology continues to improve, won't it become easier to use, and be able to produce better output with less guidance?
It's amusing to me that people think this technology is another layer of abstraction, and that they can focus on "important" things while the machine works on the tedious details. Don't you see that this is simply a transition period, and that whatever work you're doing now, could eventually be done better/faster/cheaper by the same technology? The goal is to replace all cognitive work. Just because this is not entirely possible today, doesn't mean that it won't be tomorrow.
I'm of the opinion that this goal is unachievable with the current tech generation, and that the bubble will burst soon unless another breakthrough is reached. In the meantime, your own skills will continue to atrophy the more you rely on this tech, instead of on your own intellect.
> The fact that millions of people are now using them, with varying degrees of success for various reasons, is a testament to this.
I do agree with you that, by design, this new tool lowers the barrier to entry, etc.
But I just want to state the obvious: billions of kids are playing with a ball; it's not that hard. Yet far fewer people are good soccer players.
> The goal is to replace all cognitive work. Just because this is not entirely possible today, doesn't mean that it won't be tomorrow.
> [..]
> I'm of the opinion that this goal is unachievable with the current tech generation
> [..]
> In the meantime, your own skills will continue to atrophy the more you rely on this tech [..]
Here I don't quite follow.
I agree that if this tech is ready to completely replace you, you won't need to use your brain.
But provided it is not there yet (like, at all), your intellect is needed quite a lot to get anything more than toys out of it.
The question is: do you benefit from using it or not? Can you build faster or better by applying these tools in the appropriate way, or should you just ignore it and keep doing things the way they used to be done up until a few months ago?
This is a legit question.
My point is: in order to answer this question, I cannot base my intuition only on some vague first principles about what this tech stack ought to be able to do, or what other people say it's able to do, or what I suspect it will never be able to do. I need to touch it, to learn how to use it, just like every other tool. That's the only way I can truly get a sensible answer. And like any other skill, I'm fully aware that I can't devote just a few minutes to trying it out and then reach any conclusion.
EDIT: I do share a general concern about how new generations are going to achieve a full-picture understanding if they are exposed to these tools as the main approach to software production. I come to this after a long career in systems programming, so I don't personally see this as a threat of atrophying my own skills; but I do share a rather undefined sense of concern about where this is going.
> billions of kids are playing with a ball; it's not that hard. Yet far fewer people are good soccer players.
I agree, but I don't see how that negates what I said.
Following your analogy, what's currently happening is that kids playing with a ball are now allowed to play in the major leagues. Good soccer players still exist, and their performance has arguably improved as well, but kids are now entering spaces that were previously inaccessible to them. This can be seen as either a good or a bad thing, but I would argue that it will mostly have bad consequences for everyone involved, including the kids.
> The question is: do you benefit from using it or not? Can you build faster or better by applying these tools in the appropriate way, or should you just ignore it and keep doing things the way they used to be done up until a few months ago?
That's a false dichotomy. I would say that the answer is somewhere in the middle.
These new tools can assist with many tasks, but I'm still undecided whether they're a net long-term benefit. On one hand, sure, they enable me to get to the end result quicker. On the other, I have less understanding of the end result, hence I can't troubleshoot any issues, fix any bugs, or implement new features without also relying on the tool for this work. This ultimately leads to an atrophy of my skills, and a reliance on tools that are out of my control. Even worse: since the tools are far from reliable yet, they provide a false sense of security.
But I also don't think it's wise to completely ignore this technology and continue working as if it didn't exist.
So at this point, the smartest approach to me is conservative adoption. Use vibe coding for things that you don't care about, that won't be published, and will only be used by yourself. Use assisted coding in projects that might be published and have other users, but take time and effort to guide the tool, and understand and review the generated code. Use classical programming for projects you care about, critical software, or when you want to actually learn and improve your skills.
I doubt this approach will be adopted by many, and that's the concerning part, since the software they produce will inevitably be forced on the rest of us.
What's really surprising to me is how many experienced programmers are singing the praises of this new way of working; how what they really enjoy is "building", yet they find the classical process of "building" tedious. This goes against most of the reasons I got into and enjoy working in this industry to begin with. Delivering working software is, of course, the end goal. But the process itself, pushing electrons to arrange bits in a useful configuration, in a way that is interesting, performant, elegant, or even poetic, learning new ways of doing that and collaborating with like-minded people... all of that is why I enjoy doing this. A tool that replaces that with natural language interactions, that produces the end result by regurgitating stolen data patterns in configurations that are sometimes useful, and that robs me of the process of learning, is far removed from what I enjoy doing.
I got your poor attempt at sarcasm. I just don't think it's a good argument.
The person who understands how lower levels of abstraction work, will always run circles technically around those who don't. Besides, "AI" tools are not a higher level of abstraction, and can't be compared to compilers. Their goal is to replace all cognitive work done by humans. If you think programming is obsolete, the same will eventually happen to whatever work you're doing today with agents. In the meantime, programmers will be in demand to fix issues caused by vibe coders.
And I got your cheeky, dismissive attitude which completely misses the forest for the trees.
> In the meantime, programmers will be in demand to fix issues caused by vibe coders.
Yes, I agree. They’ll be lower on the totem pole than the vibe coders, too. Because the best vibe coders have the same skill set as you: years of classical engineering background. So how can one differentiate themselves in the new world? I aspire to move up the totem pole, not down, and leaning into AI is the #1 way to do that. Staying only a “bug fixer” is what will push you out of employment.
Agreed. I find most design patterns end up as a mess eventually, at least when followed religiously. DDD being one of the big offenders. They all seem to converge on the same type of "over engineered spaghetti" that LOOKS well factored at a glance, but is incredibly hard to understand or debug in practice.
DDD is quite nice as a philosophy: group state based on behavioral similarity and keep mutation and query functions close together, model data structures from domain concepts and not the inverse, and pay attention to domain boundaries (an entity may be read-only in some domains and have fewer state transitions than in others).
But it should be a philosophy, not a directive. There are always tradeoffs to be made, and DDD may be the one to be sacrificed in order to get things done.
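A tiny sketch of what that philosophy looks like in code (the Order domain here is invented, purely for illustration):

```typescript
// Invented domain: an Order entity that owns its state and its transitions.
// Mutations and queries live together, and the data shape follows the domain.
type OrderStatus = "draft" | "placed" | "shipped";

class Order {
  private status: OrderStatus = "draft";
  private lines: { sku: string; quantity: number }[] = [];

  addLine(sku: string, quantity: number): void {
    if (this.status !== "draft") {
      throw new Error("lines can only be added while the order is a draft");
    }
    this.lines.push({ sku, quantity });
  }

  place(): void {
    if (this.lines.length === 0) throw new Error("cannot place an empty order");
    this.status = "placed";
  }

  // A different bounded context (e.g. shipping) might see the same concept
  // with fewer transitions: only placed orders can ship, nothing else changes.
  ship(): void {
    if (this.status !== "placed") throw new Error("only placed orders can ship");
    this.status = "shipped";
  }

  isPlaced(): boolean {
    return this.status === "placed";
  }
}
```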
I'm doing a similar thing. Recently, I got $100 to spend on books. The first two books I got were A Philosophy of Software Design, and Designing Data-Intensive Applications, because I asked myself, out of all the technical and software engineering related books that I might get, given agentic coding works quite well now, what are the most high impact ones?
And it seemed pretty clear to me that the highest-impact books would be about the sort of evergreen software engineering and architecture concepts that you still need a human to design and think through carefully today, because LLMs don't have the judgment or the high-level view for that; not about the specific API surface area or syntax of particular frameworks, libraries, or languages, which LLMs, IDE completion, and online documentation mostly handle.
Especially since well-designed software systems, with deep and narrow module interfaces, maintainable and scalable architectures, well-chosen underlying technologies, clear data flow, and so on, are all things that can vastly increase the effectiveness of an AI coding agent, because they mean it needs less context to understand things, can reason more locally, etc.
To be clear, this is not about not understanding the paradigms, capabilities, or affordances of the tech stack you choose, either! The next books I plan to get are things like Modern Operating Systems, Data-Oriented Design, Communicating Sequential Processes, and The Go Programming Language, because low-level concepts, too, are things you can direct an LLM to optimize for, if you give it the algorithm, but which it won't do very well on its own, and they are generally also evergreen and not subsumed in the "platform minutiae" described above.
Likewise, stretching your brain with new paradigms (actor-oriented, Smalltalk OOP, Haskell FP, Clojure FP, Lisp, etc.) gives you new ways to conceptualize and express your algorithms and architectures, and to judge and refine the code your LLM produces. And ideas like BDD, PBT, and lightweight formal methods (like model checking) all provide direct tools for modeling your domain, specifying behavior, and testing it far better, which then lets you use agentic coding tools with more safety and confidence (and gives them a better feedback loop). At the limit, this almost creates a way to program declaratively in executable specifications, convert those to code via an LLM, and then test the latter against the former!
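For instance, a small property-based test (using fast-check here purely as one possible tool; the sort function under test is a stand-in for whatever the agent generated) is exactly that kind of executable specification:

```typescript
import * as fc from "fast-check";

// Stand-in for agent-generated code that we want to hold to a specification.
function sortNumbers(xs: number[]): number[] {
  return [...xs].sort((a, b) => a - b);
}

// Executable spec: for any input, the output is ordered and is a permutation
// of the input. An agent can run this as its feedback loop after each change.
fc.assert(
  fc.property(fc.array(fc.integer()), (xs) => {
    const out = sortNumbers(xs);
    const ordered = out.every((v, i) => i === 0 || out[i - 1] <= v);
    const sameMultiset =
      out.length === xs.length &&
      [...out].sort((a, b) => a - b).join(",") ===
        [...xs].sort((a, b) => a - b).join(",");
    return ordered && sameMultiset;
  })
);
```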
It presents the main concepts like a good lecture and is a more modern take than the blue book. Then you can read the blue book.
But DDD should be taken as a philosophy rather than a pattern. Trying to follow it religiously tends to result in good software, but it’s very hard to nail the domain well. If refactoring is no longer an option, you will be stuck with a suboptimal system. It’s more something you want to converge toward in the long term rather than something to get right early. Always start with a simpler design.
Oh absolutely. It feels like a worthwhile architectural framing to understand and draw from as appropriate. For me, the end goal is being able to think more deeply about my domains and how to model them.
I was going to ask the same thing. I'm self-taught, but I've mainly gone the other way, more interested in learning about lower-level things. Bang for buck, I think I might have been better off reading DDD-type books.
I laugh every time somebody thinks every problem must have a root cause that pollutes every non-problem it touches.
It's a problem to use a blender to polish your jewelry. However, it's perfectly alright to use a blender to make a smoothie. It's not cognitive dissonance to write a blog post imploring people to stop polishing jewelry using a blender while also making a daily smoothie using the same tool.
This is basically the "I'm not evil, it's just a normal job" excuse. As with any moral issue, there will be disagreement about where to draw the line, but yes, if you do something that ends up supporting $bad_thing, then there is an ethical consideration you need to make. And if your answer is always that it's OK for the things you want to do, then you are probably not being very honest with yourself.
Your response assumes the tool is a $bad_thing rather than one specific use of it. In my analogy, that would be saying that "there is an ethical consideration you need to make" before using (or buying) a blender.
I've been working to overcome this exact problem. I believe it's fully tractable. With proper deterministic tooling, the right words in context to anchor latent space, and a pilot with the software design skills to do it themselves, AI can help the pilot write and iterate upon properly-designed code faster than typing and using a traditional IDE. And along the way, it serves as a better rubber duck (but worse than a skilled pair programmer).
The overall level of complexity of a project is not an "up means good" kind of measure. If you can achieve the same amount of functionality, obtain the same user experience, and have the same reliability with less complexity, you should.
Accidental complexity, as defined by Brooks in No Silver Bullet, should be minimized.
Complexity is always the thing that needs to be managed, as it is ultimately what kills your app. Over time, as apps get more complex, it's harder and harder to add new features while maintaining quality. In a greenfield project you can implement this feature in a day, but as the app becomes more complex it takes longer and longer. Eventually it takes a year to add that simple feature. At that point, your app is basically dead, in terms of new development, and is forever in sustaining mode, barring a massive rewrite that dramatically reduces the complexity by killing unnecessary features.
So I wish developers looked at apps with a complexity budget, which is basically Dijkstra's line-of-code budget. You have a certain amount of complexity you can handle. Do you want to spend that complexity on adding these features or those other features? But there is a limit and a budget you are working with. Many times I have wished that product managers and engineering managers would adopt this view.
It absolutely does matter. LLMs still have to consume context and process complexity. The more LoC and the more complexity, the more errors you get and the higher your LLM bills, even in the AI-maximalist, vibe-code-only use case. The reality is that AI will have an easier time working in a well-designed, human-written codebase than in one generated by AI, and the problem of AI code output becoming AI coding input, with the AI choking on itself and making more errors, tends to get worse over time. Human oversight is the key tool to prevent this.
A bit of a nit, but accidental complexity is still complexity. So even if that 1M lines could be reduced to 2k lines, it's still way more complex to maintain and patch than a codebase that's properly minimized at, say, 10k lines (even though this sounds unreasonable, I don't doubt it happens...).
> The overall level of complexity of a project is not an "up means good" kind of measure.
I never said it was. To the contrary, it's more of an indication of how much more complex large refactorings might be, how complex it might be to add a new feature that will wind up touching a lot of parts, or how long a security audit might take.
The point is, it's important to measure things. Not as a "target", but simply so you can make more informed decisions.
I dunno about autonomous, but it is happening at least a bit from human pilots. I've got a fork of a popular DevOps tool that I doubt the maintainers would want to upstream, so I'm not making a PR. I wouldn't have bothered before, but I believe LLMs can help me manage a deluge of rebases onto upstream.
Same, I run quite a few forked services on my homelab. It's nice to be able to add weird niche features that only I would want. So far, LLMs have easily been able to manage the merge conflicts and issues that can arise.
Pretty dang common. OS X and macOS (and maybe iOS and iPadOS, though I'm not certain) have been autocorrecting "--" into "—" for over a decade. Windows users have been using Alt codes for them since approximately forever ago: https://superuser.com/q/811318.
Typography nerds, who are likely overrepresented on HN, love both the em dash and the en dash, and we especially love knowing when to use each. Punctuation geeks, too! If you know what an octothorp or an interrobang is, you've probably been using em dashes for a long time.
Folks who didn't know what an em dash was by name are now experiencing the Baader-Meinhof phenomenon en masse. I've literally had to disable my "--" autocorrect just to not be accused of using an LLM when writing. It's annoying.