I share the concern based on current productivity expectations, but I stumbled across something recently that makes me feel a lot better. There was a change in the tax code that coincides with the beginning of the post-pandemic layoffs[1]. This was changed back last month by the BBB, which likely means a bunch of new R&D spend for big tech. I think this is why we are seeing intense M&A activity, and if we can keep the AI hype under control it will probably lead to new hiring as well.

[1]: https://qz.com/tech-layoffs-tax-code-trump-section-174-micro...
> 1. Layoffs and slowdown are global, not only in the US.
Dunno about that. I've changed jobs twice since 2022 (in Ireland), and the market is definitely less crazy, but there's still (apparently) lots of work about. To be fair, I interview well and am pushing 15 years' experience with a bunch of "prestigious" companies.
It definitely seemed like it was much worse in the US. Then again, the market here was never as insane as the US seemed in the 2010s, so maybe there was less over-hiring.
> 2. Expenses are still deductible, but over a longer period. It makes no difference for big corporations.
The change from a full deduction to a 20% deduction definitely had a big impact on smaller companies' hiring of software/data people. It mattered less if you were a megacorp, but it still wasn't trivial.
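To make the cash-flow impact concrete, here's a back-of-the-envelope sketch in Python. The numbers are hypothetical: $1M of developer salaries, a 21% tax rate, and straight-line five-year amortization at 20% per year (the actual Section 174 rules use a midpoint convention, so the first-year deduction is closer to 10%):

```python
# Rough illustration of why amortizing developer salaries hurts cash flow.
# All numbers hypothetical: $1M of R&D salaries, 21% corporate tax rate,
# straight-line 5-year amortization (real Section 174 rules use a midpoint
# convention, so the first-year deduction is closer to 10%).

salaries = 1_000_000
tax_rate = 0.21

full_expensing_deduction = salaries    # old rules: deduct it all in year one
amortized_deduction = salaries * 0.20  # new rules: only 20% in year one

extra_taxable_income = full_expensing_deduction - amortized_deduction
extra_tax_year_one = extra_taxable_income * tax_rate

print(f"Extra year-one taxable income: ${extra_taxable_income:,.0f}")
print(f"Extra year-one tax owed:       ${extra_tax_year_one:,.0f}")
# -> $800,000 more taxable income and $168,000 more tax per $1M of salaries
```

For a small company running near break-even, that extra tax bill arrives whether or not the payroll left any cash on hand, which is exactly the squeeze that hits hiring first.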
I'm really baffled that the coding interfaces haven't implemented a locking feature for code. It seems like an obvious feature: select a section of your code and tell the agent not to modify it. This could remove a whole class of problems where the agent changes tests to match the code or removes key functionality.
One could even imagine going a step further and attaching a confidence level to different parts of the code; that would help the LLM concentrate its changes on the areas you're less sure about.
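In the meantime you can approximate a lock outside the editor. Here's a minimal sketch (file names and markers all hypothetical) of one approach: mark locked regions with sentinel comments, record a hash of each region, and fail a pre-commit check whenever a locked region changes:

```python
#!/usr/bin/env python3
"""Hypothetical pre-commit check: fail when a "locked" region changes.

Mark regions in source files with sentinel comments:

    # LOCK-BEGIN
    ...code the agent must not touch...
    # LOCK-END

Baseline hashes live in .codelocks.json; regenerate with --update
after intentional edits.
"""
import hashlib
import json
import re
import sys
from pathlib import Path

LOCK_RE = re.compile(r"# LOCK-BEGIN\n(.*?)# LOCK-END", re.DOTALL)
BASELINE = Path(".codelocks.json")

def region_hashes(path: Path) -> list[str]:
    """Return a hash for every locked region found in the file."""
    matches = LOCK_RE.findall(path.read_text())
    return [hashlib.sha256(m.encode()).hexdigest() for m in matches]

def main() -> int:
    current = {}
    for path in Path(".").rglob("*.py"):
        hashes = region_hashes(path)
        if hashes:
            current[str(path)] = hashes
    if "--update" in sys.argv:
        BASELINE.write_text(json.dumps(current, indent=2))
        return 0
    baseline = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
    changed = [f for f, h in baseline.items() if current.get(f) != h]
    for f in changed:
        print(f"locked region modified or removed: {f}")
    return 1 if changed else 0

if __name__ == "__main__":
    sys.exit(main())
```

It doesn't stop the agent from editing the region, but it turns a silent edit into a failed check, which is most of the value.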
Why are engineers so obstinate about this stuff? You really need a GUI built for you in order to do this? You can't take the time to just type up this instruction to the LLM? Do you realize that's possible? You can just write instructions: "Don't modify XYZ.ts under any circumstances." Not to mention all the tools have simple hotkeys to dismiss changes to an entire file if you really want to ignore them. In Cursor you can literally select a block of text and press a hotkey to "highlight" that code to the LLM in the chat, and you could absolutely tell it "READ BUT DON'T TOUCH THIS CODE", directly tied to specific lines of code: literally the feature you are describing. BUT, you have to work with the LLM and tooling; it's not just going to be a button for you.
You can also literally do exactly what you said with "going a step further".
Open Claude Code and run `/init`. Download Superwhisper, open a new file at the project root called BRAIN_DUMP.md, put your cursor in the file, activate Superwhisper, and talk stream-of-consciousness about all the parts of the code and your confidence level in each, with any details you want to include. Then go to your LLM chat and tell it to read @BRAIN_DUMP.md and organize the contents into a new file, CODE_CONFIDENCE.md: list the parts of the codebase and give its best assessment of the developer's confidence in each part, given the details and tone in the brain dump. Delete the brain dump file if you want. Now you literally have what you asked for: an "index" of sorts that tells your LLM the parts of the codebase and the developer's confidence/stability/etc. for each. From there you can just refer to that file in your project prompting.
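The resulting file might look something like this (contents entirely made up, just to show the shape):

```markdown
# CODE_CONFIDENCE.md

| Area           | Confidence | Notes from brain dump                    |
|----------------|------------|------------------------------------------|
| auth/          | High       | Battle-tested; treat as read-only        |
| billing/       | Medium     | Works, but proration edge cases untested |
| importers/csv/ | Low        | Prototype quality; rewrite freely        |

Agent guidance: prefer changes in Low-confidence areas; do not modify
High-confidence areas unless explicitly instructed.
```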
Please, everyone, for the love of god, just start prompting. Instead of posting on Hacker News or Reddit about your skepticism, literally talk to the LLM about it and ask it questions; it can help you work through almost any of this stuff people rant about.
_All_ models I've tried have had, and still have, a problem with ignoring rules. I'm actually quite shocked someone with experience in the area would write this, as it so clearly contrasts with my own experience.
Despite explicit instructions in all sorts of rules and .md files, the models still make changes where they should not. When caught, they innocently say "you're right, I shouldn't have done that, as it directly goes against your rule of <x>".
Just to be clear: are you suggesting that currently, with your existing setup, the AIs always follow the instructions in your rules and prompts? If so, I want your rules, please. If not, I don't understand why you would diss a solution that aims to hardcode away some of the LLM prompt-interpretation problems that exist.
I am by no means an AI skeptic. It is possible to encode all sorts of things into instructions, but I don’t think the future of programming is every individual constructing and managing artisan prompts. There are surely some new paradigms to be discovered here. A code locking interface seems like an interesting one to explore. I’m sure there are others.
Heroku is turning the 12 factor manifesto into a community project and modernizing it. I posted my thoughts in a blog[1], and I'd love to hear what other people think!
This is something I've been arguing for for a while[1]. I called it a "Framework Knowledge Base". I think it needs to go a bit further and include specific code examples, especially for newer bits that are not in the training set. Ultimately, RAG or even fine-tuning might be better than a system prompt.
[1]: https://devops.com/the-rise-of-coding-assistants-superchargi...
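Here's a minimal sketch of the simplest version of that idea (all file and function names hypothetical): keep one doc per framework topic in a local directory, keyword-match the relevant ones against the task, and prepend them to the system prompt. Embedding-based retrieval (proper RAG) would be the obvious upgrade:

```python
# Minimal sketch of a "Framework Knowledge Base" fed into a system prompt.
# All paths and names are hypothetical; a real setup would likely use
# embedding search (RAG), or fine-tuning for stable, well-known APIs.
from pathlib import Path

KB_DIR = Path("framework_kb")  # one .md file per framework topic

def relevant_snippets(task: str, limit: int = 3) -> list[str]:
    """Crude retrieval: score each KB doc by keyword overlap with the task."""
    words = set(task.lower().split())
    scored = []
    for doc in KB_DIR.glob("*.md"):
        text = doc.read_text()
        score = sum(1 for w in words if w in text.lower())
        if score:
            scored.append((score, text))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:limit]]

def build_system_prompt(task: str) -> str:
    """Prepend matching framework docs so they outrank stale training data."""
    snippets = "\n---\n".join(relevant_snippets(task))
    return (
        "You are a coding assistant. Prefer the framework conventions and "
        "code examples below over anything from your training data:\n"
        f"{snippets}\n"
    )

print(build_system_prompt("add a streaming route to the HTTP server"))
```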
I've followed Caleb on YouTube for a while due to his MtG content. He has a PhD in Optical Sciences and makes some very interesting AI art. He has created a new card game with hand-curated AI art and some interesting rules aimed at solving a bunch of the problems with existing collectible card games. He wrote an essay[1] about his design goals which is fascinating.
[1] https://calebgannon.com/2023/07/08/the-making-of-algomancy/
Python is extremely slow for some tasks. I was surprised to discover how slow when I ran some benchmarks, despite having used Python for many years at that point. It has been improving lately, but here is a post I wrote on the topic quite a few years ago with some interesting comparisons: https://gist.github.com/vishvananda/7a2f1942d0e9ffff4093
I just reran the benchmarks from 10 years ago: Python is now only 37x slower than C on this benchmark, and the Go version runs faster than the C version. Python still has big productivity wins, of course...
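If you want to feel the gap on your own machine, here's a trivial micro-benchmark in the same spirit (to be clear, this is not the benchmark from the gist; a tight pure-Python loop like this is close to CPython's worst case):

```python
# Tiny interpreter-bound micro-benchmark (not the one from the linked gist).
# Tight pure-Python loops are close to CPython's worst case; NumPy or a
# C extension would close most of the gap.
import time

def sum_of_squares(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
result = sum_of_squares(10_000_000)
elapsed = time.perf_counter() - start
print(f"result={result} elapsed={elapsed:.3f}s")
```

An equivalent loop in C or Go typically finishes one to two orders of magnitude faster, which is where multipliers like the 37x above come from.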
This is definitely an intriguing line of thinking. I too am constantly appalled by the complexity we introduce into the things we build, but I suspect a lot of it has to do with human issues that can't be solved by better technology. That said, I'm very curious to see what tractor ends up looking like.
I'm surprised that neither the documentary nor the review gets into the legacy of OpenStack. I may be biased, but it seems to me that a huge amount of the success of Kubernetes is directly attributable to OpenStack.
First, OpenStack paved the way for a bunch of companies to invest real money in working together to compete with AWS. Second, there was massive turnover of open-source contributors from OpenStack to Kubernetes around 2013. I wouldn't be surprised if a good 50% of the Kubernetes community was inherited directly from OpenStack.
In this particular case, some of the runs from iFit instructors are actually quite good, and it's cool that it adjusts the speed and incline to match the instruction. Probably not worth the extra $$$$, but it is pretty neat. Now I also want to be able to watch regular videos, though. I usually walk outdoors for an hour a day to get my 10,000 steps in, and the Chicago winter makes that tough, so I'm thinking an hour of walking on the treadmill while I catch up on my favorite shows might be a good substitute.