Last week I cancelled my JetBrains subscription after a decade of daily driving it. I just can't take the performance issues anymore. Across five different machines, all kinds of actions would take ages, and it got worse every year.
Moving to Apple Silicon made it bearable for a few months, but somehow JetBrains manages to get slow even on an M3 Max with 36GB of RAM.
I've been fiddling with configs for years; I tried everything, since I was a JetBrains diehard.
Instead of trying to catch up to other AI editors, they should get back to their core and make it possible to use JetBrains on medium-sized monorepos with multiple languages.
I was hyped when I heard they would release a standalone git product, but then they scrapped it!
In the end I was only dependent on it for debugging and my usual git workflow.
I've now switched to Zed and GitKraken. I'll figure out a new debugging workflow; I'll never wait five minutes for a simple search action again.
With Claude Code + Zed I might be cancelling mine as well.
I thought that with Kotlin they'd invest a ton of energy into Kotlin Native in order to produce fully native IDEs that could squeeze out drastically more performance, but it's been over a decade of nothing worthwhile happening with Kotlin (despite it having so much potential, and being a literal key language for Android), so I'm really kind of over JetBrains. The only thing I'll miss is DataGrip, since Zed is a code editor, not a DB editor. Fleet was a good idea but poorly done: the UI was weird as hell, and it didn't feel as snappy as something like Zed or Sublime.
Surveys show Java at 33% and Kotlin at 8% as the primary programming language across all languages; surveys that focus only on the JVM ecosystem show an even smaller percentage for Kotlin.
I also cancelled my All Products subscription a while ago. I had been an IntelliJ user since the early 2000s and gave up in 2025: it would still forget how a Maven project with some generated files should be built, with everything turning into a sea of red until you reimported the project and redid all your settings again. Getting that right should be Job #1.
There was always a regression like this in every new build, along with the performance issues.
Also switched to Zed + Claude Code/Codex.
Same. I gave up on JetBrains and switched to VSCode a few months back after using JetBrains for over 20 years. Over the years I've done Java, C#, and lately mostly Python, and it was PyCharm that made me finally throw in the towel. I felt bad about it. I'm worried that VSCode seems to be taking over everything, but I just couldn't let the tool get in my way anymore. I don't know what's going on at JetBrains, but I hope they can turn it around.
> Last week i cancelled my Jetbrains sub after a decade of daily driving it.
I’ve been paying for a personal license for about 20 years and I’ve been thinking of dropping it. I don’t use it much, but I wanted access to something that I could use offline. I’m not sure that’s possible at this point, so the main appeal is kind of gone for me.
I frequently choose “lesser” tools if it means I’m guaranteed they’ll run offline. I’ve always wanted to have a dev container with all the tools needed to develop 100% offline if needed. Licensing makes that almost impossible and Jetbrains doesn’t look like they have any solutions that work great for 100% offline development anymore.
I might check out Zed this week. I’ve never heard of it. If anyone has some great resources for 100% offline development, I’d love to see them. My subscriptions are getting out of hand and this may be the year for me to trim the fat.
Zed is amazing. It has AI features, but it was amazing even before them.
The pivot to AI is concerning but the technology is solid and most importantly, it is open source.
I'm kind of mad that JetBrains wouldn't open source Fleet even after EOL, and going as far as taking down the download (something annoying for people that care about software preservation - I hope archive.org has a copy). I can't support a company like this
What kind of offline degradation are you thinking of?
Other than the case you mention (a paid service asking for a license check) I can't think of any limitation. VS Code, Neovim, Zed, Emacs: they should all work. Obviously if you need to clone a repo or download dependencies you need a connection, but other than that…
I canceled my 10+ year all products pack because I have to remove the "AI assistant" from my sidebar every three days.
Also the CEO bragging about the incredible adoption numbers for their "opt-in only" and "not default" UI redesign. Which is a bald-faced lie. It was opt-in for a year or two, and was the opt-out default for years after that. Now there's no option.
I use it for Java. I have never used anything else and never had any performance issues on my 16GB MacBook Air.
If it's WebStorm, maybe it's because of the automatic refresh capability? I've had perf issues with VSCode as well with autobuild enabled. Autocomplete would grind to a halt.
It's something to do with the TypeScript engine, it must be. I can also run IntelliJ fine with a huge Java project, but it's TypeScript projects that grind it to a halt. It's unusable on my work PC, and the performance is still poor on my home PC. It's been a steady decline since 2023.
I know what I'm about to write is a meme, however: I stopped having any performance issues after switching to GNU Emacs for my code editing. Granted, as an infrastructure guy, the codebases I work with aren't always super large.
However, it's been crazy fast since forever. Lately the Lisp engine also got compilation to native code, so it's even faster. I occasionally get a slowdown when I open a new project and Emacs has to wait for the language server to boot.
> I was hyped when i heard they would release a standalone git product, but then they scrapped it!
Magit is cool :P
Also, emacs is free and runs pretty much everywhere. Truly worth learning.
If you're accustomed to vi modes, Doom Emacs is very approachable. LLMs are also surprisingly good with Emacs Lisp, and the official docs and discoverability of Emacs are excellent, so it's pretty easy to get oriented and achieve the configurations you want even if you're not particularly a Lisp fan.
Whatever starter kit you choose, I recommend giving one a go. The experience is really good these days.
I actually got accustomed to vanilla emacs and I am quite satisfied with that choice.
As a sysadmin that has to often jump from machine to machine it’s nice to be able to install whatever emacs release the os vendor ships and be productive
I'm currently exploring vanilla Emacs through the book _Mastering Emacs_ as well. :D
The last 3-4 releases have really enriched vanilla Emacs (LSP support, tree sitter support, project.el). Emacs adds default packages somewhat conservatively but it seems like everything that gets included by default ends up with very solid integration/support with other packages
> As a sysadmin that has to often jump from machine to machine it’s nice to be able to install whatever emacs release the os vendor ships and be productive
Quite true, although TRAMP gives Emacs users another good alternative to "bringing the config along"!
> As a sysadmin
I think it's also fair for operators to install the tools they use/like the most on the systems they administer. If a more recent Emacs release makes you happier, why not use Guix to include a portable copy of the latest release on all your servers?
I have 48GB RAM on my M4 laptop and get tons of freezing. I had to set the memory heap size to 64GB to reduce it, and I still have to force close once per day
Really? I'm daily driving JetBrains IDEs on an Apple M3 and don't recognize any of this. Just give it a bunch of extra heap memory (e.g. 4GB instead of 1GB) and it's fast!
I have given it up to 30GB of heap and I tried many different GC configs; I even ran it and my project on a ramdisk.
The issue is related to using a monorepo with lots of code in different languages; opening single folders is fine. But I want to be able to work on dozens of services in a single window. All other editors manage just fine.
I have a similar problem, large monorepo, things have become really bad lately to the point that the cursor is unresponsive. The only workaround I have found is opening each folder of the monorepo in its own IDE instance
Feel this so hard. The opposite is also true where you have a micro-service architecture and cursor faceplants in workspaces with multiple repos. We ended up building cortex.build partly because of this exact pain. Our context engine builds a git-aware dependency/provenance graph so it can stay local and only pull the relevant slice across a massive repo or dozens of smaller ones.
> but somehow Jetbrains manages to get slow even on a M3 Max with 36GB RAM.
Really? That surprises me, given that I don't have any performance issues at all on my first gen dell xps 13.
Which specific products do you use? I use mostly intellij ultimate, but I have been playing around a bit with the community edition of Rover lately too. They're both silky smooth on my nearly 13 years old ultra portable.
I've been using Firefox and Jetbrains for about a decade. Firefox is currently using 0.8% CPU while streaming music in another tab. The only speed difference between it and Chrome is that Chrome will prefetch pages in the background, which appears to make it faster on clicks. However, even were it much slower than Chromium alternatives, I would never give up my fully functioning uBlock Origin.
But anyway, in regards to Jetbrains, its performance certainly seems to be degrading over time. I'll try to explain why I still use it. First of all there is high friction to change IDEs when I have memorized every shortcut and configured every panel to my liking. I have within my IDE the terminal, the DB viewer and query executor, the debugger, the profiler, HTTP client, LLM chat, etc. Configuring all of this elsewhere would be a large pain in the ass, especially when switching computers/jobs.
More sticky still is the functionality. I've unfortunately become reliant upon, or perhaps fortunately been able to learn, the advanced features of the thing. Advanced refactoring tools that I trust to work without review, because they do. Quick shortcuts to insert large chunks of custom boilerplate. Perfect inference of method definitions/sources (try this in a Rails codebase in VSCode; it doesn't work). Other such things that I take for granted but that probably aren't in the competitors.
It might be possible to replicate this functionality with about thirty plugins from random authors in vim/VSC, but I'd rather just pay my yearly license fee and get good working software. Yes, it takes a couple of seconds to do certain things, but it saves me a lot more time than that.
I don't know how their IDEs were advertised to you or how large the codebases you work on are.
I get fast enough autocomplete (sub-second) and full-line completion just fine, and I never use/buy top-of-the-line systems (I'm using a midrange ~2020 ThinkPad).
But I'm in a similar place as the comment you replied to. Unless they start focusing again on improving their existing product line, next year I might not renew my licenses anymore.
Before AI took over, I was following their release notes and announcements closely, because they were on the right path toward improving the experience.
What makes their IDE look bad is their indexing process, during which it is slow and completions will not show up. If you know about this quirk you know where to look for it (it's visible in the status bar), and know what triggers it (dependencies installation and such). After so many years, I really feel the solution for that is pretty "simple", "just" run the indexing on a snapshot that is not shared with the running instance and swap out indexes when done.
I know about the indexing, marking directories correctly so as not to trigger reindexing etc etc.
Since I work on a couple dozen services in a monorepo in a few languages, no amount of heap memory or CPU will be enough.
One day it's the GraphQL plugin, the next day it's TypeScript type inference, then something with Rust; it just never stops. Sometimes even the Go operations are slow.
It's all just monorepo issues, but I expect my IDE to be able to handle a monorepo. All other IDEs work without issue (and are sadly inferior in functionality).
Everyone who promotes a product they use every day doesn't have to be a paid shill. I like PyCharm, DataGrip, and IntelliJ because they generally work very well for me at my day job and open source side projects.
Firefox is an odd case because I've personally never experienced stability issues with it on Ubuntu. The only problem I've had in the past is some Google products are noticeably slower than on Chromium. Colleagues of mine have had stability issues on Windows though.
Yeah, I'm not a paid shill. I have been using IntelliJ since version 2 way back in 2003(?). Yes, it's had its performance issues, but people tend to forget the feature set they brought to market, and have continued to do so. But, my career is dead now, as I am an unemployed loser. So, 2026 will probably be the first year that I no longer have an updated IntelliJ.
I'm about to cancel mine as well, but JetBrains really does make top-quality editors, especially for their respective languages. The next closest one would be Visual Studio for C#/.NET development, and even VS gets enhanced by ReSharper... which is a JetBrains product. I would like it if JetBrains would invest in the performance issues as a #1 priority. They've dropped the ball on Kotlin Native, and it bewilders me; it had so much more potential to their benefit.
I guess having a new up migration to cover the case is better, but it's nice to have a documented way of rolling back (which would be the down migration) without applying it programmatically. It helps if other team members can see how a change should ideally be rolled back.
Glenjamin gave a great answer. I'll just add that in my experience (being responsible for the team's database at a few companies over the years), down migrations are NEVER helpful when a migration goes wrong. Roll-forward is the only correct model. Besides, there are plenty of migrations that can't be safely rolled back, so most "down migrations" either destroy data or don't work (or both).
The key here is that in production it's almost always not safe to actually apply a down migration - so it's better to make that clear than to pretend there's some way to reverse an irreversible operation.
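To make the distinction concrete, here's a minimal Alembic-style sketch (the table and column names are made up for illustration, and Alembic is just one example of a migration tool): the down migration documents intent, but actually running it can't bring the dropped data back.

```python
# Hypothetical Alembic migration illustrating an irreversible change.
# "orders" and "legacy_notes" are invented names for this sketch.
from alembic import op
import sqlalchemy as sa


def upgrade():
    # Forward-only change: once the data in legacy_notes is deleted,
    # no schema operation can recover it.
    op.drop_column("orders", "legacy_notes")


def downgrade():
    # Restores the schema, but NOT the data -- exactly the trap the
    # comments above describe. Keeping it still documents for teammates
    # what "rolling back" would mean, even if you never run it.
    op.add_column("orders", sa.Column("legacy_notes", sa.Text(), nullable=True))
```

In practice the safe response to a bad deploy here is a new *up* migration (roll-forward), with the downgrade serving as documentation rather than a production tool.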
I wish they would limit the number of attendees somehow or have some way to manage overcrowding.
In 7 attendances, I and most people in my group got a cold or the flu (this was before COVID), 7 years in a row.
The Slack notification noise triggered me to check Slack frantically in case I was mentioned, but there was nothing. Slack really has us conditioned well.
A bit over a year ago I lost a dear friend, while his girlfriend was pregnant.
The feeling of seeing something the person will never use again is soul wrenching. I wept when I read the line "No salt. No salt means that he’s not cooking. He’ll never cook again."
The child is a ray of light for me whenever I see it, I hope the family can find a little comfort in this piece of him that will be brought into the world.
I have followed this story for a while now and wish the family a brighter path in the future. Thank you for focusing my thoughts on what is important, instead of the daily tech grind.
* The main thing that makes ChatGPT's UI useful to me is the ability to change any of my prompts in the conversation; it will then go back to that part of the conversation and regenerate, removing the rest of the conversation after that point.
Such a chat ui is not usable for me without this feature.
* The feedback button does nothing for me; it just changes focus to Chrome.
* The LLaVA model tells me that it cannot generate images since it is a text-based AI model. My prompts were "Generate an image of ..."
> * The main thing that makes ChatGPT's UI useful to me is the ability to change any of my prompts in the conversation; it will then go back to that part of the conversation and regenerate, removing the rest of the conversation after that point.
Agreed, but what I would also really like (from this and ChatGPT) is branching: take a conversation in two different directions from some point and retain the separate and shared history.
I'm not sure what the UI should be. Threads? (like mail or Usenet)
I have ChatGPT4, and I have no idea what arrow you are talking about. Could you be more specific? I see no arrow on any of my previous or current messages.
By George, ItsMattyG is right! After editing a question (with the "stylus"/pen icon), the revision number counter that appears (e.g. "1 / 2") has arrows next to it that allow forward and backward navigation through the new branches.
This was surprisingly undiscoverable. I wonder if it's documented. I couldn't find anything from a quick look at help.openai.com .
Careful what you trust on help.openai.com. You used to be able to share conversations; now shared links are login-walled, and the docs don't reflect this. (If someone can recommend a frontend that supports quick sharing of conversations with others via a link, I'm taking recommendations; thank you in advance.)
I understand your point, but my take is that when we talk about AI and its impact, we're talking about the entire system: the model, and what is buildable with the model. To me, the gains available from doing innovative stuff w/ what we're colloquially calling "UI" exceeds, by a bunch, what the next model will unlock. But perhaps the main issue is that whatever this amazing UI might provide, it's not protectable in the way the model is. So maybe that's the answer.
Thank you for the support and the valuable feedback! Sorry about the response time; I didn't expect the incoming volume of requests.
* For changing the prompt in the middle: I'll take a crack at it this week. It's at the top of my post-launch list.
* Feedback button: Thanks for reporting this. The button was supposed to open the default email client to email feedback@recurse.chat.
* LLaVA model: I'll add more documentation. You are right, LLaVA can't generate images; it can only describe them (similar to GPT-4V). Image generation isn't supported in the app, and while I don't have immediate plans for it, check out these projects for local image generation.
I don't understand how this is better than a docker-compose.yml with your dependencies, which plays nicer with all other tooling.
Especially if there are complex dependencies between required containers, it seems pretty weak in comparison. But I also last used it about five years ago, so maybe things are significantly better now.
One specific case that I encountered recently was implementing "integration" tests, where I needed to test some behavior that relies on the global state of a database. All other tests before were easily parallelized, and this meant our whole service could be fully tested within 10-30 seconds (dev machine vs. pipeline).
However, the new tests could not be run in parallel with the existing ones, as the changes in global state in the database caused flaky failures. I know there will be other tests like them in the future, so I want a robust way of writing these kinds of "global" tests without too much manual labor.
Spinning up a new postgres instance for each of these specific tests would be one solution.
I would like to instead go for running the tests inside of transactions, but that comes with its own sorts of issues.
Because you may want to spin up a new postgres database to test a specific scenario in an automated way. Testcontainers allows you to do that from code, for example you could write a pytest fixture to provide a fresh database for each test.
I don't know about Postgres, but a MySQL container can take at least a few seconds to start up and report back as healthy on a MacBook Pro. You can just purge tables on the existing database in between tests; no need to start up a whole new database server each time.
Testcontainers integrates with docker-compose [1]. Now you can run your tests through a single build-tool task without having to run something else and your build. You can even run separate compose files for just the parts your test touches.
I just never could be happy with a TV without an OLED panel after I got my first one last year. Since then, all other screen types look like garbage to my eyes, even the better cinema projectors.
I shouldn't have bought an expensive big monitor for work without OLED the year before, but I hear that OLED is not that great for close-up text rendering.
Maybe someone knows how to solve a common sharing issue, I didn't see it mentioned here:
I have a single ultrawide screen and would like to share a virtual area of normal (16:9) size with people via Google Meet, Slack, etc. Otherwise I have to share a window, stop, share another one, and so on.
It's really bad, especially during an on-call emergency session.
So far I couldn't make it work; only Zoom had this feature at some point, but nobody uses Zoom where I have worked.
I haven't needed to do the same on a Mac yet, but if anyone knows of an app that does the same (i.e., define a "transparent" window that screen capture can then share in a call), do let me know. (It's bound to be trickier on macOS due to window contexts, the compositor, and privacy, but there might be an app out there for it.)
Install OBS, add a 'Scene', add a 'Window Capture' to the scene, then right click it (in sources) and transform / scale / crop the scene dimensions.
Then optionally in the 'controls' panel you can start a virtual webcam, then go to Chrome/Brave settings, go to Site & Shield settings, set the default Camera to your virtual one.
CueCam Presenter solves this quite elegantly: You can prepare a script with a card for each window you want to share. Then you can just select the card during the presentation and it will share just that window.
I also have a single large screen. So I put the CueCam window on the right, top to bottom. And the windows I want to share in the bottom left quadrant. There I can make them smaller, with the correct aspect ratio, so that participants with smaller screens can see all the detail they need.
That leaves the top left quadrant for my meeting window where I can see the meeting participants.
I'm also experimenting with the two companion apps: Shoot to use my iPhone camera and control zoom from CCP; and Video Pencil to draw on my video.
OBS can do this: it can capture a display (e.g., the whole ultrawide monitor), crop that capture to just the desired area, and optionally scale that cropped area to be sent at whatever resolution you wish as a virtual camera input to whatever conferencing system (Zoom, Slack, whatever).
It works fine. It's kind of a pain to configure, but it only needs to be done once and saved as a scene (which can then be recalled later with a keyboard macro or whatever, if one wishes).
(To receive bonus nachos, set the desktop background to include a 16:9 rectangle of the captured area for your own visual reference, and automate it so that this background is displayed when OBS is running. For fancy nachos, have more than one such area with one scene for each.)
One of the easiest ways to do this is to use a HDMI display emulator [0]. Set the resolution to 1920x1080, move the content or presentation you want to share to the 'ghost' monitor and then share the entire second screen in your meeting app [1].
My ultrawide display accepts two display inputs and has the option to put them borderless next to each other. I have a script that enables the second DisplayPort input using DDC. Now I can share half the screen of my ultrawide as "full screen".
I refer to Fundamentals of Queueing Theory by Gross, Shortle, Thompson & Harris.
Although Wikipedia is enough. As far as insights go, the topic is relatively simple; it is just bad practice to be re-deriving the first 100 pages of an intro-to-queueing textbook in an emergency.
80% of the time it is enough to assume the process is an M/M/1 queue or consider how the queue would perform relative to an M/M/1 queue. M/M/1 queues are the analog to fitting a straight line, simple & technically incorrect. It is good to move through that part of the day without thinking.
Whenever I talk to operations research people about "how do I learn X?" or "how do I calculate Y?" I usually get told to write a Monte Carlo simulation despite there being a lot of beautiful math involving stochastic processes, generating functions and stuff like that. (Even if you are calculating results in closed form it is still a slam dunk to have a simulation to check the work except when you are dealing with "exceptional event" distributions... That is, a Monte Carlo simulation of a craps game will give you an accurate idea of the odds in many N=10,000 samples, but simulating Powerball takes more like N=1,000,000,000 samples.)
The single "uncommon sense" result you need to know about queuing is that the average number of customers in an M/M/1 queue is L = ρ/(1 − ρ), where ρ is utilization. That is, with random arrivals, a queue that has slightly less than 100% utilization will grow stupendously long. People look at a queue running at less than 100% utilization and often feel moral disgust at the "waste", but if you care about the experience of the customer and the reliability of the system, you don't let utilization get above about 80% or so.
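The blow-up near 100% utilization is easy to check empirically, in the Monte Carlo spirit mentioned above. Here's a minimal sketch of an M/M/1 simulation using the Lindley recursion for successive waiting times (pure Python standard library; sample size and seed are arbitrary choices):

```python
# Monte Carlo check of the M/M/1 blow-up near 100% utilization.
import random


def mean_wait(utilization, n_customers=200_000, seed=42):
    """Average time a customer waits in queue, simulated via Lindley's recursion."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        # Service rate mu = 1, arrival rate lambda = rho, both exponential (M/M/1).
        interarrival = rng.expovariate(utilization)
        service = rng.expovariate(1.0)
        # Lindley recursion: the next customer inherits whatever backlog is left.
        wait = max(0.0, wait + service - interarrival)
        total += wait
    return total / n_customers


for rho in (0.5, 0.8, 0.95, 0.99):
    # Closed form with mu = 1: expected wait in queue Wq = rho / (1 - rho).
    exact = rho / (1.0 - rho)
    print(f"rho={rho:.2f}  simulated Wq={mean_wait(rho):7.2f}  exact Wq={exact:7.2f}")
```

Going from 80% to 95% utilization roughly quintuples the average wait, which is the whole "don't run hot" argument in one table. (As with any simulation near ρ = 1, the estimate gets noisy; the closed form is the sanity check.)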
I learned about this stuff in grad school. The course wasn't mandatory for everyone but my supervisor made it mandatory for me due to the nature of the research I was doing: "Computer Systems and Performance Evaluation". It was basically focused on queuing theory and state space modelling.
Reading through this whole discussion thread really makes me want to dig up my old notes and whip up a blog post with a Jupyter notebook or something that people can use to really dig into this and start to grok what's happening because a lot of it really isn't that intuitive until you've been steeped in it for a while.