Another open source alternative is CodeChecker [1] with the Clang static analyzer [2]. Make sure the Clang toolchain has been compiled with Z3 [3] support for better results (this is the case in Debian stable), particularly for code doing bit operations. It supports cross-file analysis ("cross translation unit" analysis, or CTU), which last time I checked IKOS did not, and which helps improve diagnostics.
It's not completely turnkey if you use it on a cross-compiled code base, but once set up I prefer it to another professional tool: far fewer false alarms. Although it's good to have both; each one found issues the other missed.
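For reference, a CTU-enabled run looks roughly like this (command names from CodeChecker's CLI; the build command and paths are illustrative, adjust for your project):

```shell
# Wrap your build so CodeChecker can capture the compilation commands
CodeChecker log --build "make" --output ./compile_commands.json

# Analyze with cross-translation-unit analysis enabled
CodeChecker analyze ./compile_commands.json --ctu --output ./reports

# Print the findings to the terminal
CodeChecker parse ./reports
```

For a cross-compiled code base you will additionally need to point the analyzer at the right target and sysroot, which is the "not completely turnkey" part.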
It's striking that C/C++'s age hasn't translated into better tooling compared with more modern languages.
I really envy the productivity that people using more "dynamic" languages have had over C/C++ developers purely because of tooling. Of course, each language is a tool tailored to its own domain, and I'm sure better tooling wouldn't magically fix all of the old languages' traps. Nevertheless, I miss it.
I don't agree at all. For example, some of the best debuggers I've used have been for C/C++. As far as I understand, even after all these years, debugging in Node.js still hasn't progressed much beyond printf() debugging.
Also, by their very nature, dynamic languages offer less information at development time about things such as types and function signatures, so very often IDEs can't even tell you the type associated with a name by hovering over it.
I think you understand wrong. https://nodejs.org/api/debugger.html combined with Chrome DevTools or an IDE of your choice has been some of the best tooling I've used in any programming language.
Debugging in NodeJS is very easy and works in an IDE just like Java or C++. My IDE can tell me the types of things, hover over variables, and even provides a REPL where I can call functions while the debugger is paused to debug.
Common Lisp implementations allowed you to change your program's code while debugging decades ago. Microsoft added a bare minimum of similar functionality like fifteen years ago or so, if I remember correctly. I'm not even sure they've caught up on class redefinition and instance upgrade at runtime yet, which has been supported in CL since the 1980s.
>Common Lisp implementations allowed you to change your program's code while debugging decades ago. Microsoft added a bare minimum of similar functionality like fifteen years ago or so, if I remember correctly.
Fair enough. I did say "some of the best debuggers I've used". I've never used Common Lisp. I have no idea what the debugging experience is like on it.
Incidentally, I don't find edit & continue to be a terribly useful feature.
>I'm not even sure they've caught up on class redefinition and instance upgrade at runtime yet, which has been supported in CL since the 1980s.
That has nothing to do with the tooling, though. C++ just doesn't support those features. You also say "caught up" as if committee members had been trying to add those features to C++ but just hadn't been able to. It's much more likely that there hasn't been enough (or any) interest to add them.
> That has nothing to do with the tooling, though. C++ just doesn't support those features.
In principle, I don't see how the language definition not supporting this makes it entirely impossible. You just can't request such a running-program transition from within the language itself, but with some modest restrictions on program behavior it could be requested from within the IDE. I imagine restrictions similar to those that would make C++ code amenable to code-agnostic tracing GC, such as no user-defined XOR-linked lists.
Apple platforms used to have that ("zero-link") and removed it quickly because nobody actually liked it. I think this, and Lisp/Smalltalk's image model, are actually mistaken ideas.
It goes against the principle of "crash-only recovery" to try to recover from a bad state instead of being able to start over from the beginning.
That encourages you to live off the intermediate image forever (like autogenerating code and then not maintaining the source for it), but if you miss some data corruption then you're stuck with it.
You already are autogenerating code in Lisp (using macros), but not without keeping the source for it, which still lives in the source files. If those files are not loadable into a clean image, you're doing something wrong (as the CMUCL people found out a long time ago).
Could you detail what exact tooling you are talking about?
Like, this static analyzer looks interesting, but 5 years ago clang --analyze had no trouble producing a nice HTML report indicating the 27 steps across 6 functions that led to a pointer being dereferenced after being deleted. The tooling is there, but it seems that pretty much no one is aware of it - see e.g. this: https://github.com/cpp-best-practices/cppbestpractices/blob/... or this: https://github.com/fffaraz/awesome-cpp for a quick look at what exists (and both are very non-exhaustive).
But most of these tools are, like, 10+ years old. Intel VTune, Coverity and KCacheGrind were already a thing in 2003. PC-LINT dates back to 1985. Of course more recent languages don't have to reinvent the ideas that were developed for these older languages; that would be like saying the first automaker sucked because the second automaker took less time to produce a vehicle.
I remember people using Eclipse and having spell checking on comments, FIXMEs, "instant" reanalysis whenever the code changed, automatic creation of accessor methods... things like that. I don't remember the same features being as popular among C/C++ developers at the time.
Maybe my feeling is wrong, but I always had the impression that more "modern", more "dynamic" languages got development tools faster, or even before the same features became available and popular for C/C++.
C++ is much more difficult to parse than Java, so refactoring is a big ask. If your renaming tool can miss some usages, it's worse than nothing. IMO C++ has also always had less boilerplate than Java, so boilerplate generators were less useful. C++ also has other ways to save typing, such as the preprocessor (we can argue about whether it's a good tool, but it can certainly save typing).
Well, it took some time before a more modern language like Python had tools like pip, virtualenv, etc. Only "recently" did they get more sophisticated tools like pipenv (https://pypi.org/project/pipenv/#history shows the first release, totally unknown to anyone, was in 2017; it's only in the last 2-3 years that you hear about it). And still, it's 2022 and not everybody uses things like pipenv.
In the C++ world, although I am not experienced enough, so take my words with a pinch of salt, there is more fragmentation on the tooling side. AFAIK there is no single "community" saying "this is the recommended way". Scala had (and still has) a very similar issue for a long time: lots of tools popping up left and right, lots of "experiments" becoming production-ready stuff. Then, after several years, they decided to face the issue and proposed a centralized place (scalacenter) to offer some sort of "official" support. It's not "finished" yet, but it's a good move in the right direction that will only help adoption of the language.
My experience so far with C++ has been a little bit different: you have a lot of different tools, very good ones, so you're undecided which one to use, but finally thanks to the books/resources you read, you'll eventually stick to some of them: it's decentralized and community-based (like other languages) vs [very] centralized (go/rust). It's two different philosophies, and from my understanding, C++ tries very hard NOT to enforce anything on the developers, which can be a burden for newcomers because there is just too much to learn. A stupid example: I use clang-format/clang-tidy: which style am I supposed to use to format the code? I am freaking new, "just pick whatever you want", but hey the default is "none".
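On the clang-format point: the usual escape hatch from the "none" default is to pick an existing base style and override only what you care about. A minimal, hypothetical `.clang-format` (all three keys are standard clang-format options) looks like:

```
# .clang-format -- start from a known style instead of "none"
BasedOnStyle: LLVM
IndentWidth: 4
ColumnLimit: 100
```

Dropping this file at the repository root means newcomers never have to answer the "which style?" question at all; `clang-format` picks it up automatically.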
I wouldn't consider C++ Builder to be C++. I haven't used it in ages, but last time I did, it wasn't C++. Anyway, that's not exactly what I was talking about. I think even the tools you listed lagged years behind useful features that Eclipse and NetBeans implemented for Java, like refactoring.
I'm glad there are good front-ends to the LLVM language server, though, and that static analyzers are slowly improving and becoming more popular, even among sporadic hobbyists.
That’s because the preprocessor adds too much flexibility to make it easy to write such tools.
Once you have
#ifdef FOO
...
#else
...
#endif
you have to completely preprocess the compilation unit to find out which branch is taken. That means you have to interpret the build script (which may do -DFOO, possibly only for some compilation targets).
You have to do that even for such simple things as syntax coloring. Now, you say “I simply parse both branches, check what things they define, somehow merge the two, and we’re good to go”. That won’t work, as the not taken branch(es) need not contain valid C/C++, and their validity may depend on whether other branches are taken.
The preprocessor also has token concatenation (https://gcc.gnu.org/onlinedocs/cpp/Concatenation.html). That makes writing a refactoring tool next to impossible (for example, how do you rename quit_command to commandExit in the example on that page?).
Now, lots of code is fairly reasonable in its use of the preprocessor, but writing a reliable tool is a lot harder, if it’s possible at all, than with Java.
That's because doing any kind of tooling for C++ is a heroic endeavour. The language just has orders of magnitude more corner cases than anything except Perl.
For example, there were some tools Mozilla maintained: Elsa, Pork, Treehydra, etc., but (as far as I can quickly find on my phone) it looks like the burden of maintaining them proved too much a while back.
I don't agree; C and C++ have excellent tools, they are just not evenly distributed (e.g. some IDEs have excellent integrated debuggers and profilers, others don't; some compilers have great integrated static analysis, others don't).
test.c function main
[main.array_bounds.1] line 6 array 'a' upper bound in a[(signed long int)i]: SUCCESS
[main.array_bounds.2] line 8 array 'a' upper bound in a[(signed long int)i]: FAILURE
** 1 of 2 failed (2 iterations)
VERIFICATION FAILED
gcc and clang both failed to find anything at compile-time.
Well, it is on SPDX License List (https://spdx.org/licenses) with short identifier "NASA-1.3" so it's definitely not new. The same table marks it as OSI Approved.
> "based on the theory of Abstract Interpretation."
I don't know about you all, but I get a bit scared when people tell me that what they're doing to me is based on some kind of abstract interpretation :-(