Hacker News | TylerGlaiel's comments

All that the C++ committee needed to do was just introduce "import" as "this is the same as include except no context can leak into it".

Would have been dirt simple to migrate existing codebases over to using it (find and replace include with import, mostly), and initial implementations of it on the compiler side could have been nearly identical to what's already there, while offering some easy space for optimizing it significantly.

Instead they wanted to make an entirely new thing that's impossible to retrofit into existing projects, so it's basically DOA


That could be said of every additional C++ feature since C++0x.

The committee has taken backwards compatibility, backwards: refusing to introduce any nuanced change in favor of a completely new modus operandi, which never jibes with existing ways of doing things, because no one wants to fix that 20-year-old codebase.


if no one wants to fix that 20-year-old codebase, why would anyone want to push it to a new C++ standard?


The goal is to be able to import that 20-year-old battle-tested library into your C++20 codebase and have it just work.


Hyrum's law would dictate that your C++20 codebase now becomes C++11, or whatever the oldest standard in the whole dependency chain is.


That's kind of the point of C++'s backward compatibility: C++20 contains new additions on top of the C++11 standard. Unless you rely on mistakes in the standard like auto_ptr or faulty atomics, which should be fixed regardless of the C++ standard, your C++11 dependencies are perfectly valid C++20 and do not block the new shiny toys.


there are really a lot of simpler solutions than switching the standard of the whole codebase, e.g. writing a wrapper whose interface doesn't require the new standard.


Mmm pimpl’s and acne…


You have it backwards. Everyone wants a new standard but no one wants to fix the code to make it work with a new way. They would rather introduce a whole new thing.

We want new stuff, but in order to do that we must break old stuff. Like breaking old habits, except this one will never die.


we spent a billion dollars to rewrite our code, breaking everything. 15 years later we have a better product than the old stuff - but 15 years ago was pre-C++11, and Rust didn't exist. i cannot honestly ask for another billion dollars to repeat that, so we are stuck. We must keep the old code running, and if your new thing isn't compatible with whatever we did back then, you are off the table.

c++26 still builds and runs that c++98 code, so we can use it. Rust is nice, but it can't interoperate as well in many ways, so it gets little use. you can call that bad design - I might even agree - but we are stuck.


This. This is why C++ is stuck. The old shit code (that works, does its job, makes money) is just too valuable to the company to justify the effort of a rewrite, so it sits. All vendors must conform or else be shown the door. No innovation can take place so long as Clu has its claws in our company.

I empathize. This is where Rust could really help, but there's a lot of hate around "new", so it sits. The C++2042 standard will still have to be able to solve for C++98. The language will die. A pointer is a pointer, and it shouldn't need weak_ptr, shared_ptr, unique_ptr, etc. If you expose it, it's shared. If you don't, then it could be unique, but let the compiler decide. The issue with all these additions is that they are opt-in for a community that would rather opt out, since opting in means learning/rewriting/refactoring.

I’ve come across this so many times in my career. Thank goodness for AI and LLMs that can quickly decompose these hairball code bases (just don’t ask it to add to it).

I love C/C++ but it’s so old at this point that no sane person should ever start with that.


If you think the problem with C++ is that shared_ptr exists, you should probably just use C.


If that is what you take from it, you missed the point entirely.

I couldn't care less about shared_ptr.

The issue is: why should I care? Why is it on the dev to determine how a pointer should work? Why does the dev have to go back and refactor old code to be new again? Why can't the committee build non-breaking changes into the spec? I would rather have compile flags that make a pointer an "old-style pointer" than have to mentally juggle which ptr container to use, when, and why.

This is just one example. Gang of 3, becomes gang of 5, becomes mob of state… It’s just a giant mess.


that you even think a compile flag could switch everything shows how little you understand the problem. it is fine not to understand - it isn't possible to understand everything - but stop talking as if there is an easy solution that would work.


My experience has been that everyone _wants_ to fix the 20 year old codebase! But it’s hard to justify putting engineers on nebulous refactoring projects that don’t directly add value.


This I understand. The risk of revenue loss if not done right, the risk of revenue loss if not done at all.


No, it can't be said of most other C++14+ features, as they are actually implemented and used in real codebases.


Touché, but why the four different pointer containers?


What do you mean by "no context can leak into it"? Do you mean it shouldn't export transitive imports?

As in `#include <vector>` also performs `#include <iterator>` but `import vector` would only import vector, requiring you to `import iterator`, if you wanted to assign `vec.begin()` to a variable?

Or is it more like it shouldn't matter in which order you do an import and that preprocessor directives in an importing file shouldn't affect the imported file?


Not GP, but I take it to mean I can’t do:

    #define private public
    #import <iostream> // muahaha
Or any such nonsense. Nothing I define with the preprocessor before importing something should affect how that something is interpreted, which means not just #define's, but import ordering too. (Importing a before b should be the same as importing b before a.) Probably tons of other minutiae, but "not leaking context into the import" is a pretty succinct way of putting it.


yeah, that. include is a textual replacement, so anything placed before the include is seen by all the code in the include - not just other preprocessor stuff and pragmas, but all of the other function definitions as well. There are some cases where this has legitimate uses, but it's also one of the main reasons why compilers can't just "compile the .h files separately and reuse that work whenever it's included, automatically"

you define #import as "include but no context leaks into it", and that on its own should be enough to let the compiler just compile that file once and reuse it wherever else it's imported. That's like 95% of the benefit of what modules offered, but much, much simpler


This implementation grows indefinitely if you repeatedly push to the head and remove from the tail, even if the max number of elements in the array is small.


Does it definitely do that? You could easily avoid it by making the "resize" really a move if you don't actually need more space.

I feel like they're over-selling it anyway by comparing to `std::deque` (which is not hard to beat). The only advantage this has over a standard ring buffer (like Rust's VecDeque) is that the data is completely contiguous, but you'll pay a small performance cost for that (regular memmove's when used as a queue), and I'm not sure how useful it is anyway.



Excellent, that's the one! Sorry, too late to edit. Perfect definition I think.


Biggest place for improvement performance-wise I can see from this is that you aren't taking cache locality into account here. Horizontal blurs with this method are going to be ripping fast, but vertical blurs are going to constantly cache miss. There are a number of ways to potentially fix this; the fact that you only need to store 3 sums for one line means you could do a few (64) columns at a time and just store that many distinct sets of sums (sorta like doing 64 columns in parallel, just making sure to use the data that's in the cache while it's still in the cache)


Cache locality, and specifically the vertical pass, was top of mind when I was trying to come up with good ways to vectorize. In the end (at least in my vector implementations) the difference between the passes wasn't too large. But most of them ended up having to do things like first converting the incoming row/col to its own float vector.

One main issue I never resolved is in the middle of the main loop, data has to be converted and written back to the source image and the incoming pixels have to be converted and loaded in. Even when doing all rows or cols in bulk (which was always faster somehow than doing batches of 32/64), that seemed pretty brutal.

I also wondered whether it might be more efficient to rotate the entire image before and after the vertical pass, but in my implementations at least, there wasn't a huge difference in the pass timings.


There's nothing wrong with spamming 1000s of raycasts per frame in a 2D game. They should be very cheap, so the performance impact of that should not be something you have to think about, and if some interaction or mechanic is easy to express with a boatload of raycasts, you should be able to just do that


It depends on the type of game, but 360 raycasts per bot is definitely a code smell that warrants a closer look at how the game is being architected.


I think OpenAI might be experimenting with smaller context lengths to save on costs or something, since I've had a few other things break down like this for me today too (even in GPT-4)


That’s what I thought too. I like using local models and those with short contexts will definitely go off into cuckooland if you start scrolling off the end of the buffer.


note that when they went and analyzed that AI, they found it was using the smudges on the Google car camera as a sort of fingerprint (as those are consistent across the many pictures taken by one camera), so that AI would almost certainly not do well against pictures taken from different sources


the last time I tried it (years ago), std::regex was taking a measurable number of milliseconds to evaluate, which is kind of a very long time even outside of performance-critical paths.


if you actually wanted to, you could probably wrap thread creation to pass the stack trace of the spawning thread into the worker thread whenever you spawn one, and then output that upon a crash as well. the library seems pretty simple and flexible.


oh yeah, C++ regex is stupidly inefficient, like "Python is faster" inefficient. I tried to use it for text replacements and pretty much immediately abandoned it


Apparently typical implementations of std::regex are so inefficient that 'popen("perl..")' is faster!

I think boost::regex is significantly faster, although not particularly fast

