Hacker News | clappski's comments

I like the priorities.

I think a core thing that's missing is that code that performs well is (IME) also the simplest version of the thing. By that, I mean you'll be:

- Avoiding virtual/dynamic dispatch

- Moving what you can up to compile time

- Setting limits on sizing (e.g. if you know that you only need to handle N requests, you can allocate the right size at start up rather than dynamically sizing)

Realistically for a GC language these points are irrelevant w.r.t. performance, but by following them you'll still end up with a simpler application than one that has no constraints and hides everything behind a runtime-resolved interface.
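As a sketch of the third point (the names and the request limit here are made up): if the capacity is known up front, you can allocate it all at startup and never touch an allocator on the hot path.

```cpp
#include <array>
#include <cstddef>

// Hypothetical example: capacity fixed at compile time, so there is no
// dynamic sizing and no allocation behind the hot path.
constexpr std::size_t kMaxRequests = 1024;

struct Request { int id; };

struct RequestPool {
    std::array<Request, kMaxRequests> slots{};  // allocated once, up front
    std::size_t used = 0;

    // Returns nullptr instead of growing when the limit is hit.
    Request* acquire() {
        if (used == kMaxRequests) return nullptr;
        return &slots[used++];
    }
};
```

The same shape works in a GC language (a preallocated pool array) even if the performance argument is weaker there.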


I generally don't worry too much about static vs dynamic dispatch. Not that I use a lot of interfaces all over the place, but there are certain places where I do (for instance the persistence layer abstraction, where it doesn't actually matter, since any overhead caused by the indirection is many orders of magnitude smaller than the cost of what the call does anyway).

Also, if someone can understand the code, they can optimize it if needed. So in a way, trying to express oneself clearly and simply can be a way to help optimization later.


When we're talking about opaque types it's really in relation to an individual translation unit: somewhere in the binary or its linked libraries the definition has to exist for the code that uses the opaque type.


Forgive my ignorance in this topic, but if "stdio.h" itself includes "bits/types/struct_FILE.h", is anything preventing me from accessing the individual elements of FILE as they are defined in the latter header file?


It looks like FILE is not opaque in glibc. Create a translation unit that includes <stdio.h> and declares a FILE variable, and it compiles fine. For comparison, create a translation unit that declares your own struct (but does not provide a definition) and declares a variable of that type, and you'll get a "storage size of 'x' isn't known" error when compiling.
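A minimal illustration of that second case (the type name is made up): a forward-declared type can be used through a pointer, but not as a variable, until its definition is visible.

```cpp
struct Hidden;                 // declaration only; definition lives elsewhere

Hidden* as_pointer = nullptr;  // fine: a pointer's size doesn't need the layout
// Hidden as_value;            // error: incomplete type / unknown storage size
```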


Thanks for the explanation. In case FILE was opaque in glibc, would the same test (including <stdio.h> and declaring a variable of type FILE) also fail with the unknown storage size error? If so, would linking against some library (-l) be necessary?

EDIT: after some more thinking I assume the key is that we wouldn't be able to have a variable of type FILE, but a pointer, whose size is always known.


You'd have an error about an incomplete type - see https://godbolt.org/z/G4Gsfn7MT

> a pointer, whose size is always known

Yeah, this is exactly how it works. You work with a pointer that acts like a void* in your code, and the library with the definition is allowed to reach into the fields of that pointer. Normally you'd have a C API like

    typedef struct Op Op;   /* opaque: the definition lives in the library */
    Op* init_op( void );
    void free_op( Op* );
    void do_something_with_op( Op* );

in the header provided by the library that you compile as part of your code, and the definition/implementation in some .a or .so/.dll that you'll link against.
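For completeness, a sketch of what the library's side might look like (the contents of Op are entirely made up); this is the only translation unit where Op's layout is visible.

```cpp
// op.cpp - compiled into the .a/.so; users of the header never see the layout.
struct Op {
    int state;   // hypothetical internals
};

Op* init_op() { return new Op{0}; }
void free_op(Op* op) { delete op; }
void do_something_with_op(Op* op) { op->state += 1; }
```

Callers only ever hold an `Op*`, so the library is free to change the struct's fields without breaking the ABI of the header.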


Reminds me of the shadow paging used in LMDB, which is effectively the same but at the page level rather than the whole tree and allows each reader their own context to read from, rather than a shared reader context.


     f" something { my_dict['key'] } something else "
This works in Python already; allowing for nesting is a big QoL improvement.


C++ has two types of polymorphism:

- Templates (compile time), which are generic bits of code that are monomorphized over every combination of template parameters that they're used with.

- Virtual functions (runtime), classic OO-style polymorphism through a vtable (I think this is similar to Rust's dyn Trait?).
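The two side by side, as a small sketch (types here are invented for illustration):

```cpp
#include <string>

// Compile-time polymorphism: a separate copy of describe<T> is
// monomorphized for every T it's instantiated with.
template <typename T>
std::string describe(const T& t) { return t.name(); }

// Runtime polymorphism: the call is dispatched through a vtable.
struct Animal {
    virtual ~Animal() = default;
    virtual std::string name() const = 0;
};

struct Dog : Animal {
    std::string name() const override { return "dog"; }
};
```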


I think they meant to say that neither templates nor virtual classes are quite up to the same level of polymorphism that Rust has. I.e. Rust type checks at definition, as opposed to copy-paste and hope for the best (w.r.t. templates) when trying to compile the code. In other words C++ templates are closer to Rust's declarative macros than to Rust generics. Hopefully concepts make this better in C++.


Unfortunately, the concepts we got in C++20 still do not allow for type checking definitions and there isn't a realistic path to get there.

I still think that C++ templates are different from Rust macros, as they are not a separate pass from type checking.


As always in C++ world, there is a workaround with static analysis, which could give errors when template code reaches out to capabilities that aren't part of the concept definition.

Either that, or switch to Circle I guess.


Does such a static analysis exist? I think the most realistic solution is to write an archetype class for each concept and instantiate each template with its relevant archetypes.

Now can we automate writing archetypes from concepts (and vice versa)? Maybe in C++64 when we finally get static reflection.


Not yet, hence the conditional form.

I really hate how C++/WinRT used the "C++ reflection is around the corner" excuse to kill C++/CX and downgrade the COM development experience back to pre-.NET days.

Apparently it isn't around the corner.

However, with type traits and a bit of if constexpr (requires { ... }) it is possible to have a kind of poor man's compile time reflection.


> haven't seen depth info past the best bid/offer (maybe exists? I'm not an expert)

Most exchanges will provide arbitrary depth by way of a pure order feed. From that you can construct your price book, which gives you the levels of depth.

Some just expose the price book, some price book and order feed, some just order feed.

Some anonymize the order feed, some don't (I think it's typically anonymized in equities, although that's not my asset class; other markets, e.g. power, need to be de-anonymized, and you can see the other buyers' and sellers' submissions).

However, that data doesn't represent the market price - I would look at constructing the fair price using the trades made on the exchange rather than the outstanding orders. From that you can create different lenses to view the fair price - e.g. volume weighted, time weighted, other types of averages.
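The volume-weighted lens is the simplest to sketch (a toy version; real feeds carry far more fields than this):

```cpp
#include <vector>

struct Trade {
    double price;
    double qty;
};

// Volume-weighted average price over a window of trades: total notional
// divided by total volume. Returns 0 when there were no trades.
double vwap(const std::vector<Trade>& trades) {
    double notional = 0.0;
    double volume = 0.0;
    for (const auto& t : trades) {
        notional += t.price * t.qty;
        volume += t.qty;
    }
    return volume > 0.0 ? notional / volume : 0.0;
}
```

A time-weighted variant would weight by the interval each price was in force rather than by traded quantity.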


So are you arguing that genocide and slavery aren't objectively bad?


I'd argue that morality and objectivity are orthogonal. Morality is an expression of values, which are subjective, never objective.

I value human life. If another human values human death, there isn't a sense in which my value is necessarily more objective than theirs.


No.


We share the load: there's a #reviews channel that anyone can post in to get peer review. If something needs attention from a specific person, arranging a call to walk through it is a good approach, where reviewee and reviewer can negotiate a time to review.


There's much more reason to do a greenfield project in C++ than Rust - experienced C++ hiring is still considerably easier! Not everything has a purely technical motivator.


> experienced C++ hiring is still considerably easier

Perhaps, but you can take experienced devs with a background in other languages and expect them to write solid Rust code. You probably only need to hire 1 or 2 people who already know Rust.


I don't know any people who program in Rust. I've been programming 25 years professionally. On my LinkedIn and through friends I probably know 50 people who do c++ programming in some capacity, including myself, to a poor level.


As someone who only knows C++ to a poor level, you’re exactly the sort of person I wouldn’t want to hire for a C++ job, but I would consider hiring for a Rust job. The bar is way higher for C++ because it’s so easy for even experienced developers to introduce mistakes.


Perhaps you should be the first in your group? ;)


I'd be tempted to pick Rust for a greenfield project even if I had only a team of C++ devs with no prior Rust experience. Having one person that can teach it would help, but it's not an absolute necessity. And luckily every team has that one person who is the Rust evangelist...

If you can program C++ you'll pick up Rust more quickly than any other convert. Unless it's a startup with a short runway, where you might not have the luxury of a slower start, I think it would probably pay off in productivity, staff retention, ease of recruiting (later), and a lot of other parameters.


I'd say the trend in the industry is to hire engineers rather than language specialists. An experienced C++ guy should be able to learn the Rust basics in a few months.


> Is it boring? Was I right about that?

> who uses programming as a problem solving tool

Programming can be boring, software development is much more exciting!

All software is solving a problem, same as the software that’s the output of the programming you do.

Those problems might be purely technical (like virtualising different CPU architectures), or focused on improving the efficiency with which others can solve problems (like the software running Stack Exchange), or something completely different.

Solving all of those problems requires more than programming, same as the problems you’re trying to solve need more than programming to fully resolve. Building for maintainability, reliability etc. requires much more than mindlessly programming the software.

But even the programming itself is interesting, especially when solving difficult problems.


A lot of software being written is to solve the "problem" of humans doing repetitive/simple work that can be automated. It's interesting to ponder that, say, a year of development may prevent tens of thousands of man hours of staff being paid for their time. This trend makes life less and less personal: interactions with another person for simple things like groceries and banking are completely replaced with tapping on a screen, plus occasional frustration, as every company has its own custom UI that has to be understood before you can interact with the business's processes.


Most of what people are paid to do, doesn't need to be done in the first place.

