Layered ignorance and incompetence from years of exponential growth, and linear education. I would also hazard a guess that the general disdain for understanding context and proven tech is another major factor.
I'm looking forward to the next decades: there's a demographic collapse on the horizon, and global supply chains are weakening. The incentives to bring manufacturing back onshore will increase, and that will require full understanding and will likely put constraints on supply anyway, just less so than a total collapse. Software will have to adapt and deliver the same compute results at much better power efficiency.
Whenever I interview people I explicitly make a section of the interview where I answer the candidate's questions, just in case it doesn't come up naturally or the candidate is a bit meek in that department. I think this is an important part of the interview. But if a candidate asked a bunch of these questions, it certainly would harm my opinion of them. I do not recommend using these questions as a guide.
Why not? For one, a lot of them are petty and kind of annoying, so you will come across as petty and kind of annoying. (Oh my God, if someone asks "tabs or spaces?" that is a no-hire right there.)
But also, note that many of these are leading questions for which there is supposedly a definite right or wrong answer ("Do you regularly correct technical debt? Do you use MVC or similar code structuring?" [This second question indicates a serious no-hire.]). Either these questions are appropriate, which implies you know more than the company does about how to develop software, or that you are at least its peer, in which case why are you interviewing there in a non-executive role, or why are you not off starting your own thing; or they are inappropriate, in which case you are signaling that you are going to be second-guessing everything the company does all the time, in a Dunning-Kruger kind of way, which is not something that anyone wants to deal with.
Either way, the attitude here is that the applicant can stand in judgement over the company, comprehensively across all these categories involved in programming, from the outside, with no real knowledge of what happens there day-to-day. This implies that the company cannot have much to teach the applicant or offer in terms of knowledge and technique, because if it could, then the answers to many of these questions would be surprising to the applicant or would even come across as "incorrect". So you're signaling that you think you know as much or more than the company about how to develop software well -- but news flash, most projects developed even with positive answers to all these questions are trash fires anyway. But some projects are very successful, so clearly there are factors much more important than are covered by these questions. If you treat these questions as important, you are signaling that you don't really know there are more important questions (while also being annoying).
Lastly, note that a lot of these questions are about minimizing the impact of the job on your life, "work-life balance" kinds of stuff. And there's a paradox here. On the one hand, I am not a fan of crunch or any kind of overworking people, and I do think it's reasonable for many people to avoid jobs that put you into that kind of situation. On the other hand, doing good work, for many people, is a primary vector for finding meaning in life; if you are not interested in investing time, energy and effort into a job, it probably is not meaningful to you, so these questions are kind of a checklist for finding a hollow, meaningless job, actually. And if you ask a bunch of these kinds of questions of an employer, even one that is very careful about providing "work-life balance" (btw a phrase I never liked, I think it's a deep misconception, but at least we know what we are talking about), the employer is going to notice you keep asking this stuff and is going to get the feeling that you are someone who doesn't want to work very hard, unless you actively signal something like "I intend to work very hard within company hours but clock out exactly at 5pm," or whatever. But if you don't carefully do this kind of balanced signaling, and instead just ask a bunch of the questions from this list, you'll be giving the impression that you are someone most people don't want to hire, because why are you so preoccupied with minimizing work? It's a job; you are supposed to want to work there.
Questions I would recommend in lieu of those on this list:
* Why is the work important? What do I get from this job besides money (in terms of skills or other experience)?
* How is the company deciding what is "good" or "bad" in terms of the software or the development process? Aesthetically, what is considered good, what is considered bad?
* What kinds of decisions do I have the authority to make, and what kinds of decisions am I expected to defer upward?
* How closely will I be managed? What size of problem am I expected to solve independently, and what size of problem should be deferred upward?
* How far can I rise in my field by doing this job?
* How much of my time do I spend programming, and how much in meetings?
Note that none of these are questions with a pre-known right answer (except maybe a little bit the meetings one). They are also reasonable questions that should help maintain a good impression.
> if you aren't precise to the millisecond things will jump around
It is correct that precision is very important here, but a millisecond is way too coarse: at 120fps, a millisecond is 1/8 of the frame time, and you'd get horrible jitter.
Ehh. The framerate doesn't actually matter that much, because in the end the actual pixel changes color at the same rate regardless of FPS. No shader is periodic at 120 Hz; they're very rarely periodic at even 10 Hz.
10 Hz is already a slow strobe light; 1 ms deltas means that each flash is within ~1% of the correct color.
If you're doing something like raymarching in the pixel shader, then you might want sub-millisecond resolution. In 99% of normal shaders, I don't think so. That kind of precision comes into play more with moving objects, where a tiny time delta can mean the difference between a pixel being completely lit or completely dark. Even then though, bad time resolution is just as likely to manifest as motion blur or something.
If you can’t reverse a binary tree, or don’t even know how to approach the problem, you can’t do most of programming. Fixing this should be a high priority, but the author seems to have no awareness of this.
Programming is hard. It takes a lot of skill to do it well. If the author seemed interested in acknowledging this and developing skill, the article would not come off as whiny and pointless. But it does, because he’s not interested in identifying and fixing the problem, which resides in his own house.
Reversing a binary tree isn't some crazy algorithmic knowledge. I'm concerned that so many people here think it is. What's gone wrong that makes people think reversing a binary tree is up there with hand-coding machine learning techniques?
Reversing a binary tree is a basic test of whether you can implement a trivial requirement given a spec. How is that not related to what you do every day on the job? You get a lot of requirements, most of which are trivial tasks needed to maintain compatibility with some broken external system that will never get fixed. The solutions are two to five lines of code, once you've taken the time to understand the requirements properly - just like reversing a binary tree.
This is neither a task that needs to be studied for nor an unfair pop quiz. It's an easier version of fizzbuzz. (You don't even need to worry about gotchas like getting case ordering wrong the way you do in fizzbuzz.) All it checks is that you have the basic skills necessary for reading and comprehending requirements and then meeting them with trivial code.
If you can't do that, what should an interviewer conclude about your capacity to do the job?
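For concreteness, here is a minimal sketch of the kind of solution being described, in Rust since that's the language used elsewhere in this thread (the Node type is made up purely for illustration); the point is just that once the spec is understood, the code itself is a few lines of swapping:

use std::mem;

// A toy binary tree node; children are owned, optional boxes.
struct Node<T> {
    value: T,
    left: Option<Box<Node<T>>>,
    right: Option<Box<Node<T>>>,
}

// "Reverse" (invert) the tree: swap left and right at every node.
fn invert<T>(node: &mut Node<T>) {
    mem::swap(&mut node.left, &mut node.right);
    if let Some(left) = node.left.as_deref_mut() {
        invert(left);
    }
    if let Some(right) = node.right.as_deref_mut() {
        invert(right);
    }
}

fn main() {
    let mut tree = Node {
        value: 1,
        left: Some(Box::new(Node { value: 2, left: None, right: None })),
        right: None,
    };
    invert(&mut tree);
    assert!(tree.left.is_none());
    assert_eq!(tree.right.as_ref().map(|n| n.value), Some(2));
}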
If you already know the algorithms, sure. The thing about reversing a binary tree is that no one already knows the algorithm. It's not something you'll find in any textbook. It has no intrinsic meaning, so why would it be there?
That makes it far better for an interview question. It's just a test of how you handle the sort of nonsense task that makes up so much day to day work. "Why do I have to frobnicate these doodads?" "Because the doodad processor requires it. That's just how it works, so we need to do it."
If you're doing matrix multiplication from scratch it sure isn't!
But I do think that by hand implementations of k means or k-nn or cross validation would be appropriate for folks who are interviewing for ML positions...
"Can't reverse a binary tree" is different from "Can't do it flawlessly enough in an interview setting". It also isn't necessary to provide business value. The things I spend 98% of my time on at work are not raw data structures, but leveraging those data structures.
The fact that many companies interview for something that is mostly unpracticed by candidates in the field (only by those practicing it at home), and that won't actually be done on the job if hired, is a very strange place for our industry to find itself in.
There's nothing strange about it. You are simply looking at the problem from the point of view of the candidate instead of from the point of view of the business.
The vast majority of developers can absolutely do 80% of the job I give them to some sufficiently reasonable degree, but I'm not hiring people for the 80% of the job. My issue as a company is finding people who can nail the remaining 20% of the job. That's very hard to find but it's absolutely critical in a business as competitive and cutthroat as technology.
The fact that most developers don't have a basic understanding of fundamental and core concepts really does have a measurable impact on the quality of software and almost all of us, as consumers of software, pay a penalty because of it.
We put up with slow, bloated, and inadequate software because we take for granted core computing concepts, fail to appreciate the hardware our software runs on, and write software in an unbelievably indulgent manner.
Some of this is alleviated by relying on frameworks provided by the major tech companies to deliver software, and that certainly helps quite a bit, but the benefit doesn't come for free: it results in vendor lock-in, code rot, and a genericization where many software products end up bland and derivative because everyone is gluing together the same frameworks.
Speaking as a hiring manager, it totally is from the point of the view of the business.
I don't need 100% of my hires able to handle 100% of the tasks. I have specialists at every level anyway; if I need someone writing data structure libraries I will focus on hiring for that. Most of what I, and most software shops, need is not that.
"The fact that most developers don't have a basic understanding etc" - citation needed. Most developers do. Even those untrained. But I don't need a developer to have reviewed data structure pedantry prior to an interview; I need developers that know they don't know everything, and know how to find answers.
"We put up with slow, bloated, and inadequate software because blah blah blah" - no, we put up with it because the market doesn't care about it. All the pressures on a business are features; speed is immaterial past a certain point.
>"We put up with slow, bloated, and inadequate software because blah blah blah" - no, we put up with it because the market doesn't care about it.
Yep. Company culture / psychology is also a factor. You can hire a bunch of amazing programmers who deeply care about speed and elegance but unless business pressures exist to incentivize those things and also the company's process and management are set up to encourage those things, it will not matter. A developer can care about speed and elegance, but if his bosses want him to use bloated frameworks because "everybody uses them" and it is easy to find developers for them, or if the feature requirements keep shifting, or if his bosses want him to spend time writing some pointless tests because of supposed "best practices", or if any of a bunch of other factors are at play, it will not matter much.
So I am not sure that a shortage of developers who are competent at those things is the real bottleneck.
>I don't need 100% of my hires able to handle 100% of the tasks.
No, but every facet of your software that is treated as a specialty, or handled by a specialist, introduces a bottleneck in the development process.
Understanding the basic algorithms and data structures should not be something your developers have to specialize in and it should not be something that introduces bottlenecks into your work flow.
>"The fact that most developers don't have a basic understanding etc" - citation needed.
And yes, it is very much reproducible; I manage to reproduce it consistently. Every single applicant to a job posting of mine goes through an unbelievably simple screener similar to Fizz Buzz, and the fail rate, to this day, is consistently over 50%.
I suppose you hire a specialist for that as well.
>All the pressures on a business are features; speed is immaterial past a certain point.
No, it's that most businesses don't even realize the importance of performance because they have no idea how to even begin measuring its impact.
As an example of this... consider a research study Google conducted to determine the maximum number of search results to display per page. Google's survey said that users wanted 20 results per page, so Google conducted an A/B test to compare traffic between 10 results per page and 20 results per page.
Their finding was that 20 results per page resulted in a tremendous drop in traffic, despite what users had told them. So certainly users were wrong and Google should stick to 10 results per page, right?
Wrong.
After very carefully studying the numbers, what Google realized was that there was an unforeseen factor that they did not take into account in their experiment nor did they anticipate it even being a factor: performance.
The 20 results per page took something like a couple hundred milliseconds longer to deliver than the 10 results per page, and that small performance delay was all it took to degrade user engagement and traffic on their site.
Once Google normalized performance so that 10 results took as much time as 20 results... traffic on the 20 results was substantially improved over traffic on 10 results. Furthermore this result generalized, as Google improved the performance of search results they continued noticing an improvement in traffic.
In other words... performance is treated by users as a feature, and yet it's a feature that most people don't even realize they want until they know what it's like to use reasonably fast software. Furthermore, the more performance you have, the more of a "usability budget" you have available to deliver features to your users.
You are welcome to spend your "usability budget" on hiring developers who can't answer basic questions about data structures and algorithms, and honestly if that works for your business then do it. But please understand some of us do care about writing reasonably fast software and we wish to hire developers who can answer basic technical questions about the profession they are engaged in.
Sure, I end up paying more for those developers, a lot more... but in the end that cost is but a tiny fraction of the total cost of running a company and I'm more than happy to pay it knowing that I have a team of qualified professionals who are able to implement the fundamental data structures and algorithms that underlie this profession.
I'd say that reversing a binary tree can be solved in two ways.
1. Never having thought about it before, generate an acceptable whiteboard algorithm from scratch while having a person or three bore holes in your back.
2. Throw a canned answer on the board.
My impression is that large companies prefer Door Number 2.
This entire article is nonsense. To a first approximation, the speed of your program in 2021 is determined by locality of memory access and overhead with regard to allocation and deallocation. C allows you to do bulk memory operations, Rust does not (unless you turn off the things about Rust that everyone says are good). Thus C is tremendously faster.
There is this habit in both academia and industry where people say "as fast as C" and justify this by comparing to a tremendously slow C program, but don't even know they are doing it. It's the blind leading the blind.
The question you should be asking yourself is, "If all these claims I keep seeing about X being as fast as Y are true, then why does software keep getting slower over time?"
(If you don't get what I am saying here, it might help to know that performance programmers consider malloc to be tremendously slow and don't use it except at startup or in cases when it is amortized by a factor of 1000 or more).
> To a first approximation, the speed of your program in 2021 is determined by locality of memory access and overhead with regard to allocation and deallocation.
I wouldn't call that a first approximation. Take ripgrep as an example. In a checkout of the Linux kernel with everything in my page cache:
$ time rg zqzqzqzq -j1
real 0.609
user 0.315
sys 0.286
maxmem 7 MB
faults 0
$ time rg zqzqzqzq -j8
real 0.116
user 0.381
sys 0.464
maxmem 9 MB
faults 0
This alone, to me, says "to a first approximation, the speed of your program in 2021 is determined by the number of cores it uses" would be better than your statement. But I wouldn't even say that. Because performance is complicated and it's difficult to generalize.
Using Rust made it a lot easier to parallelize ripgrep.
> C allows you to do bulk memory operations, Rust does not (unless you turn off the things about Rust that everyone says are good). Thus C is tremendously faster.
Talk about nonsense. I do bulk memory operations in Rust all the time. Amortizing allocation is exceptionally common in Rust. And it doesn't turn off anything. It's used in ripgrep in several places.
> There is this habit in both academia and industry where people say "as fast as C" and justify this by comparing to a tremendously slow C program, but don't even know they are doing it. It's the blind leading the blind.
I've never heard anyone refer to GNU grep as a "tremendously slow C program."
> The question you should be asking yourself is, "If all these claims I keep seeing about X being as fast as Y are true, then why does software keep getting slower over time?"
There are many possible answers to this. The question itself is so general that I don't know how to glean much, if anything, useful from it.
> This alone, to me, says "to a first approximation, the speed of your program in 2021 is determined by the number of cores it uses" would be better than your statement. But I wouldn't even say that.
You chose an embarrassingly parallel problem, which most programs are not. So you cannot generalize this example across most software. When you try to parallelize a structurally complicated algorithm, the biggest issue is contention. I was leaving this out because it really is a second-order problem -- most software today would get faster from cleaning up its memory usage than from trying to parallelize it. (Of course it'd get even faster if you did both, but memory is the first-order effect.)
> There are many possible answers to this.
How come so few people are concerned with the answers to that question and which are true, but so many people are concerned with making performance claims?
Well, I mean, you chose an embarrassingly general statement to make? Play stupid games, win stupid prizes.
> which most programs are not
Programs? Or problems? Who says? It's not at all obvious to me that it's true. And even if it were true, "embarrassingly parallel" problems are nowhere close to uncommon.
> When you try to parallelize a structurally complicated algorithm, the biggest issue is contention.
With respect to performance, I agree.
> How come so few people are concerned with the answers to that question and which are true, but so many people are concerned with making performance claims?
The question is itself flawed. Technology isn't fixed. We "advance" and try to do more stuff. This is not me saying, "this explains everything." Or even that "more stuff" is a good thing. This is me saying, "there's more to it than your over-simplifications."
If you do not understand that "embarrassingly parallel" is a technical term and that it's generally understood that most programs are not easily parallelizable, there is not a discussion we can have here.
Really feels like you're just digging yourself deeper into a hole here:
Burntsushi began with "here, parallelization is beating out memory locality and optimization in its impact," but explicitly declined to generalize this the way you generalized your claim about memory.
He further pointed out that ripgrep is fast not just because of parallelization, but also because of how it handles memory.
Then you come back with "you can't always parallelize this well" (which burntsushi agreed with from the beginning) and "you also need to deal with memory" (which ripgrep does)? How is this burntsushi's problem with understanding "embarrassingly parallel" and not your problem with understanding Rust?
I agree that a discussion is difficult. Your comments are so vague and generalized that it's not clear what you're talking about at all. Bring something more specific to the table like the OP did instead of pontificating on generalities.
Thanks for making The Witness, Jonathan. It's one of my favorite games of all time and an exemplar of what it means to work through the consequences of logical axioms.
Makes me all the more sad that you're consistently unable to work through the consequences of Rust's axioms.
I don't disagree that memory access is nowadays critical for speed, but I haven't found Rust standing in the way of optimizing it.
As I've pointed out in the article, Rust does give you precise control over memory layout. Heap allocations are explicit and optional. In safe code. You don't even need to avoid any nice features (e.g. closures and iterators can be entirely on stack, no allocations needed).
Move semantics enables `memcpy`ing objects anywhere, so they don't have a permanent address, and don't need to be allocated individually.
In this regard Rust is different from e.g. Swift and Go, which claim to have C-like speed, but will autobox objects for you.
Bulk operations are not really about layout, they are about whether you mentally consider each little data structure to be an individual entity with its own lifetime, or not, because this determines what the code looks like, which determines how fast it is. (Though layout does help with regard to cache hits and so forth).
I don't know what you're trying to imply that Rust does, but I'll reiterate that Rust lifetimes don't exist at code generation time. They're not a runtime construct, they have zero influence over what code does at run time (e.g. mrustc compiler doesn't implement lifetimes, but bootstraps the whole Rust compiler just fine).
If you create `Vec<Object>` in Rust, then all objects will be allocated and laid out together as one contiguous chunk of memory, same as `malloc(sizeof(struct object) * n)` in C. You can also use `[Object; N]` or ArrayVec, which is identical to `struct object arr[N]`. It's also possible to use memory pools/arenas.
And where possible, LLVM will autovectorize operations on these too. Even if you use an iterator that in source code looks like it's operating on individual elements.
Knowing your other work, I guess you mean SoA vs AoS? Rust doesn't have built-in syntax for these, but neither does the C we're talking about here.
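A small sketch of the layout point above (names made up for illustration, nothing here is from the thread): the Vec is one contiguous heap allocation, the array is pure stack, and the elementwise loop over contiguous data is the kind of thing LLVM can autovectorize.

// All 1024 Objects live in one contiguous heap allocation,
// equivalent in layout to malloc(sizeof(struct object) * 1024) in C.
#[derive(Clone, Copy, Default)]
struct Object {
    x: f32,
    y: f32,
}

fn main() {
    let mut objects = vec![Object::default(); 1024]; // one heap chunk
    let on_stack = [Object::default(); 16];          // like `struct object arr[16]`, no heap

    // Iterates element by element in the source, but over contiguous data,
    // so the optimizer is free to vectorize it.
    for o in objects.iter_mut() {
        o.x += 1.0;
        o.y *= 2.0;
    }

    // Use both values so the example compiles without unused warnings.
    assert_eq!(on_stack.len() + objects.len(), 1040);
}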
> They're not a runtime construct, they have zero influence over what code does at run time (e.g. mrustc compiler doesn't implement lifetimes, but bootstraps the whole Rust compiler just fine).
This kind of reasoning seems like it makes sense, but actually it is false. ("Modern C++" people make the same arguments when arguing that you should use "zero-cost abstractions" all over the place). Abstractions determine how people write code, and the way they write the code determines the performance of the code.
When you conceptualize a bunch of stuff as different objects with different lifetimes, you are going to write code treating stuff as different objects with different lifetimes. That is slow.
> If you create `Vec<Object>` in Rust, then all objects will be allocated and laid out together as one contiguous chunk of memory
Sure, and that covers a small percentage of the use cases I am talking about, but not most of them.
> When you conceptualize a bunch of stuff as different objects with different lifetimes, you are going to write code treating stuff as different objects with different lifetimes. That is slow.
This is not how lifetimes work at all. In fact this sounds like the sort of thing someone who has never read or written anything using lifetimes would say: even the most basic applications of lifetimes go beyond this.
Fundamentally, any particular lifetime variable (the 'a syntax) erases the distinctions between individual objects. Rust doesn't even have syntax for the lifetime of any individual object. Research in this area tends to use the term "region" rather than "lifetime" for this reason.
Lifetimes actually fit in quite nicely with the sorts of things programs do to optimize memory locality and allocations.
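To illustrate the "region, not per-object lifetime" point, a minimal hypothetical sketch (not from the thread): the single lifetime 'a below covers every reference borrowed from the pool, and the checker draws no distinction between the individual strings inside it.

// 'a names one region for everything borrowed from `pool`;
// individual objects within it are not tracked separately.
fn longest<'a>(pool: &'a [String]) -> Option<&'a str> {
    pool.iter().map(|s| s.as_str()).max_by_key(|s| s.len())
}

fn main() {
    let pool = vec!["short".to_string(), "a much longer string".to_string()];
    assert_eq!(longest(&pool), Some("a much longer string"));
}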
> Sure, and that covers a small percentage of the use cases I am talking about, but not most of them.
Fortunately the other stuff you are talking about works just fine in Rust as well.
Rust's flavor of RAII is different from C++'s, because Rust doesn't have constructors, operator new, implicit copy constructors, and doesn't expose moved-out-of state.
Rust also has "Copy" types which by definition can be trivially created and can't have destructors. Collections take advantage of that (e.g. dropping an array doesn't run any code).
So I don't really get what you mean. Rust's RAII can be compiled to plain C code (in fact, mrustc does exactly that). It's just `struct Foo foo = {}` followed by optional user-defined `bye_bye(&foo)` after its last use (note: it's not free/delete, memory allocator doesn't have to be involved at all).
I suspect you're talking about some wider programming patterns and best practices, but I don't see how that relates to C. If you don't need per-object init()/deinit(), then for the same you wouldn't use RAII in Rust either. RAII is an opt-in pattern.
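A minimal sketch of that "RAII compiles down to plain init/deinit" equivalence (the type and names here are made up): Drop is the only machinery involved, and it amounts to an implicit call after last use.

struct Foo {
    handle: i32,
}

// The Rust counterpart of the optional user-defined bye_bye(&foo):
// runs automatically after `foo`'s last use.
impl Drop for Foo {
    fn drop(&mut self) {
        println!("closing handle {}", self.handle);
    }
}

fn main() {
    let foo = Foo { handle: 7 }; // like `struct Foo foo = {7}` in C
    println!("using handle {}", foo.handle);
} // drop(&mut foo) runs here; no memory allocator involved for a stack value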
RAII is completely orthogonal to lifetimes, for one thing. You can have either without the other.
But, I am familiar with the kind of thing you're complaining about here, and frankly the mere existence of RAII is not its cause. Working with a large dataset, managing allocation/layout/traversal in a holistic way, you just... don't write destructors for every tiny piece. It works fine, I do it all the time (in both Rust and C++).
You haven't really explained in any detail what is slow about "treating stuff as objects with different lifetimes", and specifically how Rust differs there from C. Can you give an example?
Maybe you'd be interested to hear that Rust's borrow checker is very friendly to the ECS pattern, and works with ECS much better than with the classic OOP "Player extends Entity" approach.
> (If you don't get what I am saying here, it might help to know that performance programmers consider malloc to be tremendously slow and don't use it except at startup or in cases when it is amortized by a factor of 1000 or more).
Rust is now getting support for custom local allocators a la C++, including in default core types like Box<>, Vec<>, and HashMap<>. It's an unstable feature, hence not yet part of stable Rust, but it's absolutely being worked on.
I guess I am confused by the question. The job of the borrow checker is to constrain what you are allowed to do, and it's well-understood that it constrains you to a subset of correct programs, so that you stay in a realm that is analyzable.
Sure, but the borrow checker only operates on references. Rust gives you the tools to work with raw everything, if you dip into unsafe. Memory allocators, doing this kind of low-level thing, don't work with references. Let's say you want to implement a global allocator (this is the only current API in stable Rust, non-global allocators are on the way). The trait you use, which gives you the equivalent of malloc/free, has this signature:
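(Reproduced from std::alloc, abridged; the provided methods such as realloc and alloc_zeroed are omitted here:)

use std::alloc::Layout;

pub unsafe trait GlobalAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8;
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout);
}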
Note the *mut u8 rather than say, &mut u8. Most people would not be using this interface directly, they'd be using a data structure that uses it internally.
Now, there's a good argument to be had about safe and unsafe, how much you need, in what proportion, and in what kinds of programs... but when you say things like "C allows you to do bulk memory operations, Rust does not" and ask about the borrow checker when talking about allocators, to someone who is familiar with Rust's details, it seems like you are misinformed somehow, which makes it really hard to engage constructively with what you're saying.
I'll try to further bridge some of the understanding gap.
People in this thread keep talking about "arena allocators" as if they are special things that you would use a few times (Grep Guy said this above, for example), or, here you imply they would be used internally to data structures, in a way that doesn't reach out to user-level.
That makes them not nearly as useful as they can be!
The game we are working on is currently 100k lines (150k lines if you count comments etc), and almost all allocations in the entire program are of this bulk type, in one way or another. Like if I want to copy a string to modify it a little bit to then look up a bitmap or make a filename, those are temporary allocations at user level and are not wrapped by anything. The string type is not some weird heavyweight C++ std::string kind of thing that wraps a bunch of functionality, it is just a length and a data pointer.
So the proposal to use unsafe in this kind of context doesn't make sense, since then you are putting unsafe everywhere in the program, which, then, why pretend you are checking things?
You can say, "well you as the end-user shouldn't be doing this stuff, everything should be wrapped in structures that were written by someone smarter than you I guess," but that is just not the model of programming that I am doing.
I understand how you can think the statement "Rust does not (allow you to do bulk memory operations)" is false, but when I say this, part of what I am including in "bulk memory operations" is the ability (as the end user) to pretend like you are in a garbage-collected language and not worry about the lifetime of your data, without having to take the performance penalty of using a garbage-collected language. So if you add back in worrying about lifetimes, it's not the same thing.
If you think "bulk memory allocation" is like, I have this big data structure that manages some API and it has some linked lists, and instead of allocating those nodes on the heap I get them from an arena or pool managed by the bigger structure ... that's fine, it is better than not doing it, but it doesn't help the end user write simpler code, and in practical terms it means that most of the allocations in the program are going to be non-bulk, because there's just too much friction on doing them broadly.
If it helps, I can revise my statement to "Rust enables you to do certain kinds of internal bulk memory allocation, but using bulk allocation broadly and freely across your program goes against the core spirit of the language" ... that sounds pretty uncontroversial? Then to bring it back to the original post, I would say, "This kind of broad use of bulk allocation is important for high performance and simplicity of the resulting code."
One last note, I am pretty tired of the "you don't understand Rust, therefore you are beneath us" line that everyone in the Rust community seems to deploy with even the slightest provocation -- not just when responding to me, but to anyone who doesn't just love Rust from top to bottom. Really it makes me feel that the only useful thing to do is just ignore Rust folks and go do useful things instead. I know I am not the only person who feels this way.
> People in this thread keep talking about "arena allocators" as if they are special things that you would use a few times (Grep Guy said this above, for example)
I didn't say anything about arena allocators. What I said was that amortizing allocation was routine and commonplace in ripgrep's code. I definitely wouldn't say that amortizing allocation is "special" or something I use a "few" times. As one example of amortizing allocs, it's very common to ask the caller for some memory instead of allocating memory yourself.
> I am pretty tired of the "you don't understand Rust, therefore you are beneath us" line that everyone in the Rust community seems to deploy with even the slightest provocation
Kind of like opening a comment with "This entire article is nonsense." Right? Snubbing your nose and then getting miffed by the perception of others snubbing their nose at you is a bunch of shenanigans. And then you snub your nose at pretty much everyone: "It's the blind leading the blind." I mean, c'mon dude.
The problem with your comments is that they lack specifics. Even after this comment where you've tried to explain, it's pretty hard for me to understand what you're getting at. I suspect part of the problem is your use of this term "bulk allocation." Is it jargon that refers to the specific pattern you have in mind? Because if it is, I can't find it documented anywhere after a quick search. If it's not jargon, then "bulk allocation" could mean a lot of things, but you clearly have a very specific variant of it in mind.
It's clear to me that your argument is a very subtle one that requires nuance and probably lots of code examples to get the point across. Going about this at the other end---with lots of generalities and presumptions---just seems like a fool's errand.
This style of memory management can be as pervasive as you like! You are reading way more detail out of people's comments than they put there, and then getting upset about your misinterpretation.
If every throwaway string in your program comes from an arena that you clear later, great! Rust won't stop you, or even force you to use unsafe every time you build one. The unsafe code goes in a "give me a fresh chunk of temporary memory" function, and that function is safe to call all over the place: unsafe-in-a-safe-function is a common pattern for extending the set of analyzable programs.
(It's also worth pointing out that Rust's primitive string type is "just a length and a data pointer," so once you've allocated one out of an arena like this, you can do all the nice built-in string-y things with it, with no std::string-like interference.)
The Rust compiler itself uses this sort of bulk memory all the time. It's not limited to the internals of data structures there; it's spread across larger phases and queries of its operation, with all kinds of stuff allocated the same way.
Now, to be fair, this is not the default: e.g. Rust's standard library collections don't participate. But this is why everyone keeps mentioning custom allocators to you; there is ongoing work to extend these collections with the ability to control how they perform their allocation!
> One last note, I am pretty tired of the "you don't understand Rust, therefore you are beneath us" line that everyone in the Rust community seems to deploy with even the slightest provocation -- not just when responding to me, but to anyone who doesn't just love Rust from top to bottom.
You would get this kind of reaction a lot less often if you didn't make vague or nonsense claims about it so often.
Okay, but if I do this everywhere, then I de facto don't have memory safety. Why, then should I use Rust and pretend like I am getting memory safety? Why wouldn't I use a lower-friction language with a faster compiler?
It looks to me like the Rust community has this weird way of wanting to have its cake, and eat it too, about memory. Y'all want to advertise how important memory safety is, how great it is to have, and so forth. Then in cases like this, it's always "oh but you just use unsafe, it's fine". These stories are mutually inconsistent. Either you have memory safety or you don't. Paying the cost that Rust makes programmers pay for memory safety, and then not actually getting memory safety, is the worst of both worlds.
Then when you guys say I am making nonsense claims because of course you can have your cake and also eat it as long as you use the Rust programming language, well, it's just pretty weird at that point.
Memory safety is not some sort of binary thing which you either have or you don't. All memory safe environments are built on a foundation of unsafe code. For example, Java being memory safe assumes the JVM or JNI code doesn't have any memory safety bugs.
What Rust does is reduce the amount of code that's memory unsafe, that needs to be triply reviewed and audited. Reduction of the scope of high-scrutiny code is the single most leveraged thing that can be done to improve code quality in a large, long-running project. Why? Because it lets you do careful, time-consuming analysis on a small part of your codebase (the bits that are marked unsafe), then scale the analysis up to the rest of your code.
> These stories are mutually inconsistent. Either you have memory safety or you don't.
This is... what can I say. This is simply incorrect. It pains me to say this as a fan of your games but you really don't seem to have any idea what you're talking about.
> Okay, but if I do this everywhere, then I de facto don't have memory safety.
No, that's not how this works. You write the unsafe code in one place and make sure it's correct (just like you'd do in C or Jai), and then you wrap it in a function signature that lets the compiler apply its memory safety checks to all the places that call it (this is what Rust gives you over C).
This is still a meaningful improvement to memory safety over C. The compiler checks the majority of your program; if you still see memory bugs now you only have a small subset to think about when debugging them.
This is also not very different from a hypothetical language with your "actual" memory safety: in that case, you still have to consider the correctness of the compiler checks themselves. Rust just takes a more balanced and flexible approach here and moves some of that stuff out of the compiler. (In fact this simplifies the compiler, which increases confidence in its correctness...)
Rust has been very clear about all this from the beginning. If you are still reading people's claims about Rust memory safety a different way, that's on you.
> Why wouldn't I use a lower-friction language with a faster compiler?
That's totally up to you! I don't have a problem with people using other languages for these kinds of reasons. My goal here is not to convert you, but to cover more accurately what Rust is and isn't capable of. (At the root of this thread, that's things like "writing fast software with effective memory management styles.")
They are the opposite of meaningless. This is just straight-up incorrect, both in theory and in practice.
Please take some time and think about this a bit more. Please think about how code review processes work, how audits work, how human attention spans work. Please think about how people endlessly nitpick small PRs but accept large ones with few comments. What unsafe does is make it easy to spot the small bits of critical code to nitpick while not having to worry about safety for the rest.
They're conditionally meaningful: if a small amount of your program is correct, the entire program satisfies some useful properties.
This may or may not be something you care about, but it is certainly a meaningful tool that is quite useful to me, including when I use your type (3) style described in the sibling thread.
> here you imply they would be used internally to data structures, in a way that doesn't reach out to user-level.
Ah! I think I am understanding you a bit better. The thing is, ultimately, Rust is as flexible as you want it to be, and so there are a variety of options. This can make it tricky, when folks are talking about slightly different things, in slightly different contexts.
When you say "doesn't reach out to user level," what I mean by what I said was that users don't generally call alloc and dealloc directly. Here, let's move to an actual concrete example so that it's more clear. Code is better than words, often:
use bumpalo::{Bump, boxed::Box};

struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let bump = Bump::with_capacity(256);
    let c = Box::new_in(Point { x: 5, y: 6 }, &bump);
}
This is using "bumpalo", a very straightforward bump allocator. As a user, I say "hey, I want an arena backed by 256 bytes. Please allocate this Point into it, and give me a pointer to it." "c" here is now a pointer into this little heap it's managing. Because my points are eight bytes in size, I could fit 32 points here. Nothing will be deallocated until bump goes out of scope.
But notably, I am not using any unsafe here. Yes, I am saying "give me an allocation of this total size", and yes I am saying "please allocate stuff into it and give me pointers to it," but generally, I as a user don't need to mess with unsafe unless I'm the person implementing bumpalo. And sometimes you are! Personally, I work in embedded, with no global heap at all. I end up using more unsafe than most. But there's no unsafe code in what I've written above, and it's still gonna give you something like what you said you're doing in your current game. Of course, you probably want something more like an arena than a pure bump allocator. Those exist too. You write 'em up like you would anything else. Rust will still make sure that c doesn't outlive bump, but it'll do that entirely at compile time, no runtime checks here.
Oh, and this is sorta random but I didn't know where to put it: Rust's &str type is a "pointer + length" as well. Using this kind of thing is extremely common in Rust, we call them "slices" and they're not just for strings.
> You can say, "well you as the end-user shouldn't be doing this stuff, everything should be wrapped in structures that were written by someone smarter than you I guess," but that is just not the model of programming that I am doing.
While that's convenient, and in this case I am showing that, the point is that it's about encapsulation. I don't have to use this existing allocator if I wanted to write something different. But because I can encapsulate the unsafe bit, no matter who is writing it, I need to pay attention in a smaller part of my program. Maybe I am that person, maybe someone else is, but the benefit is roughly the same either way.
> So if you add back in worrying about lifetimes, it's not the same thing.
To be super clear about it, Rust has raw pointers that are the same as in C. No lifetimes. If you want to use them, you can. The vast, vast, vast majority of the time, you do not need the flexibility, and so it's worth giving it up for the compile-time checks.
> If you think "bulk memory allocation" is like...
It's not clear to me whether the API I'm talking about above is what you mean here, or something else. It's not clear to me how you'd get simpler than "please give me a handle to this part of the heap," but I haven't seen your latest Jai streams. I am excited to give it a try once I am able to.
> but using bulk allocation broadly and freely across your program goes against the core spirit of the language
I don't know why you'd think these techniques are against the core spirit of the language. Rust's primitive array type is literally "give me N of these bits of data laid out next to each other in memory." We had a keynote at Rustconf about how useful generational arenas are as a technique in Rust. As a systems language, Rust needs to give you the freedom to do literally anything and everything possible.
> One last note, I am pretty tired of the "you don't understand Rust, therefore you are beneath us"
To be clear, I don't think that you or anyone else is "beneath us," here. What I want is informed criticism, rather than sweeping, incorrect statements that lead people to believe things that aren't true. Rust is not perfect. There are tons of things we could do better. But that doesn't mean that it's not right to point out when facts are different than the things that are said. You of all people seem to appreciate a forward communication style.
> rather than sweeping, incorrect statements that lead people to believe things that aren't true
I agree, and if I say things that are incorrect, then I definitely want to fix them, because I value being correct.
But what I am meeting in this thread is people wanting to do some language-lawyer version of trying to prove I am incorrect, without addressing the substance of what I am actually saying. I think your replies have been the only exception to this (and only just).
I realize my original posting was pretty brusque, but the article was very bad and I am very concerned with the ongoing deterioration of software quality, and the hivemind responses to articles like this on HN, I think, are part of the problem.
I know that Rust people are also concerned with software quality, and that's good. I just think most of Rust's theories about what will help, and most of the ways these are implemented semantically, are just wrong.
So if something I am saying doesn't seem to make sense, or seems "incorrect", well, maybe it's that I am just coming from a very different place in terms of what good programming looks like. The code that I write just looks way different from the code you guys write, the things I think about are way different, etc. So that probably makes communication much harder than it otherwise would be, and makes it much easier to misunderstand things.
On the technical topic being discussed here...
Using a bump allocator in the way you just did, on the stack for local code that uses the bump allocator right there, is semantically correct, but not a very useful usage pattern. In a long-running interactive application, that is being programmed according to a bulk allocation paradigm that maybe is "data oriented" or whatever the kids call it these days, there are pretty much 4 memory usage patterns that you ever care about:
(1) Data baked into the program, or that is initialized so early at startup that you don't have to worry about it. [This is 'static in Rust].
(2) Data that probably lives a long time, but not the whole lifetime of the program, and that will be deallocated capriciously at some point. (For example, an entry in a global state table).
(3) Data that lasts long enough that local code doesn't have to care about its lifetime, but that does not need to survive long-term. For example, a per-frame temporary arena, or a per-job arena that lasts the lifetime of an asynchronous task.
(4) Data that lives on the stack, and thus can't ever be used upward on the stack.
Now, the thing is that category (3) was not really acknowledged in a public way for a long time, and a lot of people still don't really think of it as its own thing. (I certainly didn't learn to think about this possibility in CS school, for example). But in cases of dynamic allocation, category (3) is strictly superior to (4) -- because it's approximately as fast, and you don't have to worry about your alloca trying to survive too long. You can whip up a temporary string and just return it from your function and nobody owns it and it's fine. So having your program really lean on (3) in a big way is very useful. This is what I was saying before about pretending to have a garbage collector, but you don't pay for it.
So if you are doing a fast data-oriented program (I don't really use the term "data-oriented" but I will use it here just for shorthand), dynamic allocations are going to be categories 1-3, and (4) is just for like your plain vanilla local variables on the stack, but these are so simple you just don't need to think about them much.
Insofar as I can tell, all this lifetime analysis stuff in Rust is geared toward (4). Rust wants you to be a good RAII citizen and have "resources" owned by authoritative things that drop at very specific times. (The weird thing about "resources" is that in reality this almost always means memory, and dealing with memory is very very different from dealing with something like a file descriptor, but this is genericized into "resources", which I think is generally a big mistake that many modern programming language people make).
With (1), you don't need any lifetime checking, because there is no problem. With (2), well, you can leak and whatever, but this is sort of just your problem to make sure it doesn't happen, because it is not amenable to static analysis.
With (3), you could formalize a lifetime for it, but it is just one quasi-global lifetime that you are using for lots of different data, so this by definition cannot do very much work for you. You could use it to avoid setting a global to something in category (3), and that's useful to a degree, but in reality this problem is not hard to catch without that, and it doesn't seem worth it to me in terms of the amount of friction required to address this problem.
Then there is (4), which, if you are not programming in RAII style, you don't really need checking very much??, because everything there is simple, and anyway, the vast majority of common stack violations are statically detectable even in C (the fact that C compilers did not historically do this is really dumb, and has been a source of much woe, but like, it is very easy to detect if you return a pointer to a local from a function, for example. Yes, this is not thorough in the way Rust's lifetime checking is, and this class of analysis will not catch everything Rust does, but honestly it will catch most of it, at no cost to the programmer).
So when I said "Rust does not allow you to do bulk memory allocation" what I am saying is, the way the language is intended to be used, you have most of your resources being of type (4), and it prevents you from assigning them incorrectly to other resources of type (4) but that have shorter lifetimes, or to (2) or (1).
But if almost everything in (4) is so simple you don't take pointers to it and whatnot, and if most of your resources are (3), they have the same lifetime as each other, all over the place, so there is no use checking them against each other. So now the only benefit you are getting is ensuring that you don't assign (3) to (2) or (1). But the nature of (3) is such that it is reset from a centralized place, so that it is easy, for example, to wipe the memory each frame with a known pattern to generate a crash if something is wrong, or, if you want something more like an analytical framework, to do a Boehm-style garbage collector thing on your heap (in Debug/checked builds only!) to ensure that nothing points into this space, which is a well-defined and easy thing to do because there is a specific place and time during which that space is supposed to be empty.
So to me "programming in Rust" involves living in (4) and heavily using constructors and destructors, whereas I tend to live in (3) and don't use constructors or destructors. (I do use initializers, which are the simple version of constructors where things can be assigned to constant values that do not require code execution and do not involve "resources" -- basically, could you memcpy the initial value of this struct from somewhere fixed in memory). Now the thing that is weird is that maybe "programming in Rust" has changed since last time I argued with Rust people. It seems that it used to be the sentiment that one should minimize use of unsafe, that it should just be for stuff like lockfree data structure implementation or using weird SIMD intrinsics or whatever, but people in this thread are saying, no man, you just use unsafe all over the place, you just totally go for it. And with regard to that, I can just say again what I said above, that if your main way of using memory is some unsafe stuff wrapped in a pretend safe function, then the program does not really have the memory safety that it is claiming it does, so why then be pretending to use Rust's checking facilities? And if not really using those, why use the language?
So that's what I don't get here. Rust is supposed to be all about memory safety ... isn't it? So the "spirit of Rust" is something about knowing your program is safe because the borrow checker checked it. If I am intentionally programming in a style that prevents the borrow checker from doing its job, is this not against the spirit of the language?
I'll just close this by saying that one of the main reasons to live in (3) and not do RAII is that the code is a lot faster, and a lot simpler. The reason is that RAII encourages you to conceptualize things as separately managed when they do not need to be. This seems to have been misunderstood in many of the replies above, as people thinking I am talking about particular features of Rust lifetimes or something. No, it is RAII at the conceptual level that is slow.
> We had a keynote at Rustconf about how useful generational arenas are as a technique in Rust.
If that's the one I am thinking of, I replied to it at length on YouTube back in 2018.
Thanks for expanding on that. Now I think I get what you mean.
Rust does case (3) with arenas. In your frame loop you'd create or reset an arena and then borrow it. That would limit the lifetime of its items to a single loop iteration.
The only cost is somewhat noisy syntax, with `'arena` in several places, but other than that it compiles to plain pointers with no magic. Lifetimes are merely compile-time assertions for a static analyzer. Note that in Rust, borrowing is unrelated to RAII and theoretically separate from single ownership.
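A minimal sketch of that frame-loop pattern, using the bumpalo crate mentioned elsewhere in this thread (lifetime annotations are elided and the names are made up):

use bumpalo::Bump;

fn main() {
    // One arena reused across frames: case (3) from the comment above.
    let mut frame_arena = Bump::new();

    for _frame in 0..3 {
        run_frame(&frame_arena);
        // Wipe all per-frame allocations at one central point.
        frame_arena.reset();
    }
}

fn run_frame(arena: &Bump) {
    // Temporary data nobody "owns" individually; the borrow checker only
    // ensures none of it outlives the arena.
    let name: &str = arena.alloc_str("player_042");
    let scores: &mut [i32] = arena.alloc_slice_fill_copy(16, 0);
    scores[0] = name.len() as i32;
}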
Rust's `async fn` is one case where a task can hold all of the memory it will need, as one tagged union.
As for unsafe, the point is in encapsulating it in safe abstractions.
Imagine you're implementing a high-level sandbox language. Your language is meant to be bulletproof safe, but your compiler for that language is written in C. The fact that the compiler might do unsafe things doesn't make the safe language pointless.
Rust does that, but the barrier between the safe language and the unsafe compiler is shifted a bit, so that users can add "compiler internals" themselves on the safe language side.
E.g. the `String` implementation is full of unsafe code, but once that one implementation has been verified manually to be sound, and hidden behind a safe API, nobody else using it can screw it up when e.g. concatenating strings.
I know it seems pointless if you can access unsafe at all, so why bother? But in practice it really helps, for reasons that are mostly social.
* There are clear universal rules about what is a safe API. That helps review the code, because the API contract can't be arbitrary or "just be careful not to…". It either is or isn't, and you mark it as such. Not everything can be expressed in terms of safe APIs, but enough things can.
* Unsafe parts can be left to be written by more experienced devs, or flagged for more thorough review, or fuzzed, etc. Rust's unsafe requires as much diligence as equivalent C code. The difference is that thanks to encapsulation you don't need to write the whole program with maximum care, and you know where to focus your efforts to ensure safety. You focus on designing safe abstraction once, and then can rely on the compiler upholding it everywhere else.
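A tiny sketch of that encapsulation idea (a hypothetical function, not from the thread): the unsafe operation lives in one reviewed spot, and every caller stays in checked code with a contract that is just the safe signature.

// Safe wrapper over an unchecked slice access. The invariant that makes
// the unsafe block sound is established once, right here.
fn middle_element(xs: &[u32]) -> Option<u32> {
    if xs.is_empty() {
        return None;
    }
    let mid = xs.len() / 2;
    // SAFETY: xs is non-empty, and mid = len / 2 < len, so the index is in bounds.
    Some(unsafe { *xs.get_unchecked(mid) })
}

fn main() {
    // Callers can't misuse this; they never see the unsafe part.
    assert_eq!(middle_element(&[1, 2, 3]), Some(2));
    assert_eq!(middle_element(&[]), None);
}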
> So if something I am saying doesn't seem to make sense, or seems "incorrect", well, maybe it's that I am just coming from a very different place in terms of what good programming looks like.
I do think this is probably true, and I know you do care about this! The thing is...
> The code that I write just looks way different from the code you guys write, the things I think about are way different, etc.
This is also probably true! The issue comes when you start describing how Rust code must be or work. There's nothing bad about having different ways of doing things! It's just that you say things like "since then you are putting unsafe everywhere in the program," when that's empirically not what happens in Rust code.
> Using a bump allocator in the way you just did, on the stack for local code that uses the bump allocator right there, is semantically correct, but not a very useful usage pattern.
Yes. I thought going to the simplest possible thing would be best to illustrate the concept, but you're absolutely right that there is a rich wealth of options here.
Rust handles all four of these cases, in fairly straightforward ways. I also agree that 3 isn't often talked about as much as it should be in the broader programming world. I also have had this hunch that 3 and 4 are connected, given that the stack sometimes feels like an arena for just the function and its children in the call graph, and that it has some connection to the young generation in garbage collectors as well, but this is pretty offtopic so I'll leave it at that :)
Rust doesn't care just about 4 though! Lifetimes handle 3 as well; they ensure that the pointers don't last longer than the arena lives. That's it.
I don't have time to dig into this more, but I do appreciate you elaborating a bit here. It is very helpful to get closer to understanding what it is you're talking about, exactly. I think I see this differently than you, but I don't have a good quick handle on explaining exactly why. Some food for thought though. Thanks.
(Oh, and it is the one you're thinking of; I forgot that you had commented on that. My point was not to argue that the specifics were good, or that your response was good or bad, just that different strategies for handling memory aren't unusual in the Rust world.)
> they ensure that the pointers don't last longer than the arena lives. That's it.
Sure, but my point is, when most things have lifetimes tied to the same arena, this becomes almost a no-op. Both in the sense that you are not really checking much (as Ayn Rand said, 'a is 'a), and that you're paying a lot in terms of typing stuff into the program, and waiting around for the compiler to check all these things that are the same, to no useful end. Refactoring a program so that most things' lifetimes are the same does not feel to me like it's in the spirit of Rust, because then why have all these complicated systems, but maybe you feel that it is.
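(A sketch of the situation I mean, again assuming `bumpalo` and with a made-up `Node` type: every reference is tied to the same `'a`, so the annotations mostly just assert that `'a` outlives `'a`, yet they still have to be spelled out everywhere.)

```rust
use bumpalo::Bump;

// All of these borrow from the same arena, so every lifetime is the same 'a.
struct Node<'a> {
    value: i32,
    next: Option<&'a Node<'a>>,
}

fn build<'a>(arena: &'a Bump) -> &'a Node<'a> {
    let first = arena.alloc(Node { value: 1, next: None });
    arena.alloc(Node { value: 2, next: Some(first) })
}

fn main() {
    let arena = Bump::new();
    let head = build(&arena);
    println!("{} -> {:?}", head.value, head.next.map(|n| n.value));
}
```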
There is a bit of a different story when you are heavily using threads, because you want those threads to have allocators that are totally decoupled from each other (because otherwise waiting on the allocator becomes a huge source of inefficiency). So then there are more lifetimes. But here I am not convinced about the Rust story either, because here too I think there are simpler things to do that give you 95% of the benefit and are much lower-friction.
(And I will admit here that "Rust doesn't allow you to X", as I said originally, is not an accurate statement objectively. Attempting to rephrase that objection to be better, I would say, by the time you are doing all this stuff, you are outside the gamut that Rust was designed for, so by doing that program in Rust you are taking a lot of friction, but not getting the benefit given to someone who stays inside that gamut, so, it seems like a bad idea.)
> Refactoring a program so that most things' lifetimes are the same does not feel to me like it's in the spirit of Rust, because then why have all these complicated systems, but maybe you feel that it is.
I think a common sentiment among Rust programmers would instead phrase this as, "the complicated system we designed to find bugs keeps yelling at us when we have lots of crazy lifetimes flying around, so presumably designs that avoid them might be better."
In this sense, even for someone who doesn't feel the borrow checker is worth it in their compiler, this can just be taken as a general justification for designs in any language that have simpler lifetime patterns. If they're easier to prove correct with static analysis, maybe they're easier to keep correct by hand.
When I saw this announcement I was hoping that I could finally buy a laptop with a good trackpad, with buttons, and a good keyboard again. But looking at the announcement, it seems like trackpad and keyboard quality are far from anyone's mind, and it just looks like the laptop is copying Apple stylistically like everyone else, which means it is going to be kind-of unusable and I won't want to use it. (Especially when running Windows, those kinds of giant Apple-esque trackpads are death, because you'll keep accidentally moving files into places you didn't mean to, and then of course there's the general unresponsiveness once you add all the PalmCheck and turn-off-trackpad-for-n-seconds-after-typing junk.)
I like the idea of a laptop built for quality, but for me the #1 determinant of quality is my area of constant physical contact with the laptop, the keyboard and trackpad. And sadly, those look like afterthoughts here.
(For context -- I have bought and heavily used an average of more than one laptop per year, every year, since 1998, and have been dismayed to watch the quality trend constantly downward over that time).
Keyboard and touchpad quality were high priorities for us. We built in 1.5mm key travel, which is unusually high for a <16mm thick laptop. The touchpad surface feels great and performs well. I hear you on the touchpad buttons though. That is something we've done a little exploration on. The touchpad is an end-user replaceable module, but we can't commit to a three button version materializing just yet.
See, I disagree, I use home/end/pageup/pagedown all the time, and having them separate in an awkward spot is annoying. I prefer having them overlaid on the arrow keys, with fn to access them. IMHO that is the one area where laptop keyboards can be superior to full-size keyboards.
We probably need laptops with configurable keyboard layouts then, so that you can choose a variant when purchasing it and then replace it with a different one if you find out you don't like it.
(And then, let's not forget that keyboard layouts are somewhat configurable in software. It's easy to bind pageup to mod-uparrow and pagedown to mod-downarrow. But it can't be done if the physical keys are missing, so physical keyboard layouts with more keys are preferable to those with fewer. Unfortunately, fn-combos are usually hardwired in the Embedded Controller and can't be changed easily in software.)
By configurable, I mainly meant the physical layout of keys, configurable at purchase time. So that one can get a laptop with a trendy 6-row chiclet, or a proper 7-row classic ThinkPad keyboard.
But yeah, being able to override the key mapping in EC is an important feature as well, and I'm glad someone's doing it! I really hope for the future where all these pieces come together: a laptop with a good physical keyboard (configurable/swappable, so that everyone can get their meaning of "good"), with configurable EC, replaceable components, etc. That would be a dream.
This is what makes newer ThinkPad keyboard layouts great at least in terms of arrow keys - the useless icon keys from the GP's posted image are now PgUp/PgDown, meaning you can access at least two out of the four without annoying two-handed operation. Home/End can still be bound to arrow keys plus modifier, but even there it's nice to have a dedicated key available.
Basically, full-size arrow keys are what really makes the difference, and at that point you could get the best of both worlds anyway.
(Let's not talk about other recent/misguided ThinkPad keyboard developments though.)
This is one of the reasons I actually really like the Surface Book keyboard [0]. It has home/end/pageup/pagedown as primary keys in the fn row, where they all fall to hand just by moving your right hand up from the home row.
Key size and placement is generally pretty perfect for the size imo (backspace, shift, enter, etc aren't cramped), and key travel and feel is up there with the best.
The up/down arrow keys and lack of brightness control in the fn row are the only real problems.
Full-sized (full-height!) F keys, full-sized inverted-T arrow keys, dedicated Ins/Del/Home/End/PgUp/PgDown keys, and the Menu key (as on the Thinkpad) would be ideal.
At this point I'd prefer few keys, just 18 per side for my fingers and a row of 4-6 for each of my thumbs. I'd handle numbers, function keys, volume, and whatever else you mentioned with layers.
Those keys already have their own layers in a lot of software. Having to press yet another additional modifier key destroys usability and muscle memory.
For me it’s the opposite: I get RSI from having to stretch my hands to press 2-3 modifiers simultaneously, whereas moving my hands a bit is no problem.
If there are different preferences for arrangement of keyboard layout, then my vote goes to: it would be amazing if there were a laptop where such things could be customized.
Then the people who want niche 40% or 36-key layouts can go with that, and those who prefer more keys can go with that.
IMO, the row-stagger is an unergonomic, archaic skeuomorphism. This is also a niche opinion.
Unfortunately, I'm sure there are practical reasons why "modular keyboard" can't happen (not enough market interest, strength of the laptop hull suffers if the keyboard is a separate module, laptop couldn't be as thin, etc.).
While I agree with you on the F keys and home block, I also understand that's something people might not prefer.
What I don't understand are the arrows. I've been using an MB Pro daily for over 3 years now and I still regularly miss the up/down arrows. I would never buy a device that merges them into the footprint of a single key, as the MB Pro does, and as this one does too.
But am I the only person considering a split ergonomic layout with thumb clusters, to give your pinkies some rest? The touchpad could live between the keyboard halves, and never be touched by mistake.
Fn position is obviously configurable. I have ctrl/fn swapped, but plenty of people don't, and that's fine. (And then plenty of people have ctrl on capslock, which is also fine. I couldn't get used to it.)
I think if you could buck the "flat thin keys" trend you would probably develop lots of customers for life.
Thin and flat has nice visual appeal, but I think you should approach it from a tactile direction.
My ideal keyboard would have a little extra throw, and the keys would be sculpted to match the curve of your fingertips, for comfort and to help you center on the keys.
I think the thinkpad keyboards were favorites for a reason.
It's strange to see a complaint about the Apple trackpad, because whenever I use a non-Apple laptop, I despair at the trackpad. The current design is too large, but the pre-USB-C models had a perfect size and UX. I haven't ever experienced an equal.
The Apple trackpad seems to be really polarising - I often see this on HN: fans surprised anyone would dislike it, and opponents surprised anyone would like it!
Personally, I'm in the latter camp. I have a 13" MBP, and find the buttons need way too much pressure, even with the sensitivity ramped up. There's also something I can't quantify... there's a feeling of it being laggy, and somehow "unpleasant" to drag my finger across. I prefer just about every other trackpad I've ever used, even those in dirt cheap netbooks.
Not sure if we’re talking about the same thing with that plural, but the first thing I change on a new Mac is enabling tap to click. It works great and you can avoid the annoyance of the audible click.
The second thing is to enable three-finger drag, which moved into the Accessibility settings about 5 years ago.
I suspect it has something to do with their typing habits and/or some physical issue.
Personally, the newest MacBooks became a problem for me despite their history of amazing palm rejection. I have sort of sweaty hands, and when they increased the trackpad size, that combined with how I type (palm resting on the chassis, not raised) causes a lot of jitter (I say a lot, but it's actually super rare, just enough to train me away from it), and I've ended up using an external mouse exclusively. But in the past year, uh, I haven't been mobile, so that's just a nonissue :)
not intending to start an apple-vs-msft flamewar, but this has been a solved problem on the mac since forever. is the experience that bad on windows laptops that you don't want a big trackpad? genuine question.
It is for Synaptics drivers (some manufacturers like Dell, Razer, HP used to default to those) but not for precision touchpad drivers (what's used in the Surfaces).
yeah. you definitely want palm detection off or it'll miss a good deal of swipes (if you mix typing and mousing). with palm detection off, you need the touchpad to be slightly offset to the left and small enough that it fits between your hands at rest.
In my experience trackpad and touch support on Windows has improved immensely since the introduction of the Surface. I recall the experience you're describing but associate it with the 2010-2015 era.
Okay, but ... what is being gained here?
The ability to package programs so they run on a computer without you worrying about it?
That is an ability we have had since the 1960s, which people gave up for "reasons" that are questionable at best.
Someday people will realize they don't have to make their computers orders of magnitude slower just in order to get back capabilities that used to come automatically.
I have heard this story multiple times. But if it's true, then why are roads pretty much the same, as concerns walking vs driving, wherever you go in the world?
If things would have naturally been different if only it weren't for some lobbyists, how come things aren't that way at least somewhere?
Well roads do vary in their physical form and the social norms that govern their use in different parts of the world based on a lot of factors.
But I would say roads in developed areas are pretty much the same everywhere because cars are only made one way. The social consensus I mentioned was moving towards requiring a governor on cars that would shut their engines off if they exceeded 25 mph. If that had succeeded and endured, roads would look very different. But the economic effects that cars produce in accelerating the free flow of commodities, labor, and capital necessitated a rupture with the established order so that automobiles could continue to develop to be bigger, faster, and more powerful.
The similarities in road form and norms around the world represent an equilibrium between the current level of car technology and the amount of injury and death resulting from their use that society can metabolize.
We went through all this BS, then signed up with them on a year contract.
Then we were shocked to find that it would take them two weeks to set up our first machine.
I noped out of it at that point and just paid the penalty to get the hell out of there. A competing service, that we already used, sets up machines in 1-2 days.
When stuff hits the fan and you really need things to happen, two weeks is just a crazy response time. That's like infinity in internet time.
Someone told me something similar. It took 2 weeks to provision servers (and we mean VMs, not just physical kit), and there was usually something wrong with them that took more time to fix.
This was about 3 years ago. His company moved everything to AWS.
If you rotate a square, its apparent width will oscillate. A circle will remain fixed (but presumably it would have surface detail to clue you in that it's rotating.)
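(As a quick numeric check of that claim, under an assumed setup of a unit square and a unit-diameter circle, measuring the width of the shadow each casts on a fixed axis as it rotates:)

```rust
fn main() {
    for deg in (0..=90).step_by(15) {
        let theta = (deg as f64).to_radians();
        // Unit square: projected width is |cos θ| + |sin θ|, oscillating between
        // 1.0 (axis-aligned) and about 1.414 (rotated 45 degrees).
        let square_width = theta.cos().abs() + theta.sin().abs();
        // Unit-diameter circle: the projection is always just the diameter.
        let circle_width = 1.0_f64;
        println!("{:>3} deg  square {:.3}  circle {:.3}", deg, square_width, circle_width);
    }
}
```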
Assuming you have "1d depth vision", i.e. computing the distance to the eye using two eyes set some distance apart, you'd be able to tell them apart rather easily, in addition to the shading and movement differences.
What you should instead be wondering is why so much stuff today fails to display even this level of interactivity.