Hacker News | aaravchen's comments

It's worth noting that, at least comparing the EU to the US, most of the better-quality EVs are not available. Almost all of the longer-range and more efficient EVs are Chinese-made and are not sold in the US. My understanding is that the vast majority of EVs in the EU are these Chinese-made models because of their hugely superior performance.

I don't know if Canada has the same restrictions on EVs, and I definitely can't speak to the longevity or quality of the Chinese-made EVs, only to their charging speed and range. BYD, for example, is one of these Chinese-made brands prohibited in the US yet leading the industry in performance.


I try to keep a list somewhere, but I only pick one or two that are likely to be most useful and try to remember them until they're such a normal part of my workflow I don't have to think much about them anymore. Then I go back and pick a couple new ones to add.

The hard part is when you've reached the point where all your common use cases are covered and you're left with only uncommon cases. Then it's extra hard to remember you had a trick for dealing with the uncommon case.


I struggle with this all the time. It's easy to start with some shell commands, and nothing beats the direct, run-it-in-the-terminal nature of a shell script. But at some point you get to where you need to do things shell isn't good at, and the cost of maintaining that part can outweigh the ease of direct terminal access. Especially if you then throw in the need to make your solution runnable on a lot of systems, you now have a problem with compatibility and possibly environment setup. On top of that, if you're really experienced and good with shell, the bar for "this is difficult" in shell gets further and further away, even if it's hard for someone else to maintain.

I've been investigating alternative bring-your-own-tool options for this situation at $JOB, mostly making use of shebangs to trigger environment setup from self-contained utilities packaged with the scripts, and there are some promising options. Things like `uv` to set up a self-contained one-off Python venv automatically so you can write in Python instead, possibly with one of the packages that helps with the many subcommand footguns, or `xonsh` if you want something more shell-script-leaning but with Python power. Or you can go with a language-agnostic solution and use shebang-triggered `nix-portable` to construct a whole isolated custom environment automatically from a flake.nix and run whatever tools and mishmash of languages you want.
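As a sketch of the `uv` shebang trick (assuming `uv` is on the target's PATH; the script body here is hypothetical), a script can carry its own Python environment description via PEP 723 inline metadata:

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
# uv reads the block above (PEP 723 inline script metadata), builds a
# throwaway venv matching it, and runs the script inside that venv. With
# real dependencies listed, no manual `pip install` step is ever needed.
import subprocess
import sys

# The kind of glue that outgrows shell: structured subprocess handling,
# where check=True raises on a non-zero exit instead of silently moving on.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from the venv')"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```

Run directly (`./script.py`) the shebang hands the file to `uv`; run as `python script.py` the metadata block is just comments, so the same file works both ways.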


Thank you.

For some reason shell and Bash are very alluring to me, but Python isn't. I guess it's the combination of having to orchestrate subprocesses and having to deal with all the typical dynamic-language bugs that I introduce in the process of writing in a language I usually do not use.

So then I use some static language. Which has even more of an activation cost.

I should just try a better shell than the vaguely POSIX-compliant ones.


You can install whatever shell you want on your own system, and there are tons of better alternatives out there. Where it's actually sticky is the lowest-common-denominator case: when you need to throw together a quick series of commands to run on a semi-arbitrary system. There you have VERY few options, because you have to use whatever is already on the system. Sometimes you're lucky enough that you can safely assume bash is present, but in many cases you have to assume only sh. Unfortunately, both are difficult, footgun-laden languages, so it greatly helps to use them daily to stay familiar, and that's where most people end up peaking.

Additionally, you may need to jump into that semi-arbitrary system and do things manually. You're limited by the minimal tools already there, so you'd better be familiar with them in advance. This is the very reason I've learned some basic Vi commands as well, even though I would never consider using it otherwise.

In my experience PowerShell is actually a terrible replacement, because you have to learn not only every command but also the structured return format and accepted input formats of every command before you can do anything. The design of almost all shells specifically avoids that by reducing everything to a common, universal input/output format. PowerShell could certainly be useful as an intermediate choice between a full-blown language and shell scripting, if you need more power but not enough to justify the overhead of a real language, but that would also require it to already be ubiquitous (which it's not).


One of the peer comments alludes to this and mentions that it's bash only. Doesn't work in zsh by default apparently.

It depends, too, on whether you use shellcheck as a primary tool or not. I prefer to have no shellcheck errors/warnings by default, so that when they do appear it's very obvious. But having a consistent opening block across a bunch of scripts is often more important, so setting a shellcheck disable on the one variable that may or may not be used is the better solution.
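For instance (a hypothetical opening block; `VERSION` stands in for whichever variable only some of the scripts actually use), the directive applies to just the following line:

```shell
#!/usr/bin/env bash
# Suppress only SC2034 ("variable appears unused") and only for this one
# assignment; every other check stays active for the rest of the script.
# shellcheck disable=SC2034
VERSION="1.0.0"
SCRIPT_NAME="$(basename "$0")"
echo "running $SCRIPT_NAME"
```

Scoping the disable to one line keeps the "no warnings by default" property for everything else in the file.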

If you're using `set -e` you almost always want a trap on ERR to print where your program suddenly exited from. Otherwise there's no way to tell.

Also worth mentioning, but not including permanently, is `set -x` which will print every command to stderr with a `+ ` prefix before running it.

For `DIR` I usually pick a name less likely to conflict with one randomly selected in my script, like `SCRIPT_DIR`.

You also need to think about symlinks and how you want to handle them. Your current `DIR` resolution gives you the directory name without any symlinks resolved. If your script got symlinked to by someone, this may not be the directory of the actual script anymore, but the directory containing the symlink. If that's what you want it's fine, but a lot of the time you want the script to see its resolved folder so it can call sibling scripts in the real folder instead.
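A sketch of both variants (note that `readlink -f` is GNU coreutils behavior; on older macOS you'd need `greadlink` or a manual resolution loop):

```shell
#!/usr/bin/env bash
# Directory containing whatever was invoked -- if the script was run via
# a symlink, this is the symlink's directory, not the script's.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Directory of the real file, with every symlink resolved, so sibling
# scripts next to the actual script can still be found.
REAL_DIR="$(cd "$(dirname "$(readlink -f "${BASH_SOURCE[0]}")")" && pwd)"

echo "invoked from: $SCRIPT_DIR"
echo "resolved to:  $REAL_DIR"
```

The two are identical unless a symlink is involved, which is exactly why the difference is easy to miss until someone symlinks your script.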


For me I find I never have any reason to jump between directories based on frecency. Like ever. If I'm jumping between a few directories they're almost always named very similarly and that causes problems with the frecency ordering, especially when they're deep paths that get truncated.

If I really happen to be jumping back and forth a lot, I use `pushd ...`, and then I can reference my other directory with `~1/` and toggle between the two with a bare `pushd`.
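A hypothetical session, using `/tmp` and `/usr` as the two directories:

```shell
#!/usr/bin/env bash
cd /tmp
pushd /usr > /dev/null    # stack is now: /usr /tmp, and we're in /usr
other=~1                  # ~1 expands to the next entry on the stack
echo "other dir: $other"  # /tmp
pushd > /dev/null         # bare pushd swaps the top two entries and cd's
echo "now back in $PWD"   # /tmp
```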


`set -e` is almost never what you want in your scripts. It means "silently exit immediately if there is any unhandled non-zero exit code". The thing that trips most people up is subshells: when you're trying to capture output into a variable and the subshell returns a non-zero exit code, your entire script suddenly exits.

`set -e` really only makes sense if you set up a trap that also prints what line you exited on, and even then only for debugging (e.g. `trap 'echo "ERROR: Line $LINENO"' ERR`).
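A runnable sketch of the combination (the failing script is run via `bash -c` purely so its output can be captured here; the reported line number is within that child script):

```shell
#!/usr/bin/env bash
# Under `set -e` alone the child would just vanish; the ERR trap makes it
# announce where it died before exiting.
output="$(bash -c '
set -e
trap '\''echo "ERROR: Line $LINENO" >&2'\'' ERR
true
false          # non-zero exit: ERR trap fires, then set -e exits
echo "never reached"
' 2>&1 || true)"
printf '%s\n' "$output"
```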

Conversely, `set -o pipefail` should be set in every script. It makes pipelines do what you expect: the whole pipeline reports an error (the exit code of the rightmost failing command) if any command inside it fails. The default behavior, for historical reasons, is still to ignore the exit codes of everything except the last command in a pipeline.
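A minimal demonstration of the difference:

```shell
#!/usr/bin/env bash
set +o pipefail
false | cat
default_status=$?         # 0 -- only cat, the last command, counts
set -o pipefail
false | cat
pipefail_status=$?        # 1 -- false's failure propagates to the pipeline
echo "default=$default_status pipefail=$pipefail_status"
```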


I did the same, starting with Ergo Mode in Emacs many years ago and ending up today with a programmable split keyboard with those keys as arrows on a layer. For when I'm on a laptop without that keyboard, I have a mishmash of solutions that bind to Alt+{ijkl}.
