
We don't need to be communicative at all times. But don't romanticize it either; we did what you say because we had to, whether we wanted to or not. Not having any chance of correcting course or being more flexible is not a cool thing of the past; it's a limitation of how things were. That you find comfort in it is a different thing from it being better or worse... it just was.

As if it would make sense that spending 2 hrs relaxing on the beach or tending your orchids would cost you $400. Money not made is not money spent. If you were doing a hobby project for learning, you were not going to be working during that time anyway, so your hourly rate doesn't matter.

What a messy and, frankly, absurd situation to be left in. To fork a project in order to provide a tool through PyPI, only to then stop updating it on a broken version. That's more a disservice than a service to the community... If you're going to stay stuck, better to drop the broken release and stay stuck on the previous working one.

A nitpick to your nitpick: they said "memory location". And yes, a pointer always points to a memory location. Notwithstanding that each particular region of memory locations could be mapped either to real physical memory or to any other assortment of hardware.

No. Neither in the language (NULL exists) nor necessarily on real CPUs.

NULL exists on real CPUs. Maybe you meant nullptr, which is a very different thing; don't confuse the two.

I don't agree. Null is an artefact of the type system, and the type system evaporates at runtime. Even C's NULL macro just expands to zero, which is defined in the type system as the null pointer.

Address zero exists in the CPU, but that's not the null pointer. It's an embarrassment if you happen to need to talk about address zero in a language where that has the same spelling as the null pointer, because you can't say what you meant.


Null doesn't expand to zero on some weird systems. These days zero is special on most hardware, so having zero and nullptr be the same is important, even though on some of them zero is also a legal address.

Historically C's null pointer literal, provided as the pre-processor constant NULL, is the integer literal 0 (optionally cast to a void pointer in newer standards) even though the hardware representation may not be the zero address.

It's OK that you didn't know this if you mostly write C++, and somewhat OK even if you mostly write C but stick to pre-defined stuff like that NULL constant. But if you write important tools in or for C, this was a pretty important gap in your understanding.

In C23 the committee gave C the C++ nullptr constant and the associated nullptr_t type, and basically rewrote history to make this entire mess (in reality the fault of C++) now "because it's for compatibility with C". This is a pretty routine outcome; WG14 members who are sick of this tend to just walk away from the committee, because fighting it is largely futile and they could just retire and write in C89 or even K&R C without thinking about Bjarne at all.


You can point to a register which is certainly not memory.

Whenever you have this kind of impression about some development, here are my 2 cents: just think "I'm not the target audience". And that's fine.

The difference between 2ms and 0.2ms might sound unneeded, or even silly, to you. But somebody, somewhere, is doing stream processing of TB-sized JSON objects, and they will care. This news is for them.


I remember when I was coming up on the command line and I'd browse the forums at unix.com. Someone would ask how to do a thing and CFAJohnson would come in with a far less readable solution that was more performant (probably replacing calls to external tools with Bash internals, but I didn't know enough then to speak intelligently about it now).

People would say, "Why use this when it's harder to read and only saves N ms?" He'd reply that you'd care about those ms when you had to read a database from 500 remote servers (I'm paraphrasing. He probably had a much better example.)

Turns out, he wrote a book that I later purchased. It appears to have been taken over by a different author, but the first release was all him and I bought it immediately when I recognized the name / unix.com handle. Though it was over my head when I first bought it, I later learned enough to love it. I hope he's on HN and knows that someone loved his posts / book.

https://www.amazon.com/Pro-Bash-Programming-Scripting-Expert...


Wow, that takes me back. I used to lurk on unix.com when I was starting out with bash and perl, and would see CFAJohnson's terse one-liners all the time. I enjoyed trying my own approaches to compare performance, conciseness and readability, mainly for learning. Some of the awk stuff was quite illuminating in my understanding of how powerful awk could be. I remember trying different approaches to process large files, at first with awk and then with Perl. Then we discovered Oracle's external tables, which turned out to be the clear winner. We have a lot more options now, with fantastic performance.

Why are half the forum posts on there all about AI? Yikes

Also, as someone who looks at latency charts too much: what happens is a request does a lot in series, and any little ms you can knock off adds up. You save 10ms by saving 10 x 1ms. And if you are a proxy-ish service, then you are 10ms in a chain that might be taking 200 or 300ms. It's like saving money: you have to cut lots of small expenses to make an impact (unless you move, etc., but even once you've done that, it's numerous small things that add up).

Also, performance improvements on heavily used systems unlock:

Cost savings

Stability

Higher reliability

Higher throughput

Fewer incidents

Lower scale-out requirements.


Wait, what? I don't get why performance improvement implies better reliability and fewer incidents.

For example, doing a dangerous thing might be faster (no bounds checks, weaker consistency guarantees, etc.), but that clearly tends to be a reliability regression.


First, if a performance optimization is a reliability regression, it was done wrong. A bounds check is removed because something somewhere else is supposed to already guarantee it won't be violated, not just in a vacuum. If the guarantee stands, removing the extra check makes your program faster and there is no reliability regression whatsoever.

And how does performance improve reliability? Well, a more performant service is harder to overwhelm with a flood of requests.


"Removing an extra check", so there is a check, so the check is not removed?

It does not need to be an explicit check (i.e. a condition checking that your index is not out of bounds). You may structure your code in such a way that it becomes a mathematical impossibility to exceed the bounds. For a dumb trivial example, you have an array of 500 bytes and are accessing it with an 8-bit unsigned index - there's no explicit bounds check, but you can never exceed its bounds, because the index may only be 0-255.

Of course this is a very artificial and almost nonsensical example, but that is how you optimize bounds checks away - you just make it impossible for the bounds to be exceeded through means other than explicitly checking.


Some come directly, as other commenters touch on: you're less likely to saturate the CPU quickly, and the lower cost to run means you can have more headroom.

But also the stuff you tend to do to make it fast makes it more reliable.

Local caches reduce network traffic. Memory is more reliable than network I/O, so it improves reliability.

Reducing lookup calls to other services (e.g. by supplying context earlier in the dependency chain) makes it faster and more reliable.

Your code will probably branch less and become more predictable too.

And often the code is simpler (though sometimes not, when a performance hack is used).


Fewer OOMs, fewer timeouts, fewer noisy-neighbor problems affecting other apps.

But even in this example, the 2ms vs 0.2ms is irrelevant; it's whatever the timings are for TB-sized objects.

So why not compare that case directly? We'd also want to see the performance of the assumed overheads, i.e. how it scales.


Which is fine, but the vast majority of the things that get presented aren't bothering to benchmark against my use case (for a whole lot of "me"s). They come from someone scratching an itch and solving it for a target audience of one, and then extrapolating and bolting on some benchmarks. And at the sizes you're talking about, how many tooling authors have the computing power on hand to test that?

> "somebody, somewhere, is doing stream processing of TB-sized JSON objects"

That's crazy to think about. My JSON files can be measured in bytes. :-D


Well, obviously that would happen mostly only at the biggest business scales, or maybe in academic research. One example from Nvidia showcases Apache Spark with GPU acceleration to process "tens of terabytes of JSON data":

https://developer.nvidia.com/blog/accelerating-json-processi...


All files can be measured in bytes. :)

You, sir or ma'am, are a first class smarty pants.

Who is the target audience? I truly wonder who will process TB-sized data using jq. Either it's in a database already, in which case you're using the database to process the data, or you're putting it in a database.

Either way, I really doubt there will ever be a significant number of people who'd choose jq for that.


There was a thread yesterday where a company rewrote a similar JSON processing library in Go because they were spending $100,000s on serving costs using it to filter vast amounts of data: https://news.ycombinator.com/item?id=47536712

That's a really great perspective. Thanks for sharing!

For that you need a very centralized VCS, not a decentralized one. Perforce allows you to lock a file so nobody else can edit it. If they implemented more fine-grained locking within files, or added warnings for other users trying to check them out for edits, they'd be just where you want a VCS to be.

How, or better yet, why would Git warn you about a potential conflict beforehand, when the use case is that everyone has a local clone of the repo and might be driving it in different directions? You are just supposed to pull commits from someone's local branch or push towards one, hence the wording. The fact that it makes sense to cooperate and work in the same direction, to avoid friction and pain, is just a natural accident that grows from the humans using it; it is not something ingrained in the design of the tool.

We're collectively just using Git for the silliest and simplest subset of its possibilities (a VCS with a central source of truth), while bearing the burden of complexity that comes with a tool designed for distributed workflows.


It's fully caused by management mindset. There are companies that are investing hard in the AI trend, but the message is clear: all code pushed is your ultimate responsibility, and if it lacks quality or causes problems, you're on the hook for it; using AI hasn't changed that.

So if Spotify had a modicum of AI usage hygiene, plus accountability expectations for code quality, this would still mean a bad performance review for whoever introduced this issue (person or team; poor results and mistakes never come from a single source).


Spotify has no performance review process or any sort of performance management. I never heard of anyone getting PIPed there in the many years I was there.


Well, thanks. That small-web site just taught me, in a very concise way, a thing or two about bicycle braking technique!


I would enjoy this so much. I always keep electronic parts around the home, "just in case". It feels so profoundly satisfying when you finally get to put some switch or random piece to use for a repair, after having kept it stored in a drawer for 13 years (and through moving house 3 times!)


As a Kodi user, I must say it is very good at its core, and very bad on the addons side (which arguably is the part for which it mostly gets recommended).

It forces its limited model of text-based folders-with-files onto everything. Also it's all Python, and I don't know if it's just me, but I always find quality issues sooner in Python projects than in anything else. Error handling is usually very lacking, and it's so frequent to see error pop-ups appearing here and there. You enter a menu and the first entry selected is "..", which goes back to the previous menu (poor UX). All in all, Kodi for me has always been a player with good tech (it all basically works: surround sound, codecs, integration with hardware, etc.), exposed through a very amateurish UI experience.

