
Can someone explain what this does differently from the status quo under the hood?


> When using npm, all dependencies for a project are installed in a single node_modules directory. This means that if two packages depend on different versions of the same package, the one that is installed last will be used, which can lead to compatibility issues. This is known as "dependency hell" and can make it difficult to manage the dependencies of a project.

> pnpm's symlink feature addresses this problem by allowing different versions of the same package to be installed side-by-side, and linking them to their dependents through symlinks. This helps to ensure that the correct version of a package is used for each dependent, reducing the chances of compatibility issues and making it easier to manage dependencies.

From the last issue linked in the PR (https://github.com/oven-sh/bun/issues/1760)
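
Concretely, pnpm's on-disk layout looks roughly like this (a simplified sketch; the package names are made up). Each package's files live once under .pnpm, and every dependent reaches its own required version through a symlink:

    node_modules/
      .pnpm/
        foo@1.0.0/node_modules/foo/        <- real files (hard links to the global store)
        foo@2.0.0/node_modules/foo/
        bar@1.0.0/node_modules/
          bar/
          foo -> ../../foo@1.0.0/node_modules/foo    <- bar sees only the foo version it asked for
      bar -> .pnpm/bar@1.0.0/node_modules/bar        <- top-level symlinks for direct deps
      foo -> .pnpm/foo@2.0.0/node_modules/foo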


Apparently the source code has been lost, so probably not


That is most depressing news to hear


We use https://github.com/Yelp/pgctl heavily, which uses s6!


Has any streaming service trained a model to actually understand the music itself to work out what other songs would be of a similar vibe/genre?

My favorite band (Vulfpeck, and more recently Jack's solo stuff) often branches out into different genres, and it's a bit of whiplash when it goes to another song just because the artists are similar / appear together in other places.


Not a traditional streaming service, but Plex offers sonic analysis: https://support.plex.tv/articles/sonic-analysis-music/

> Plex Media Server uses a sophisticated neural network to analyze each track in the music library, cataloging a wide variety of characteristics of the track. Think of it as things like female vs male, vocals vs not, sad, happy, rock, rap, etc. All these various characteristics constitute a “Musical Universe” and the server is determining where that particular track exists within it.

> For the math-savvy, the Musical Universe consists of points in N-dimensional space. But what’s important is that this allows us to see how “close” anything in your library is from anything else, where distance is based on a large number of sonic elements in the audio.

I haven't tried it so can't speak to its effectiveness.
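
Conceptually, though, the "distance" part is just nearest-neighbour search over per-track feature vectors. A minimal Python sketch of the idea (not Plex's actual implementation; the embeddings here are made up):

    import numpy as np

    # One embedding per track, as produced by some audio-analysis model
    library = {
        "dean_town":   np.array([0.9, 0.2, 0.7]),
        "back_pocket": np.array([0.8, 0.3, 0.6]),
        "sad_ballad":  np.array([0.1, 0.9, 0.2]),
    }

    def most_similar(query, n=2):
        # Rank the other tracks by Euclidean distance to the query track
        q = library[query]
        dists = {name: float(np.linalg.norm(vec - q))
                 for name, vec in library.items() if name != query}
        return sorted(dists.items(), key=lambda kv: kv[1])[:n]

    print(most_similar("dean_town"))  # nearest neighbours come out first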


There is a whole field of music classification. If anybody (other than copyright holders) were interested in using it, I'd expect it to be the likes of Spotify.

The problem with classification is that what makes a genre is not uniform. Some genres are defined by the way people sing, others by the singer's language, others purely by the instrumentation or rhythms used, and yet others mostly by the sounds and notes used, etc.

But there are things like tempograms, tonnetz (tonal centroid features), chromagrams, spectral flatness/contrast/rolloff, Laplacian segmentation, etc. And I guess feeding these into some neural net might give you interesting results.
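
Most of those are a single librosa call away, in fact; a rough sketch of turning one track into a fixed-length feature vector (the filename is a placeholder, and librosa is just one common choice):

    import numpy as np
    import librosa  # pip install librosa

    y, sr = librosa.load("track.mp3")

    features = np.concatenate([
        librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1),        # chromagram
        librosa.feature.tonnetz(y=y, sr=sr).mean(axis=1),           # tonal centroids
        librosa.feature.tempogram(y=y, sr=sr).mean(axis=1),         # local tempo
        librosa.feature.spectral_contrast(y=y, sr=sr).mean(axis=1),
        librosa.feature.spectral_flatness(y=y).mean(axis=1),
        librosa.feature.spectral_rolloff(y=y, sr=sr).mean(axis=1),
    ])
    # One fixed-length vector per track: a reasonable input for a
    # nearest-neighbour index or a small neural net.
    print(features.shape)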


What someone likes also doesn’t correlate solely with genres. While I like certain genres more than others, I only really like a small fraction of pieces in each genre, so the statistical correlation between what I like and genre affiliation is probably not very high.


ChatGPT is pretty good at this; you can try it yourself. I created a playlist generator for YouTube a while ago. It is powered by GPT-3.5 Turbo and can create playlists based on text descriptions: https://playlists.at/youtube/generate/
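
For a sense of how little machinery this needs, the core can be a single chat-completion call; a minimal sketch (not the linked site's actual code, and the prompt is just illustrative):

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def suggest_playlist(description, n=10):
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{
                "role": "user",
                "content": f"Suggest {n} songs for a playlist described as: "
                           f"{description}. One 'Artist - Title' per line.",
            }],
        )
        return resp.choices[0].message.content

    print(suggest_playlist("funky minimal basslines, Vulfpeck-adjacent"))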


This is a fallacy: the results are skewed by our ability to describe music, which via text (as opposed to tapping/singing/etc.) is very weak.


I used GPT-4 to generate ideas for what to listen to. I just say "I like X, Y and Z" and it gives me interesting, well-motivated choices. No special recommender transformer, just the plain text one.


On top of what other users pointed out, it doesn't work at all on very niche genres


AFAIK all efforts in that direction were way too costly a few years back and degraded the models considerably. Spotify, for the longest time, only trained on the equivalent of manually curated playlists by experts and users to understand similarity.


At the risk of hijacking the comments, I've been trying to use OTel recently to debug performance of a complex webpage with lots of async sibling spans, and finding it very very difficult to identify the critical path / bottlenecks.

There are no causal relationships between sibling spans. In theory "span links" solve this, but AFAICT they are not a widely used feature in SDKs or UI viewers.

(I wrote about this here https://github.com/open-telemetry/opentelemetry-specificatio...)
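
For what it's worth, recording the link itself is straightforward in the SDKs (Python shown; the span names are illustrative); it's the viewer support that's lacking:

    from opentelemetry import trace

    tracer = trace.get_tracer("page-render")

    with tracer.start_as_current_span("fetch_user") as user_span:
        user_ctx = user_span.get_span_context()

    # Declare that this sibling causally depends on fetch_user
    with tracer.start_as_current_span(
        "render_header",
        links=[trace.Link(user_ctx, attributes={"relationship": "blocked_by"})],
    ):
        pass  # work that could not start before fetch_user finished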


I don't believe this is a solved problem, and it's been around since the OpenTracing days [0]. I do not think that span links, as they are currently defined, are the best place to do this, but maybe they will be extended to support it in the future. Right now span links are mostly used to correlate spans causally _across different traces_, whereas, as you point out, there are cases where you want correlation _within a trace_.

[0]: https://github.com/opentracing/specification/issues/142


I was underwhelmed by the max size for spans before they get rejected. Our app was about an order of magnitude too complex for OTEL to handle.

Reworking our code to support spans made our stack traces harder to read and in the end we turned the whole thing off anyway. Worse than doing nothing.


As per the spec there are no formal limits on size, although in practice limits can show up at several levels:

- Your SDK's exporter

- Collector processors and general memory limitations based on deployment

- Telemetry backend (this is usually the one that hits people)

Do you know where the source of this rejection happened? My guess would be backend, since some will (surprisingly) have rather small limits on spans and span attributes.
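
To illustrate the first level: in the Python SDK those knobs live on SpanLimits (the values here are illustrative, and the collector and backend each add their own caps on top):

    from opentelemetry.sdk.trace import SpanLimits, TracerProvider

    provider = TracerProvider(
        span_limits=SpanLimits(
            max_attributes=128,         # attributes per span
            max_events=128,             # events per span
            max_links=128,              # links per span
            max_attribute_length=4096,  # truncate long attribute values
        )
    )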


Sounds like a knob you can turn, in my experience at least.


How big of an issue is this for GQL servers where all queries are known ahead of time (allowlist)? I.e., you can cache/memoize the AST parsing, so this is only a perf issue for a few minutes after the container starts up.

Or does this bite us in other ways too?
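
For the allowlist case the caching really can be that simple; a minimal sketch with graphql-core (the schema and cache policy are placeholders):

    from functools import lru_cache
    from graphql import build_schema, parse, validate  # pip install graphql-core

    schema = build_schema("type Query { hello: String }")

    @lru_cache(maxsize=None)  # fine when the set of allowed queries is bounded
    def parsed_document(query):
        document = parse(query)               # the expensive part being cached
        errors = validate(schema, document)
        if errors:
            raise errors[0]
        return document

    parsed_document("{ hello }")  # parsed and validated once
    parsed_document("{ hello }")  # cache hit: no re-parse, no re-validate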


I've been building GraphQL API gateways / routers for 5+ years now. It would be nice if trusted documents or persisted operations were the default, but the reality is that a lot of people want to open up their GraphQL APIs to the public. For that reason we've built a fast parser, validator, normalizer, and many other things to support these use cases.



Good lord.

> You MUST use flaxseed oil to season your pan.

Then

> You MUST NOT use flaxseed oil to season your pan.

Just season the fucking pan with some cooking oil and be done with it.

I swear, the internet has made some of the simplest things into this relentless pursuit of perfection.


Flaxseed oil has an atrociously low smoke point (225 °F). You really shouldn't be using it for cooking whatsoever, let alone for seasoning cast iron.

In general I agree with your sentiment though. Find something that works for you and go with it.


I don't mean for my salty comment to justify a less-than-ideal method. But on this topic, proper use of the pan will generate a good seasoning as it is used anyway.


I wonder if the price is inflated


Costs have ballooned


I’m not familiar with the work that OWASP does, other than the cheat sheet series.

The cheat sheet series is amazing - a great resource to defer to when you don’t know, or don’t want to think about, how to do <x>; you just want to look up and implement the industry standard.

It’s a great reference, and I use it a lot. <3 to the folks working on that :)


The main cheat sheet I’ve looked closely at is the XSS one, and it’s never been better than mediocre: awful framing, grossly misleading structure (seriously, almost every citation I’ve seen of it has misapplied it because of this), irrelevant and excessive content in some areas, and critical missing content in others. This was known about for over a decade; only recently has it been redone to be tolerable, though still not excellent.

Therefore my recommendation is: use it for general awareness, perhaps, but do not trust it. Because there probably isn’t anyone really working on it—you’re probably actually looking at something that was written well over 10 years ago by an amateur, and has received almost no maintenance since then.


Can you recommend a good substitute for the Cheatsheets?


I kinda thought Apollo would buy GraphCDN. Both great products!

