Doesn't this fundamentally misunderstand the Pareto principle? The 80% of effects and 20% of causes in the standard examples don't refer to portions of a sequential effort, but to slices of competing agents/producers/customers in an economic system: 20% of clients account for 80% of sales, that kind of thing.
A "hey, reminder that..." does make a difference, but that difference is invisible because it shows up exclusively as incidents that did not happen. Remember that everyone reading the comment is a human too, and some will be receptive. Good is good.
Many public figures have social media and PR teams to help deal with stuff like this. Do you really expect every individual out there to just grow a thick skin while culture continually encourages us to have more and more of a public presence? This is the kind of thinking that gives a free pass to entrenched patterns of discrimination and targeted hate speech. It's very possible that emails like this aren't just a spew of random insults but are targeted at a person's very specific triggers, and in that case a thick skin cannot stop a bullet.
I think of it more as a general observation: people who maintain open source projects that face the public need thick skin, or they won't last. Acknowledging that there are and always will be assholes like the email writer in the world is not excusing their behavior. It's great to call out shitty behavior and try to get rid of it, but it will always be around at some low level; that's the reality.
Unfortunately, HN is no longer a place where you can assume good faith when something like that is said. OP's words play into toxic narratives that need to be called out for the sake of other people reading the comments. I might be wrong about OP's intentions when giving my reply, but I am pushing back against a narrative, not against a specific person.
> I might be wrong about OP's intentions when giving my reply, but I am pushing back against a narrative, not against a specific person.
What narrative? Your comment is vague. Would you mind expanding on this accusation and how it relates to open source, especially when today's maintainers on platforms like GitHub and GitLab have an array of tools at their disposal to deal with issues?
OP replied to my comment, and I now see that it makes sense as an observation. The narrative I was referring to is something I see often in discussions surrounding abuse: "there's nothing to be done about it; [people should] just grow a thick skin." If we agree that this narrative exists, then I posit that it's a toxic thing to say to someone who's coming forward and saying that they're suffering. It's also toxic to use it to dismiss measures we can build technologically to combat abuse, like adding blocking mechanisms or improving the ones we have. Hope that clarifies things.
You might be correct but it still stands that nothing is added to the discourse by just telling people to grow a thick skin. It's a repulsive attitude and I had to call that out.
I think the "repulsive attitude" I'm referring to is shouting "Just grow a thick skin!" to someone who is saying that they're suffering from abuse. In some cases, the suffering is so acute and so deep ("bullets") that it's not necessarily actionable on the part of the person suffering; rather, it's a call for help that others may respond to.
Perhaps this is the obvious comment, but I really hope that something that employs technology like this can get off the ground and become a services company. The ideas about democratizing the AI inference hardware space and making it energy efficient really resonate with me.
Don't you think that LLM inference is already quite democratized? I'm not saying there's no room for improvement -- there is still a lot to do with speculative decoding, quantization, and other techniques. I'm saying that every 16-year-old with a decent personal computer can run the latest open-weights models, like Llama3-8B, fully locally -- models that beat almost everything we had a year ago.
The part of this ecosystem that is as non-democratized as it can be is training. It's currently impossible to train a decent model with the resources available to one person.
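A back-of-envelope estimate makes the gap concrete, using the common ~6 FLOPs-per-parameter-per-token approximation. The token count and sustained GPU throughput below are assumptions for illustration, not measured figures:

```python
# Rough training-compute estimate, using the common approximation
# that training costs ~6 FLOPs per parameter per training token.
params = 8e9    # 8B-parameter model
tokens = 15e12  # ~15T training tokens (Llama3-scale, assumed)
flops_needed = 6 * params * tokens

# Assume one consumer GPU sustaining ~3e13 FLOP/s
# (tens of TFLOPs at realistic utilization -- an assumption).
gpu_flops_per_s = 3e13

seconds = flops_needed / gpu_flops_per_s
years = seconds / (3600 * 24 * 365)
print(f"~{years:.0f} years on a single GPU")  # on the order of centuries
```

Even if the assumed throughput is off by an order of magnitude, the conclusion survives: pretraining at this scale is centuries of single-GPU time, versus seconds per token for inference.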
I once attempted to make something like ProxySwitchy for DNS[1], but I didn't work on it long enough for it to get off the ground. This article made me think about it again. Is there actually a use case for that kind of thing?
This is awesome! I really wonder if GPs will see an uptick in usage in the future; I've seen a lot of interest in them lately.
Also, thanks for including the source so it can be inspected in the browser dev tools -- I don't know if that gets done often enough with interactive visualizations like this.
Imperfect as they are, statistical significance measures like the p-value are more epistemologically sound than any arbitrary rule-of-thumb threshold on the size of the effect.
> statistical significance measures like the p-value are more epistemologically sound than any arbitrary rule-of-thumb threshold on the size of the effect.
Keep in mind that the GP isn't saying the effect doesn't exist if it's in the single digits, but that it is inconclusive and/or insignificant. Insignificant in the human sense, not the statistical sense.
A 1% increase in this behavior? Irrelevant to almost everyone.
This, of course, doesn't even get into the reproducibility crisis, much of which involved results that did rely on p-values. While I personally am happy to run significance tests, the skepticism of small effects is well founded. When someone else tries to reproduce the effects and fails, the standard defense is that the results are sensitive to the methodology used. It's much easier to invoke that defense when your effect is 1% rather than 20%.
Many common p-value calculations assume an approximately normal sampling distribution, which in turn presumes the samples are randomly chosen. It's hard to see how random sampling can apply very well here.
Suppose we did a comparison between the p-values of biased researchers desperate to publish, versus a conservative heuristic that doesn't believe small effects. Particularly for social science experiments like this, which would you bet on being able to assess repeatability better?
Statistical significance is a good measure of how sure we are that there is a correlation (or, in this case with an RCT, causation). But any layperson can look at a 3% effect and conclude, yeah, that's probably not that big of a deal. Or not, depending upon your preferences! No judgement. It's not something that requires a degree to determine, just an assessment of one's own values and the effect size in that context.
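A quick stdlib-only sketch of how statistical and human significance come apart. The proportions and sample sizes below are invented for illustration: a 1-percentage-point effect measured on a huge sample gets a far smaller p-value than a 20-point effect on a small sample, even though only the latter is likely to matter to anyone.

```python
import math

def two_prop_z_p(p1, p2, n1, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided tail probability from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A tiny 1-point effect on a huge sample (hypothetical numbers):
p_small_effect = two_prop_z_p(0.51, 0.50, 100_000, 100_000)
# A large 20-point effect on a small sample (hypothetical numbers):
p_large_effect = two_prop_z_p(0.70, 0.50, 50, 50)

print(p_small_effect, p_large_effect)  # the 1% effect is "more significant"
```

The 1% effect comes out with the far smaller p-value of the two; the p-value answers "is the effect real?", while "is the effect big enough to care about?" is exactly the values judgement described above.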
> The fragment synthesis procedure uses a powerful, but computationally expensive, algebraic rewrite-based search algorithm to identify non-trivial compositions of blocks. The performance of the algorithm depends on the complexity of the target equation rather than the size of the overall system.
Could the construction of these blocks, as well as the composition step, benefit from training an RL agent on the state space of circuit configurations, with an action space of connecting two blocks (or other operations)? Does any facet of the problem make reinforcement learning intractable or otherwise a bad idea?
Ideally you'd be able to accurately gauge your progress without having intense negative emotions attached to it. Certainly it shouldn't undercut a person's feeling of validity in the world.
I think "feeling behind" means different things to different people, depending on which experiences from their formative years they've come to associate with the thought of being behind. The ideal response to the feeling depends on what the feeling actually is.
Yep. I think that if we feel bad about being behind, it's more likely because there's something else we actually feel bad about, and "being behind" is an easy thing to attribute it to. More likely there are some childhood/family/psychiatric issues going unaddressed. At least, I've discovered this about myself.