It doesn't seem to work this way in practice, not least because most libraries will be transitive deps of the application owner.
I think creating the hooks is very close to just not doing anything here: if no one is going to use the hooks anyway, then you might as well not have them.
I think an example of where a library could sensibly log an error is a condition which is recoverable but may cause a significant slowdown, including a potential DoS issue, and which the application owner can remediate.
You don't want to throw because destroying someone's production isn't worth it. You don't want to silently continue in that state because realistically there's no way for the application owner to understand what is happening and why.
We call those warnings, and it's very common to downgrade errors to warnings by wrapping an exception and printing the trace as you would an exception.
Warnings are for where you expect someplace else to know/log if it really is an error, but it might also be normal. You might log why a file I/O operation failed: if the caller recovers somehow it isn't an error, but if they can't, they log an error, and when investigating, the warning gives you the detail you need to figure it out.
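A minimal sketch of that split, assuming SLF4J-style logging (the class and method names here are just illustrative):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    class CacheLoader {
        private static final Logger log = LoggerFactory.getLogger(CacheLoader.class);

        // Low-level helper: it can't know whether the failure matters, so it
        // logs a warning with the full stack trace and lets the caller decide.
        static byte[] readCacheFile(Path path) {
            try {
                return Files.readAllBytes(path);
            } catch (IOException e) {
                // Downgraded from error to warning: the caller may still recover.
                log.warn("Could not read cache file {}", path, e);
                return null;
            }
        }

        // Caller: only at this level do we know whether it was really an error.
        static byte[] load(Path path) {
            byte[] data = readCacheFile(path);
            if (data == null) {
                data = recomputeFromSource();   // hypothetical recovery path
                if (data == null) {
                    log.error("Cache read failed and recompute failed for {}", path);
                }
            }
            return data;
        }

        private static byte[] recomputeFromSource() { return new byte[0]; } // placeholder
    }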
Statistics are sometimes run and the most common warnings investigated (normally to shut up the noise).
Mostly, though, when you are working a known problem, warnings should be a useful filter to find where in the logs the problem might have started; then you use that timestamp to find info logs in the same area.
Warning logs are usually polluted with stuff nobody wants to fix but tries to wash their hands of with a log. Like deprecated calls, or error logs that got demoted because they didn't matter in practice.
Anything that has a measurable impact on production should be logged above that, except if your system ignores log levels in the first place, but that's another can of worms.
In such scenarios it makes sense to give clients an opportunity to react to such conditions programmatically, so just logging is the wrong choice; if there’s a callback to the client, the client can decide whether to log it and how.
It's a nice idea but I've literally never seen it done, so I would be interested if you have examples of major libraries that do this. Abstractly it doesn't really seem to work to me in place of simple logs.
One test case here is that your library has existed for a decade and was fast, but Java removed a method that let you make it fast, though you can still run slowly without that API. The Java runtime has a flag that the end user can enable to turn it back on as a stopgap. How do you expect this to work in your model? You expect to have an onUnnecessarilySlow() callback already set up that all of your users have hooked up, which is never invoked for a decade, and then once it actually happens you start calling it and expect it to do something at all sane in those systems?
The second example is all of the scenarios where you're a library used transitively by many users; it makes any callback strategy immediately not work if the person who needs to know about the situation and could take action is the application owner rather than the people writing the library code which called you. It would require every library to offer these same callbacks and transitively propagate things, which would only work if it were such a firm idiomatic pattern in some language ecosystem, and I don't believe that it is in any language ecosystem.
>but Java removed a method that let you make it fast, but you can still run slow without that API
I’d like to see an example of that, because this is an extremely hypothetical scenario. I don’t think any library is so advanced as to anticipate such scenarios and write something to log. And of course Java specifically has a longer cycle of deprecation and removal. :)
As for your second example, let’s say library A is smart and can detect certain issues. Library B, depending on it, is at a higher abstraction level, so it has enough business context to react to them. I don’t think it’s necessary to propagate the problem and leak implementation details in this scenario.
Protobuf is the example I had in mind. It uses sun.misc.Unsafe, which is being removed in upcoming Java releases, but it has a slow fallback path. It logs a warning at runtime if it can tell it's only using the fallback path while the fast path would still be available if the application owner set a flag to turn it back on.
Java Protobuf also now logs a warning if it can tell you are using gencode old enough to be covered by a DoS CVE. They actually did a release that broke compatibility with the CVE-covered gencode, but restored it and print a warning in a newer release.
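For flavor, a rough sketch of that general pattern; this is not Protobuf's actual code, and the property/flag names are made up:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    final class FastPath {
        private static final Logger log = LoggerFactory.getLogger(FastPath.class);

        // Hypothetical opt-in property; any real library's flag name differs.
        private static final boolean OPTED_IN =
            Boolean.getBoolean("example.enableUnsafeFastPath");

        private static final boolean UNSAFE_PRESENT = detectUnsafe();

        private static boolean detectUnsafe() {
            try {
                Class.forName("sun.misc.Unsafe");
                return true;
            } catch (ClassNotFoundException e) {
                return false;
            }
        }

        // Called once at startup: warn only when the fast path exists but the
        // application owner hasn't opted in, so they know there is a remedy.
        static void warnIfOnSlowFallback() {
            if (UNSAFE_PRESENT && !OPTED_IN) {
                log.warn("Using the slow fallback path; set "
                    + "-Dexample.enableUnsafeFastPath=true to re-enable the fast path.");
            }
        }
    }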
There's a lot here, to be honest these things always come back to investment cost and ROI compared to everything else that could be worked on.
Java 8 is still really popular, probably the most popular single version. It's not just servers in this context, but also Android, where Java 8 is the highest safe target; it's not clear what decade we'll be in when VarHandle will be safe to use there at all.
VarHandle was Java 9 but MemorySegment was Java 17. And the rest of FFM is only in 25 which is fully bleeding edge.
Protobuf may realistically try to move off of sun.misc.Unsafe without the performance regressions in a way that avoids adopting MemorySegment, to sidestep the versioning problem, but that takes significant and careful engineering time.
That said, it's always possible to have a waterfall of preferred implementations based on what's supported; it's just always an implementation/verification cost.
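A sketch of that waterfall idea (the names here are hypothetical, not from any real library): probe from the fastest implementation to the most portable one at class-load time and keep the first that works.

    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.VarHandle;
    import java.util.Arrays;

    interface ArrayFiller {
        void fill(long[] array, long value);
    }

    final class Fillers {
        // Chosen once, when the class loads.
        static final ArrayFiller BEST = pick();

        private static ArrayFiller pick() {
            try {
                // Preferred: VarHandle-based path (needs Java 9+).
                VarHandle vh = MethodHandles.arrayElementVarHandle(long[].class);
                return (array, value) -> {
                    for (int i = 0; i < array.length; i++) vh.set(array, i, value);
                };
            } catch (Throwable t) {
                // Not supported here; fall through to the portable version.
            }
            // Fallback: works everywhere, may be slower.
            return (array, value) -> Arrays.fill(array, value);
        }
    }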
I’ve written code that followed this model, but it almost always just maps to logging anyway, and the rest of the time it’s narrow options presented in the callback. e.g. Retry vs wait vs abort.
It’s very rarely realistic that a client would code up meaningful paths for every possible failure mode in a library. These callbacks are usually reserved for expected conditions.
Yes, that’s the point. You log it until you encounter it for the first time, then you know more and can do something meaningful. E.g. let’s say you build an API client and library offers callback for HTTP 429. You don’t expect it to happen, so just log the errors in a generic handler in client code, but then after some business logic change you hit 429 for the first time. If library offers you control over what is going to happen next, you may decide how exactly you will retry and what happens to your state in between the attempts. If library just logs and starts retry cycle, you may get a performance hit that will be harder to fix.
Defining a callback for every situation where a library might encounter an unexpected condition and pointing them all at the logs seems like a massive waste of time.
I would much prefer a library have sane defaults, reasonable logging, and a way for me to plug in callbacks where needed. Writing On429 and a hundred other functions that just point to Logger.Log is not a good use of time.
This sub-thread in my understanding is about a special case (a non-error mode that client may want to avoid, in which case explicit callback makes sense), not about all possible unexpected errors. I’m not suggesting hooks as the best approach. And of course “on429” is the last thing I would think about when designing this. There are better ways.
If the statement is just that sometimes it’s appropriate to have callbacks, absolutely. A library that only logs in places where it really needs a callback is poorly designed.
I still don’t want to have to provide a 429 callback just to log, though. The library should log by default if the callback isn’t registered.
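A sketch of that default, with hypothetical names (this isn’t any particular library’s API): the library logs and follows the server’s hint unless the client registered a handler.

    import java.time.Duration;
    import java.util.function.IntFunction;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    class ApiClient {
        private static final Logger log = LoggerFactory.getLogger(ApiClient.class);

        // Optional hook: given the server's Retry-After seconds, decide how
        // long to back off. Null means "not registered", i.e. use the default.
        private IntFunction<Duration> onRateLimited;

        void setOnRateLimited(IntFunction<Duration> handler) {
            this.onRateLimited = handler;
        }

        Duration backoffFor429(int retryAfterSeconds) {
            if (onRateLimited != null) {
                // The client decides: back off differently, shed load, abort...
                return onRateLimited.apply(retryAfterSeconds);
            }
            // Sane default: log it and follow the server's hint.
            log.warn("Rate limited (HTTP 429); retrying after {}s", retryAfterSeconds);
            return Duration.ofSeconds(retryAfterSeconds);
        }
    }

Until the day you care, you never call setOnRateLimited and the warning in the logs is all you get; once you do care, you register a handler and take over the decision.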
It's very easy to accidentally get misleading benchmarking results in 100 different ways, I wouldn't assume they did no benchmarking when they did the duplication.
I think what you listed matches what he suggested; they just have words instead of a letter for the variants. Which is actually better in this example because "Slim", "Pro" and "Digital" mean what you would expect them to mean here, whereas the "a" in "Pixel 9a" is somewhat obtuse.
Dell is messing this up badly even though they almost got the strategy: "Dell Pro 14 Premium" is a real product and "Dell Pro Max 14 Plus" is also a real product; there's no way anyone knows what that means.
I'm not sure I follow the argument. If literally every individual site had an uncorrelated 99% uptime, that's still less available than a centralized 99.9% uptime. The "entire Internet" is much less available in the former setup.
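Rough numbers to make that concrete:

    99%   uptime: (1 - 0.99)  * 8760 h/year ≈ 88 hours of downtime per site per year
    99.9% uptime: (1 - 0.999) * 8760 h/year ≈  9 hours of downtime per year

So each independently failing site is down roughly ten times as much; the outages are just smaller and spread out.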
It's like saying that Chipotle having an X% chance of tainted food is worse than local burrito places having a 2X% chance of tainted food. It's true through the lens that each individual event affects more people, but if you removed that Chipotle and replaced it with all local places, the total amount of illness would still be strictly higher; it's just tons of small events that are harder to write news articles about.
No, it's like saying that if one single point of failure in a global food supply chain fails, nobody's going to eat today. That's in contrast to a supplier failing to deliver to a local food truck today, whose customers will just go to the restaurant next door.
Ah ok, it is true that if there are a lot of fungible offerings, then worse but uncorrelated uptime can be more robust.
I think the question then is how much of the Internet has fungible alternatives such that uncorrelated downtime can meaningfully lessen the impact. If you have a "to buy" shopping list, the existence of alternative shopping list products doesn't help you: when the one you use is down it's just down, and the substitutes cannot substitute on short notice. Obviously for some things there are clear substitutes, but I actually think "has fungible alternatives" is mostly correlated with "being down for 30 minutes doesn't matter"; it seems that the things where you want the one specific site are the ones where availability matters more.
The restaurant-next-door analogy, representing fungibility, isn't quite right. If BofA is closed and you want to do something in person with them, you can't go to an unrelated bank. If Spotify goes down for an hour, you're not likely to become a YT Music subscriber as a stopgap even though they're somewhat fungible. You'll simply wait, and the question is: can I shuffle my schedule instead of elongating it?
A better analogy is that if the restaurant you'll be going to is unexpectedly closed for a little while, you would do an after-dinner errand before dinner instead and then visit the restaurant a bit later. If the problem affects both businesses (like a utility power outage) you're stuck, but you can simply rearrange your schedule if problems are local and uncorrelated.
If a utility power outage is put on the table, then the analogy is almost everyone relying solely on the same grid, in contrast with being wired to a large set of independent providers, or even using their own local solar panels or some other autonomous energy source.
I'm not sure what evidence you would expect to see if it was self-selection because of an in-group mentality versus explicit hostility to intentionally keep some out.
By comparison, is there some affirmative evidence that the reason there are so few liberals in the FBI is that they self-selected out, as opposed to the FBI being perceived as a conservative institution causing them to self-select out?
> I'm not sure what evidence you would expect to see if it was self-selection because of an in-group mentality versus explicit hostility to intentionally keep some out.
> is there some affirmative evidence that the reason there are so few liberals in the FBI is that they self-selected out, as opposed to the FBI being perceived as a conservative institution causing them to self-select out?
I’m not sure I understand your question. I would presume if people are self-selecting out of any organization it’s because they believe it isn’t a suitable place for them, and if this division is along party lines then politics is likely to be the cause of that belief. In either case, if the FBI skews conservative, I would guess that this was due to internal gatekeeping, not self-selection, and I think the history of the organization supports that assertion.
I'm not sure what you're pointing at in the links: I don't see any "explicit roadmap" to exclude mainstream conservative thinkers from professorships documented there. The main examples seem to be Creationists and Alex Jones and similar inflammatory content creators having paid speaker invitations rescinded due to student pressure, which is a radically different topic from what I thought the thread was about.
> In either case, if the FBI skews conservative, I would guess that this was due to internal gatekeeping, not self-selection, and I think the history of the organization supports that assertion.
The FBI very dramatically skews conservative compared to the American base, and I think it is a conspiracy theory level claim that the explanation is that the FBI is deliberately keeping out mainstream-left-leaning people from being agents.
It's always very attractive to believe that there's some shadowy cabal explicitly and deliberately controlling the strings to the outcomes that you don't like, when in reality it essentially is never the case. The reality is always far messier, and nearly all bad things stem from complex emergent systemic outcomes with no X-Files Smoking Man at the center of it all.
Any claim of an affirmative explicit decision for the bad outcome requires exceptional justification, because it's just such an appealing thing to want to believe and it's almost never true.
> The main examples seem to be Creationists and Alex Jones and similar inflammatory content creators having paid speaker invitations rescinded due to student pressure, which is a radically different topic from what I thought the thread was about.
I’m pointing you towards the trends; you aren’t going to find documentary evidence stating: “We didn’t hire this guy because of his voting history” because a) it’s illegal and b) it’s very unlikely these biases are coordinated between institutions. The Long March article is instructive because the departments where the bias is strongest are all frequent washout degrees for critical theorists.
> It's always very attractive to believe that there's some shadowy cabal explicitly and deliberately controlling the strings to the outcomes that you don't like, when in reality it essentially is never the case.
I never made any assertion as to the existence of a “shadowy cabal” nor any other organized concert. This is an inane attempt to make my claims look ridiculous because you can’t refute them on their own merits. To wit:
> I think it is a conspiracy theory level claim that the explanation is that the FBI is deliberately keeping out mainstream-left-leaning people from being agents.
If you found out 100% of agents were not left-leaning, would you still consider this a fanciful, tin-foil hat style conspiracy? Because that’s what we are talking about here. Note that you have not accurately represented my claim: There is no FBI-style organization in my model that is coordinating the exclusion; it’s intentional, but happening at a local level, not as part of a centralized effort.
I may have misinterpreted "explicit roadmap" as implying a directing/organizing entity coordinating the exclusion of right-leaning people from professorships, versus your reply here clarifying that you mean something different.
> If you found out 100% of agents were not left-leaning, would you still consider this a fanciful, tin-foil hat style conspiracy
Yep, I would. I think we are in a world where FBI agents skew as dramatically to the right as university professors skew left (which is to say: very). I don't believe either one is the deliberate exclusion of interested individuals from those positions based on their voting records; more likely both are self-selection and correlation-with-political-views effects, even to such an extreme degree.
It's the same effect as theater having way more queer people in it than football, which is also not due to any conspiracy.
I'm surprised by this comment; after the drama last week and after seeing this I fully have to side against Rebble.
The nature of driving a healthy open-source-centered ecosystem is that you don't control it with an iron fist: you make good contributions, and users _and_ companies are able to use them in all new ways which comply with the licensing terms. And it seems that RePebble is going way beyond the license requirements, bending over backwards to honor Rebble here when they aren't actually required to.
I just can't imagine what people want from RePebble if not this: they are being maximally open, making it so all of everything would be able to continue if they went out of business tomorrow, while also actively enabling people to continue using Rebble's store and paid offerings. Should they be forcing users to use Rebble's offerings (instead of making things even more open) as a reward for doing a good job bridging the dark age?
My impression is that there is a lot more going on than just the facts provided by both sides. Core Devices managed to get Katie Berry to step away from the project[1], and that's extremely significant to me. Her tireless dedication to keeping Pebble alive (and getting it open sourced) is how any of this is possible. For her to just up and leave now tells me that Eric and Core are not being as magnanimous and friendly to the community as these blog posts and actions might suggest.
Both of those comments seem to just boil down to "Core probably could be more proactive about comms", which hardly seems like a particularly egregious sin.
"interactions with Core have gone so poorly that they were adversely impacting my mental health"
That seems a little more serious than "could be more proactive about comms", especially when this is one of the key people responsible for a lot of the original Pebble tech, the Rebble tech, and working within Google to get the Pebble OS open sourced.
I think unfortunately this is a normal thing that happens: passionate people get very attached to something and have trouble dealing with disputes, even when everyone is relatively well-intentioned. I've seen it in the workplace a dozen times.
They also backed down from their ludicrous position that they are acting as protectors of other people's watchfaces being downloaded in bulk by a particular company they don't like, while being totally fine with the watchfaces being publicly available for general use. It clearly reads as them trying to cling to control of the one thing they haven't open sourced.
Rebble contributors did have a legitimate gripe, which is that they were led to develop some additional software under the idea that there would be an agreement at the end of the day. But the Rebble Foundation's response to this was totally immature and irrational.
I agree with what Eric said in his follow-up, which is that it is quite concerning to engage in a partnership with an organization which reacts like this as part of a negotiation process. God knows I wouldn't, and it doesn't surprise me that an alternative solution was found.
Well said, and exactly my thoughts on it as well. Eric has done more than he really had to, and it is unclear to me what Rebble really wants / is positioning for.
It's a completely absurd claim that "most" people think the only reason the government isn't sending million dollar checks is hoarding.
The government spending too much and the debt being large is an extremely popular talking point; at least half of everyone would say "the federal government should spend less than it does", never mind endorsing million-dollar checks for every person.
I don't know if I have ever met someone in the ~40% of Americans who don't pay income tax, complaining that their government benefits are going too far.
The people who complain most about government spending are the people who pay a lot in taxes while only directly getting a fraction of it back in useful services. Taxes for roads and police? No Problem. Taxes for rent subsidies? Icy.
I know a number of people who pay very little in taxes and enjoy the benefit of government services; many of them are vocally against government spending despite the fact that they would be personally harmed if it stopped.
If what you are saying were true, then Democrats would smash every election; "government spending is too high" is demonstrably a fairly popular opinion among both low- and high-income voters, based on voting.
There's potentially an argument for a ponzi scheme for one-ish more generation after which robots can do elder care and it's not necessary anymore.
Japan already bet on it and the robots haven't materialized, so maybe it's a bad strategy or maybe they bet too soon or maybe it will turn out they did it at the right time.