Hacker News — thaumasiotes's comments

Skunks also aren't aggressive...

I thought rabies had to be considered for any interaction with a wild animal. For one showing atypical behavior, the indication is that much stronger.


> the last project I encountered in a professional capacity in Python where optional type hinting was used but wasn't always accurate which was a special sort of hell.

But that's the entire purpose of optional type hinting. If the hints had to be accurate, you'd have mandatory typing, not optional hinting.


No, optional type hinting means there's sometimes not a hint. Having a hint and then passing some type that doesn't match it is wrong, and it's hell.

Python's type hinting is horrible.

It's not checked, it's not required, and the bolted-on syntax is ugly.
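To make the "not checked" point concrete, here's a minimal sketch (hypothetical function name): the interpreter ignores annotations entirely at runtime.

```python
def double(x: int) -> int:
    """Annotated to take and return an int."""
    return x * 2

# The hint is ignored at runtime: passing a str neither raises
# nor warns, because * means repetition for strings.
print(double("ab"))  # prints "abab"
```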

Even if the types were checked, they'd still fail at runtime and some code paths wouldn't get exercised.

We need a family of "near-scripting" languages like Go that check everything AOT, but that can be interpreted.


> It's not checked, it's not required,

It is both of those if you use a typechecker, which is the whole reason it exists (in fact, the first popular typechecker predates the annotation syntax and used type comments; type annotations were developed specifically so that type checking could be accommodated in the language rather than becoming its own separate language.)


That's the problem! The code should not run if the types are wrong. Having an external tool is an antipattern.

Having to rely on process for validity is a recipe for trouble. We already know how the greater Python community has handled requirements.txt and dependencies. I've spent days fixing this garbage.

It's a tooling problem. Good tools make good habits part of the automation and stop you from having to think about it.


You are talking about misleading type hints, not optional ones. Optional means they don't have to be added. Wrong type hints are so much worse than missing ones.

I think the purpose of optional type hinting is that you don't have to add it everywhere all at once, not that it doesn't have to be accurate. I guess you could split hairs and say "hint" doesn't imply perfect accuracy, but... adding a language feature that can lie really seems to have a lot of downsides vs upsides; whereas at least optional has obvious migration benefits.

You could have optional type hints where the runtime would still yell at you - maybe even from just an optional flag - if you returned a string out of a function that should return an int.

Because as-is, once you have one of those functions that says it returns an int but returns a string instead, in a big codebase, your editor tooling gets really confused, and it's way worse to work through than if the hints weren't there at all.

(And there are tools in Python that you can use to inspect and verify the accuracy. But those tools are also... optional... And if you start to apply them to a codebase where they weren't used, it can be very time-consuming to fix everything...)
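A rough sketch of what that kind of runtime enforcement could look like, using a hypothetical `check_return` decorator built on `typing.get_type_hints`. This is an assumption-laden illustration, not a real library: it only handles simple, non-generic return types like `int` or `str`.

```python
import functools
import typing

def check_return(func):
    """Hypothetical sketch: raise if a return value doesn't match
    the function's annotated return type. Only handles plain
    classes (int, str, ...), not generics like list[int]."""
    expected = typing.get_type_hints(func).get("return")

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        if isinstance(expected, type) and not isinstance(result, expected):
            raise TypeError(
                f"{func.__name__} is annotated to return "
                f"{expected.__name__}, got {type(result).__name__}"
            )
        return result

    return wrapper

@check_return
def bad() -> int:
    return "oops"  # mismatch: caught at call time, not at definition time
```

Calling bad() then raises TypeError instead of silently returning a str; a real implementation would want a flag to disable the checks in production.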


> And there are tools in Python that you can use to inspect and verify the accuracy. But those tools are also... optional... And if you start to apply them to a codebase where they weren't used, it can be very time-consuming to fix everything...

How is that "bad" solution different from this "good" one?

> You could have optional type hints where the runtime would still yell at you - maybe even from just an optional flag - if you returned a string out of a function that should return an int.


If it's built in to the runtime you get a lot of potential benefits:

- you don't need to install additional packages

- you could have (because you don't want to hurt prod perf by checking all the time) dev-mode with warnings by default on execution and a prod-mode where they're ignored

- you can then have people in the dev environment catching things as they write/run/test their code, vs. only whenever they run the third-party tool (which it seems a lot of people don't even set up to run on every commit)

Let's flip the question around! What do you think are the benefits of making it easy to add misleading, incorrect type hints?


Going by wikipedia, the incubation period can be up to three months. That isn't a particularly significant span of time if we're measuring how likely someone is to suffer an unexpected death. It's long enough that the possibility exists, but that's about all you can say.

Assuming an even distribution of deaths: 3 months / 80 years ≈ 0.3%. But very few people are incubating rabies at any given time.

But the question isn't "what are the odds someone who dies in this period has rabies"; it's "what are the odds someone who died after being infected with rabies died before they started showing symptoms". So the rarity of people incubating rabies is irrelevant.

Further, rabies incubation is highly variable - symptoms may not appear for years.


nit: surely the bounds for an acceptable organ are < 80 years old and > 15? Maybe the window is closer to 50 years?

They harvested organs from somebody who had died of rabies.

> And, of course, a hypothesis is capable of being proven.

No, that's just you not understanding the definition of 'postulate'.


> while Gilbert said he enjoyed Vampire Survivors, he added that the game’s style was “a little too much ‘ADHD’ for me. I look at those games and it’s like, wow, I feel like I’m playing a slot machine at some level. The flashing and upgrades and this and that… it’s a little too much.”

Vampire Survivors was designed by a guy whose job was coding slot machines.


Dang, that explains so much about it.

It's weird. The "records" in question appear to be those kept by the Smithsonian:

> The Smithsonian Institution’s Global Volcanism Program has no record of any eruptions of Hayli Gubbi during the Holocene, the current geological epoch, which began at the end of the last ice age, about 11,700 years ago.

But I'm fairly confident that the Smithsonian's records don't go back more than 700 years.


Since I had to think about it:

    unsigned add(unsigned x, unsigned y) {
        unsigned a, b;
        do {
            a = x & y;   /* every position where addition will generate a carry */
            b = x ^ y;   /* the addition, with no carries */
            x = a << 1;  /* the carries */
            y = b;
        /* if there were any carries, repeat the loop */
        } while (a);
        return b;
    }

It's easy to show that this algorithm is correct in the sense that, when b is returned, it must be equal to x+y. x+y summing to a constant is a loop invariant, and at termination x is 0 and y is b.

It's a little more difficult to see that the loop will necessarily terminate.

New a values come from a bitwise & of x and y. New x values come from a left shift of a. This means that, if x ends in some number of zeroes, the next value of a will also end in at least that many zeroes, and the next value of x will end in an additional zero (because of the left shift). Eventually a will end in as many zeroes as there are bits in a, and the loop will terminate.


In C, I'm pretty confident the loop is defined by the standard to terminate.

Also I did take the excuse to plug it (the optimized llvm ir) into Alive:

https://alive2.llvm.org/ce/#g:!((g:!((g:!((h:codeEditor,i:(f...


Alive2 does not handle loops; I don't know exactly what it does by default, but changing the `shl i32 %and, 1` to `shl i32 %and, 2` has it still report the transformation as valid. You can add `--src-unroll=2` for it to check up to two loop iterations, which does catch such an error (and does still report the original as valid), but of course that's quite limited. (Maybe the default is like `--src-unroll=1`?)


Oh wow nice catch - I was not at all familiar with the limitations. I would've hoped for a warning there, but I suppose it is a research project.

I was able to get it working with unrolling and narrower integers:

https://alive2.llvm.org/ce/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF...


> In C, I'm pretty confident the loop is defined by the standard to terminate.

Huh? What's that supposed to mean?


That it is undefined behavior for a loop whose controlling expression is not a constant and whose body has no side effects to fail to terminate.

For example, you can use this to make the compiler "prove" the Collatz conjecture:

https://gcc.godbolt.org/#g:!((g:!((g:!((h:codeEditor,i:(file...


Compare the observation that in superhero comics, wealthy villains can be self-made, while wealthy heroes invariably get that way through inheritance.

The only acceptable leader is someone who was born so rich that he leads as a hobby.


Alternately: Self-made wealth is so frequently derived from evil that those who are born rich and come good owe penance for the sins of their forebears.


That strategy is pretty hit or miss though. Sure we've got Kennedy or FDR, but we've also got Bush and Trump.


I'm not saying that's a useful thing to believe. I'm saying that's what people do believe.


But the first example sigmoid10 gave of a company that can't do software was Microsoft.


Yeah I'm not convinced Microsoft can do software anymore. I think they're a shambling mess of a zombie software company with enough market entropy to keep going for a long time.


The prosecution presents windows 11 as evidence that Microsoft can’t do software. Actually that’s it, that’s the entirety of the case.

The prosecution rests.


Due to a clerical error, the frontend updates to GitHub were not part of discovery, so they're not allowed as evidence. Still, though.


Yeah the fact they had to resort to forking Chrome because they couldn’t engineer a browser folks wanted to use is pretty telling.


They did engineer a good browser: the original Edge, with the Chakra JavaScript engine. It was faster than Google Chrome and had some unique features, including a world-best, butter-smooth, customizable EPUB reader. I loved it for reading; it beat commercial EPUB readers. Then Nadella took over and said Microsoft was getting rid of it, Edge would move to Chromium, and Microsoft would also get rid of Windows Phone. Modern Microsoft would be cloud/AI and ads. That was so depressing.

I don't think that tells us anything.

Maintaining a web browser requires about 1000 full-time developers (about the size of the Chrome team at Google), i.e., about $400 million a year.

Why would Microsoft incur that cost when Chromium is available under a license that allows Microsoft to do whatever it wants with it?


You could say the same thing about all Microsoft products then. How many full-time developers does it take to support Windows 11 when Linux is available, SQL Server when Postgres is available, Office when LibreOffice exists?

And so on all under licenses that allows Microsoft do whatever it wants with?

They should be embarrassed into doing better, not spin it as a "wise business move" (a.k.a. transferring that money into executive bonuses).


Microsoft gets a lot of its revenue from the sale of licenses and subscriptions for Windows and Office. An unreliable source that gives fast answers to questions tells me that the segments responsible for those two products have revenues of about $13 billion and $20 billion per quarter, respectively.

In contrast, basically no one derives any significant revenue from the sale of licenses or subscriptions for web browsers. As long as Microsoft can modify Chromium to have Microsoft's branding, to nag the user into using Microsoft Copilot and to direct search queries to Bing instead of Google Search, why should Microsoft care about web browsers?

It gets worse. Any browser Microsoft offers needs to work well on almost any web site. Those web sites (of which there are hundreds of thousands) are in turn maintained by developers (hi, web devs!) who tend to be eager to embrace any new technology Google puts into Chrome, with the result that Microsoft must respond by putting the same technological capabilities into its own web browser.

Note that the same does not hold for Windows: there is no competitor to Microsoft offering a competitor to Windows that is constantly inducing the maintainers of Windows applications to embrace new technologies, requiring Microsoft to incur the expense of engineering Windows to keep up. This suggests to me that maintaining Windows is actually significantly cheaper than maintaining an independent mainstream browser would be. An independent mainstream browser is probably the most expensive category of software to create and maintain, excepting only foundational AI models.

"Independent" here means "not a fork of Chromium or Firefox". "Mainstream" means "capable of correctly rendering the vast majority of web sites a typical person might want to visit".


They did incur that cost… for decades. They were in a position where their customers were literally forced to use their product and they still couldn’t create something people wanted to use.

Potentially these last two points are related.


You don't need a Google-sized team to work on a browser. No other browser engine has a team that large.
