I know that attributing to western countries the responsibility for any bad thing happening in this world is a common reflex, but we are nearly 30 years after the handover and 40 years after the negotiations, so surely China bears some, if not pretty much all, of the responsibility here.
And it's not like the UK had much of a choice in the first place. China threatened to invade, and there is very little the UK could have done to prevent a full takeover.
Worth also remembering that "one country, two systems" came with an expiration date that is rapidly approaching anyway.
> I know that attributing to western countries the responsibility for any bad thing happening in this world is a common reflex
You can’t gloat that the sun never sets on your empire and then absolve yourself of responsibility for events that you had a heavy hand in influencing. Regardless, if you think the article is wrong, your point would be better served by providing examples of where it’s wrong and stating why.
nitpick:
I would argue that it is more accurate to attribute cause and effect to certain groups of citizens within the country rather than the entire country.
The Holocaust, for example, is, in my opinion, more accurately described as being the fault of the Nazi party of Germany, which is a subset of the German population that was politically active in the early-mid 20th century, rather than just being "Germany's" fault.
The war crimes committed by the Empire of Japan during WWII are similarly the fault of a subset of the politically active population during that time, not "Japan".
I believe this method of attribution has the added advantage of noting that certain citizens, or groups of citizens, can make mistakes, and of using them as an example of what NOT to do for other citizens to learn from, rather than tarring everyone with the same brush, which I think can have negative psychological consequences. People should be held accountable for their actions rather than stigmatized for belonging to a specific group through no fault of their own (it's not your fault you were born with citizenship in country X, but it is your fault if you start killing people).
> I know that attributing to western countries the responsibility for any bad thing happening in this world is a common reflex
I don't think I'm being superficial here. There are a few distinct events during the 20th century which can be attributed to the British. The handover of Hong Kong, the Suez Crisis and the Balfour Declaration stand out the most.
> And it's not like the UK had much of a choice in the first place. China threatened to invade and there is very little the UK could have done to prevent a full control.
The leased territories are Chinese territory. Full stop. Hong Kong island and the ceded land could not survive alone. All of the water processing happens in the New Territories. It would have been impossible to either break up HK or defend it.
China has not rolled back any reforms that happened before negotiations began [0]. They did roll back the last-ditch efforts of Chris Patten [1], because at that point it was seen as a malicious attempt to undermine the handover.
The mechanisms for China to take control were largely left in place by the British, so they bear some responsibility, but it is the PRC asserting this control, and there's an argument to be made that most of HK supports the PRC and it's their right to do what they wish with their own territory.
> Worth also remembering that "one country, two systems" came with an expiration date that is rapidly approaching anyway.
It'll be interesting to see what is kept. China's experimenting already in Hainan. They could structure Hong Kong in a similar fashion.
[0] The PRC did introduce proportional representation (PR) with the idea that it would reduce the risk of majorities forming, but the system is arguably more democratic than FPTP.
The Chinese absolutely bear responsibility for how they've governed the last 30 years, just as the British bear responsibility for how they governed the prior 150.
The fact that British HK liberalized a little at the very last second before handover is better than nothing, and the National Security Law is definitely bad, but right now the scoreboard is 7/150 years of free speech under the UK, compared to 23/28 years of free speech under PRC. It'll take another 100 years for the PRC to have a worse record than the UK.
I think it’s somewhat disingenuous to ignore the trend direction.
The Netherlands has a longer history of monarchy under their current government (present monarchy founded 1813) than North Korea (current government established 1948). Does that mean you’d rather live in North Korea than the Netherlands?
The plain and obvious fact remains that Hong Kongers would have more political liberties today if the UK retained control of the territory, regardless of the complete colonial insanity of the original arrangement.
Can you name one presently existing British overseas territory that has less of a right to criticize the government than Hong Kong? There are still a bunch of them to choose from.
Wasn't meaning to ignore the trend, the PRC bears full responsibility for their actions. Just saying that complaints from the British in particular are a little rich.
Also they appear to be arresting more people for speech in total and per-capita than HK:
That is not even remotely comparable. There is a huge difference between arrest and conviction, between whether people are given a proper hearing or not, between legal process and things like disappearances, and between laws that punish criticism of the authorities and hate speech laws.
I do not like the UK's hate speech laws at all, but the fact is that I can criticise them, from the UK, without fear, and I can criticise the government. Could I do that in China? Of course not.
I'm not disagreeing too hard with you, we probably feel similarly about both cases. I just don't like how western media takes a "we're the good guys and they're a dystopia" approach to reporting it.
They've charged like 250 people in 5 years under this law [1]. I don't like any one of those cases, and I'm against it too, but it gets characterized like nobody ever catches a bullshit charge in the West.
Or surely the PRC should get all the praise for defusing the geopolitical traps the UK likes to leave behind whenever it loses a colony. Patten threw a curve ball right before the handover, liberalizing HK a little at the last minute to hold onto influence, something the UK never did during its own rule. Of course it was a geopolitical trap to make the PRC look bad if it ever decided to take away from HK what the UK never provided, but the PRC managed to do it anyway, and most of the world, i.e. the global south, got an example that it is possible to excise legacy colonial tumors from declining empires who choose not to pass gracefully.
I have no idea how you came to the conclusion that liberalization (giving ever so slightly more freedom) would increase foreign UK influence post handover.
Yes, it shows. The 11th-hour liberalization was the spiked punch that subverted/prevented the PRC from doing useful reforms, like (patriotic) education (MNE / moral and national education in the 2010s) and getting rid of colonial British textbooks that kool-aided generations of minds and tethered them to muh anglo liberal values, libtards who would later collude with foreign powers to sanction their own gov. Instead the PRC had to waste 20 years unwinding the shitshow because they didn't want to rock the boat too hard during a period of heightened end-of-history wank, i.e. didn't want to risk unrolling last-minute landmine reforms, which could lead to sanctions / capital flight.
Then there's liberalization bullshit like the Court of Final Appeal (staffed with overseas anglo "judges", read compradors, friendly to UK values and interests) that replaced the UK Privy Council to enshrine liberal, UK-aligned rulings vs Beijing. Under colonial UK rule the Privy Council, decision makers in London, got to overrule HK local moves that countered UK interests. Or the Legco reforms that enabled direct elections / local veto that didn't exist prior, which stalled Art 23 / NSL implementation for 20 years, something Beijing would otherwise have been able to ram through using the old colonial system where the governor, or a Beijing equivalent, got to rubber stamp whatever the fuck they wanted... like the NSL. Or retooling the Societies Ordinance, Public Order Ordinance and Bill of Rights Ordinance, previously used by the UK to crush dissenting groups with absolute power/prejudice, into liberal instruments that now proliferate with greater judicial power over the PRC-appointed executive, vs pre-90s when these were all tools UK executives used to crush dissent. Liberalization took away all the fancy authoritarian killswitches the UK used to rule HK as a colony with an iron fist.
Post-NSL, the PRC gave all the compromised non-Chinese judges the boot and gets to designate PRC-aligned judges that rule on PRC interests. Nature is healing etc.
A lot of those CRT screens had a pretty low refresh frequency; you were basically sitting in front of a giant stroboscope. That was particularly bad for computer screens, where you were sitting right in front of them. I think they pretty much all displayed at 30Hz. I can imagine how a gigantic screen could get pretty uncomfortable.
I recall a lot of people playing Counter-Strike at 640x480 to get 100+Hz refresh rates. The lower the resolution, the faster you could refresh. I don't recall the absolute limit, but it would give the latest LCD gaming panels a serious run for their money.
If you pay extra for that. Meanwhile _any_ CRT could trade off resolution for refresh rate across a fairly wide range. In fact the standard resolutions for monitors were all just individual points in a larger space of possibilities. They could change aspect ratio as well. This can be quite extreme. Consider the 8088 MPH demo from a few years back (<https://trixter.oldskool.org/2015/04/07/8088-mph-we-break-al...>). See the part near the end with the pictures of 6 of the authors? That video mode only had 100 lines, but scrunched up to make a higher resolution.
Well, we are discussing a CRT TV that was $40k new a lifetime ago, so perhaps the fact that it costs $599 to get a 480Hz OLED today is not a consideration. To the point, though: it is a fallacy to believe that CRTs could arbitrarily shape their resolution. While the input signal could cover a wide range of possible resolutions and refresh rates depending on the bandwidth supported, the existence of aperture grilles or shadow masks imposed a fixed physical grid that limited the maximum possible resolution to much lower values than the typical 4K panels that we have today. The "pixels" didn't become larger at lower resolutions: they just covered more dots on the mask. We can get much better results today with scaling than we ever could on CRTs, as awesome a technology as they were 40 years ago.
Sure, but 99% of that cost was paying for the absurd physical dimensions of that particular television.
> The "pixels" didn't become larger on lower resolutions…
Strictly speaking, the CRT only had discrete lines not pixels. Within a line the color and brightness could change as rapidly or slowly as the signal source desired. It was in fact an analog signal rather than a digital one. This is why pixels in many display modes used by CRTs were rectangular rather than square.
> We can get much better results today with scaling than we ever could on CRTs…
I say it’s the other way around! No ordinary flat-panel display can emulate the rectangular pixels of the most common video modes used on CRTs because they are built with square pixels. You would have to have a display built with just the right size and shape of pixel to do that, and then it wouldn’t be any good for displaying modern video formats.
Seems irrelevant to bring up cost for something that is mainstream-priced today, but sure, let's move on.
> Strictly speaking, the CRT only had discrete lines not pixels.
The electron gun moves in an analog fashion, but when the beam hits the glass surface, it can only pass through specific openings [1]. These openings are placed a specific distance apart [2] (the dot pitch). That distance sets a hard, effectively digital, upper limit on the CRT's horizontal resolution.
> No ordinary flat–panel display can emulate the rectangular pixels of the most common video modes used on CRTs because they are built with square pixels.
Today's panels have achieved "retina" resolution, which means that the human eye cannot distinguish individual pixels anymore. The rest is just software [3].
Yes and no. Half of the screen was refreshing at a time, so it was really flashing at 30Hz. You still had a visible stroboscopic effect. True 60Hz and 100Hz screens appeared in the late 90s and made a visible difference in terms of viewing comfort.
CRT TVs only supported vertical refresh rates of 50Hz or 60Hz, which matched the regional mains frequency. They used interlacing and technically only showed half the frame at a time, but thanks to phosphor decay this added a feeling of fluidity to the image. If you were able to see it strobe, you must have had impressive eyesight. And even if they had supported higher refresh rates, it wouldn't have mattered, as the source of the signal would only ever be 50/60Hz.
CRT monitors used in PCs, on the other hand, supported a variety of refresh rates. Only monitors for specific applications used interlacing; consumer-grade ones didn't, which means you could see a strobing effect here if you ran them at a low frequency. But even the most analog monitors from the 80s supported at least 640x480 at 60Hz, and some programs such as the original DOOM were even able to squeeze 70Hz out of them by running at a different resolution while matching the horizontal refresh rate.
For some reason I remember 83Hz being the highest refresh rate supported by my XGA CRT, but I think it was only running at SVGA (800x600) in order to pull that rate.
Some demos could throw pixels into VRAM that fast, and it was wild looking. Like the 60Hz soap-opera effect but even more so.
I still feel that way looking at >30fps content since I really don't consume much of it.
> some programs such as the original DOOM were even able to squeeze 70Hz out of them by running at a different resolution while matching the horizontal refresh rate.
400p at 70 Hz was the default resolution of the VGA, pretty much all the classic mode 13h games ran at 70 Hz.
The only time the electron guns were not involved in producing visible light was during overscan, horizontal retrace, and the vertical blanking interval. They spent the entire rest of their time (the vast majority of it) busily drawing rasterized images onto phosphors (with their own persistence!) for display.
This resulted in a behavior that was ridiculously dissimilar to a 30Hz strobe light.
Did they really do that, or did the tubes just run at 2x vertically stretched 640x240 with a vertical pixel shift? A lot of technical descriptions of CRTs seem to be adapted from pixel-addressed LCDs/OLEDs, and they don't always capture the design well.
The limiting factor is the horizontal refresh frequency. TVs and older monitors were around 15.75kHz, so the maximum number of horizontal lines you could draw per second is around 15750. Divide that by 60 and you get 262.5, which is therefore the maximum vertical resolution (real world is lower for various reasons). CGA ran at 200 lines, so was safely possible with a 60Hz refresh rate.
If you wanted more vertical resolution then you needed either a monitor with a higher horizontal refresh rate or you needed to reduce the effective vertical refresh rate. The former involved more expensive monitors, the latter was typically implemented by still having the CRT refresh at 60Hz but drawing alternate lines each refresh. This meant that the effective refresh rate was 30Hz, which is what you're alluding to.
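To put rough numbers on the arithmetic above, here's a minimal sketch (assuming the ~15.75kHz horizontal rate and 60Hz field rate already mentioned; real modes lose a few lines to blanking and overscan):

    # Scanline budget for a fixed horizontal scan rate (illustrative numbers only)
    H_SCAN_HZ = 15_750      # ~15.75kHz horizontal rate of NTSC-era TVs and CGA-class monitors
    FIELD_HZ = 60           # vertical refresh (field) rate

    lines_per_field = H_SCAN_HZ / FIELD_HZ      # ~262.5 lines drawn per vertical pass

    # Interlacing draws odd lines on one field and even lines on the next,
    # so a full frame takes two fields and the effective frame rate halves.
    lines_per_frame = lines_per_field * 2       # ~525 lines per interlaced frame
    frame_hz = FIELD_HZ / 2                     # 30 full frames per second

    print(f"{lines_per_field:.1f} lines/field, {lines_per_frame:.0f} lines/frame at {frame_hz:.0f}Hz")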
But the reason you're being downvoted is that at no point was the CRT running with a low refresh rate, and best practice was to use a mode that your monitor could display without interlace anyway. Even in the 80s, using interlace was rare.
Interlace was common on platforms like the Amiga, whose video hardware was tied very closely to television refresh frequencies for a variety of technical reasons which also made the Amiga unbeatable as a video production platform. An Amiga could do 400 lines interlaced NTSC, slightly more for PAL Amigas—but any more vertical resolution and you needed later AmigaOS versions and retargetable graphics (RTG) with custom video hardware expansions that could output to higher-freq CRTs like the SVGA monitors that were becoming commonplace...
CGA ran pretty near 262 or 263 lines, as did many 8-bit computers. 200 addressable lines, yes, but the background color accounted for about another 40 or so lines, and blanking took up the rest.
The irony is that most of those who downvote didn't spend hours in front of those screens as I did. And I do remember these things were tiring, particularly in the dark. The worst of all were computer CRT screens, which weren't interlaced (in the mid-90s, before higher refresh frequencies started showing up).
I spent literally thousands of hours staring at those screens. You have it backwards. Interlacing was worse in terms of refresh, not better.
Interlacing is a trick that lets you sacrifice refresh rates to gain greater vertical resolution. The electron beam scans across the screen the same number of times per second either way. With interlacing, it alternates between even and odd rows.
With NTSC, the beam scans across the screen 60 times per second. With NTSC non-interlaced, every pixel will be refreshed 60 times per second. With NTSC interlaced, every pixel will be refreshed 30 times per second since it only gets hit every other time.
And of course the phosphors on the screen glow for a while after the electron beam hits them. It's the same phosphor, so in interlaced mode, because it's getting hit half as often, it will have more time to fade before it's hit again.
Have you ever seen high speed footage of a CRT in operation? The phosphors on most late-80s/90s TVs and color graphic computer displays decayed instantaneously. A pixel illuminated at the beginning of a scanline would be gone well before the beam reached the end of the scanline. You see a rectangular image, rather than a scanning dot, entirely due to persistence of vision.
Slow-decay phosphors were much more common on old "green/amber screen" terminals and monochrome computer displays like those built into the Commodore PET and certain makes of TRS-80. In fact there's a demo/cyberpunk short story that uses the decay of the PET display's phosphor to display images with shading the PET was nominally not capable of (due to being 1-bit monochrome character-cell pseudographics): https://m.youtube.com/watch?v=n87d7j0hfOE
Interesting. It's basically a compromise between flicker and motion blur, so I assumed they'd pick the phosphor decay time based on the refresh rate to get the best balance. So for example, if your display is 60 Hz, you'd want phosphors to glow for about 16 ms.
But looking at a table of phosphors ( https://en.wikipedia.org/wiki/Phosphor ), it looks like decay time and color are properties of individual phosphorescent materials, so if you want to build an RGB color CRT screen, that limits your choices a lot.
Also, TIL that one of the barriers to creating color TV was finding a red phosphor.
There are no pixels in a CRT. The guns go left to right, \r\n, left to right, while True for line in range(line_number).
The RGB stripes or dots are just stripes or dots; they're not tied to pixels. There would be three RGB guns physically offset from each other, coupled with a strategically designed mesh plate, in such a way that electrons from each gun sort of moiré into hitting only the right stripes or dots. Apparently fractions of an inch of offset were all it took.
The three guns, really more like fast-acting lightbulbs, received brightness signals for their respective RGB channels. Incidentally that means they could go between zero and max brightness a couple of times over 60[Hz] * 640[px] * 480[px] or so (roughly an 18MHz pixel rate).
Interlacing means the guns draw every other line, but not necessarily every other pixel, because CRT beams have a finite spot size, at least.
No, you don't sacrifice refresh rate! The refresh rate is the same. 50 Hz interlaced and 50 Hz non-interlaced are both ~50 Hz, approx 270 visible scanlines, and the display is refreshed at ~50 Hz in both cases. The difference is that in the 50 Hz interlaced case, alternate frames are offset by 0.5 scanlines, the producing device arranging the timing to make this work on the basis that it's producing even rows on one frame and odd rows on the other. And the offset means the odd rows are displayed slightly lower than the even ones.
This is a valid assumption for 25 Hz double-height TV or film content. It's generally noisy and grainy, typically with no features that occupy less than 1/~270 of the picture vertically for long enough to be noticeable. Combined with persistence of vision, the whole thing just about hangs together.
This sucks for 50 Hz computer output. (For example, Acorn Electron or BBC Micro.) It's perfect every time, and largely the same every time, and so the interlace just introduces a repeated 25 Hz 0.5 scanline jitter. Best turned off, if the hardware can do that. (Even if it didn't annoy you, you'll not be more annoyed if it's eliminated.)
This also sucks for 25 Hz double-height computer output. (For example, Amiga 640x512 row mode.) It's perfect every time, and largely the same every time, and so if there are any features that occupy less than 1/~270 of the picture vertically, those fucking things will stick around repeatedly, and produce an annoying 25 Hz flicker, and it'll be extra annoying because the computer output is perfect and sharp. (And if there are no such features - then this is the 50 Hz case, and you're better off without the interlace.)
I decided to stick to the 50 Hz case, as I know the scanline counts - but my recollection is that going past 50 Hz still sucks. I had a PC years ago that would do 85 Hz interlaced. Still terrible.
I think you are right. I had the LC III and Performa 630 specifically in mind. For some reason I remember they were 30Hz, but everything I find googling it suggests they were 66Hz (both video card and screen refresh).
That being said, they were horrible on the eyes, and I think I only got comfortable when 100Hz+ CRT screens started being common. It's just that the threshold for comfort was higher than I remembered, which explains why I didn't feel any better in front of a CRT TV.
Could it be that you were on 60Hz AC at the time? That is near enough to produce a beat effect ("Schwebung") when artificial lighting is used, especially with fluorescent lamps like those common in offices. They need to be phase-compensated ("phasenkompensiert"), meaning they have to be on a different phase of the mains electricity than the computer screens are on. Otherwise even not-so-sensitive people notice it as interference / a sort of flickering. It happens less when you are on 50Hz AC and the screens run at 60Hz, but with fluorescents on the same phase it can still be noticeable.
And the manufacturers are on a quest to remove as many keys as they can from the keyboard. Like you can hardly find any light laptop today with Page Up/Down keys anymore. Why? Haven't these guys heard of keyboard shortcuts?
Yes, it's a miracle that after 40 years of typing every day, my fingers still work. But that may be a biased view on my part; there may be lots of programmers out there with arthritis in their fingers, carpal tunnel syndrome, and other occupational diseases.
Worse than that, there's no consistency in Fn+key shortcuts. Recently acquired an HP Ergonomic Keyboard as a replacement for a broken Sculpt, only to find out that it literally cannot send Ctrl+Break -- there's no key for it, no Fn+key shortcut for it and the remapping software doesn't simulate it properly.
The keyboard I was mentioning isn't a laptop keyboard, actually, but laptop keyboards tend to be in a slightly better spot as the major vendors typically have Fn shortcuts for the missing keys, like Fn+B for Break, and they also document them in the user guides.
Detached keyboards seem to be more of a wild west, especially when they target multiplatform -- and it's always the stuff they don't document that screws you.
I dunno, I actually prefer Fn+Up/Dn. I just find it more logical, and it feels standard to me now. I press them surely hundreds of times a day and have no problem with it.
Nothing tops Apple's infantile refusal to put a (real) Delete key on their laptops. Instead, they have a Backspace key mislabeled "delete."
When the Eject key became obsolete, Apple had a perfect opportunity to fix this omission with essentially no effort. NOPE. Meanwhile, everybody else managed to have a proper Delete key on their laptops.
A hill that I'll die on is that Apple's terminology is more correct than PC terminology for this.
Backspace makes sense if you see the computer as a fancy typewriter.
Delete makes sense if you consider the actions from first principles.
Consider the various forms of deletion (forward, backward, word, file deletion, etc.). Each of these just has a modifier key in Apple's way of thinking (none, Fn, Option, Cmd), which makes complete sense when viewed against how consistent it is with the whole set of interface design guidelines for Apple software.
The only reason that this doesn't make sense is that it's incompatible with your world view brought from places with different standards. They will never "fix" this as there's just nothing to fix.
> Backspace makes sense if you see the computer as a fancy typewriter.
Backspace on a typewriter only moved the position (~cursor) back one space. Hence why its symbol is the same as the left arrow key's.
Backwards Delete was a separate additional key, if the typewriter even had one, and its symbol was a cross inside an outlined left-arrow: ⌫.
Current Apple keyboards have this symbol on the "Backspace" key in some regions, instead of the text "delete", but older ones did have the left arrow.
Apple calling it "Delete" goes back to Apple II. Many other older computer platforms also called it "Delete". DEC used the ⌫ symbol.
At least you don't have to type the same letters while holding a thin tape over your screen to erase them!
Apple also had separate Return and Enter symbols on keyboards for a while, which also sounds like typewriter territory but their intended use was a bit different: https://creativepro.com/a-tale-of-two-enter-keys/
Not many people use forward-deleting. I find it much easier to just Fn+Backspace anyways, especially when Del is usually part of the shorter function row that you really have to stretch for.
And delete is a perfectly fine name -- it deletes the character you just typed. I've always thought the supposed distinction between backspace and delete was bizarre. If anything, it's the forward-delete that needs a better term, like... well, forward-delete. Fwd-Del.
It's just deleting. And that's a questionable assertion for which you've provided no support. You seriously think people Backspace old E-mails away? They Backspace unwanted files away? They Backspace selected areas away in Photoshop? OK.
"I find it much easier to just Fn+Backspace"
Except most people don't find that at all, because it's not marked on the keyboard. And again, you're asserting that a secret, two-keyed, two-handed hotkey is easier than pressing a clearly marked button?
If you watch real users when they're faced with the lack of Delete, they use the arrow keys to move the cursor across the characters they want to delete, and then Backspace them away. Twice as much work. Or they reach for the mouse or trackpad and tediously highlight the characters to delete.
And there is no separate function row on Apple laptops. The Eject key was right above the Backspace key... easily reachable.
> And that's a questionable assertion for which you've provided no support.
You're the one who's provided zero evidence that the Del key is used with any appreciable frequency at all. And the fact that Apple doesn't even bother to include one strongly suggests it's rarely used. You're literally the first person I've ever heard even complain about it. Since you've started this topic, if you want evidence from someone else, you really ought to start by providing your own.
> You seriously think people Backspace old E-mails away? They Backspace unwanted files away? They Backspace selected areas away in Photoshop? OK.
Um, yes? If you insist on calling it Backspace, the key that deletes the previous character is also the key that deletes e-mails in Mail.app, that deletes files in Finder (with Cmd), and that deletes the selected area in Photoshop on a Mac. Which is why it also makes sense that it's called Delete on a Mac. It's all extremely consistent and logical.
> Except most people don't find that at all, because it's not marked on the keyboard.
And most people don't need to, because they never want to use it anyways, even when it's a dedicated key wasting space on the keyboard.
> And again, you're asserting that a secret, two-keyed, two-handed hotkey is easier than pressing a clearly marked button?
Yes, because the Del position on most PC laptops is awkwardly far away and smaller than Backspace. If you find two hands or two keys difficult, are capital letters with Shift hard for you?
> And there is no separate function row on Apple laptops.
I don't know what that means? Apple laptops certainly have a function row, which is where the Eject button you're talking about has always been. And where the Eject key was is where the TouchID button is now.
> ... easily reachable.
Eject/TouchID is one of the two farthest keys on the keyboard, the polar opposite of "easily reachable". There is literally no position less reachable on the keyboard. It's not ergonomic to make it something used in regular text editing, if you're one of the few people who utilize forward delete.
"You're the one who's provided zero evidence that the Del key is used with any appreciable frequency at all."
I never said it was. You're the one who pompously declared the opposite. I merely pointed out an easily-verifiable fact: Apple neglects to provide it.
But since you've exposed yourself to statistics-based ridicule now, I'll lazily rely on Google's so-called "AI"-based indictment of your absurd position:
"Apple's global PC market share generally hovers around 8% to 10%"
This indicates that 90% of the world's computer-using population apparently DOES find Delete to be a compellingly distinct function from Backspace, and sees fit to include a dedicated key for it on its keyboards.
So you can continue to protest and cry about the harmless inclusion of a useful key that doesn't impede YOUR mode of operation at all, while the vast majority of the computer-using world demonstrates its disagreement with you by including it.
How about you lay off the insulting language like "pompous" and "ridicule" and "protest and cry"? It's completely inappropriate for HN, and demonstrates a severe lack of maturity on your part. I think you can be better than that. Maybe re-read:
I don't know what you're bringing up market share for. The idea that most people buy non-Apple because it has a DEL key is not plausible. Like INS, it's a vestigial key maintained mainly for backwards compatibility with legacy enterprise software used by a tiny minority of businesses. Not for everyday use by normal users.
Now, you started this conversation by complaining about the lack of a DEL key, yet you're the one going on about how I'm continuing to "protest and cry"? Honestly, you might need to look in the mirror there. You're the one asking for a feature almost nobody uses, and all I'm doing is pointing that out. It's much better to respond to disagreement in a productive way by engaging in substance, not defensively by hurling insults.
To reiterate: no, it shouldn't be included on Macs because it's completely and utterly unnecessary. If you need Del functionality, just use the Fn modifier. That's what it's there for. And it's more ergonomic, as established.
"I don't know what you're bringing up market share for."
Says the guy who declared, without evidence, that "Not many people use forward-deleting."
And who, after complaining about my digging-up of statistics, doubles down by crowing about "a feature almost nobody uses," again without any evidence.
See, when rebutting an argument, you gain credibility if you at least make an effort to back up your assertion with facts. It's a fun and useful exercise, because everybody learns something... if they're willing to.
Meanwhile, your comments provide an amusing clinic on hypocrisy.
> Meanwhile, your comments provide an amusing clinic on hypocrisy.
Have you not noticed that you're also providing zero facts? Again, I suggest you look in the mirror. Do you somehow think that when you make a claim you don't need facts, but when people disagree with you they do...?
There aren't any actual studies on rates of usage of the DEL key. So I don't know what you're expecting.
I suggest, in the future, that you don't apply such double standards, where you demand empirical evidence from everyone else, but neglect to give any yourself. It's not a good look, and you aren't going to gain much respect doing it. Hopefully you can learn from this exchange and be better than that in the future. Good luck.
> Except of course that I didn't make any claim requiring statistics, while you did.
You're criticizing Apple for not having a Del key. Presumably this is based on the idea that people mostly want a Del key. Which would need to be based on statistics, just as much as my claim that they mostly don't.
The only alternative would be if you thought Apple's "infantile refusal to put a (real) Delete key on their laptops" was their refusal to cater to just you personally. I'm assuming you're not that much of a narcissist?
> And yet... I'm the one who DID supply statistics, which you ignore...
I didn't ignore anything. I already responded directly to what you said about market share, and explained how it's irrelevant and why. Irrelevant numbers aren't any better than no numbers at all.
> ...in your maniacal histrionics.
Perhaps you don't just need to read the HN guidelines again, but bookmark them and re-read before each comment you post. Also maybe check the dictionary, since you don't seem to know what those words mean? They don't just mean someone who disagrees with you.
Oh yeah, they sometimes put Page Up and Down on Up and Down, which infuriates me very much. There are other issues like fewer USB ports, but overall quality is poor compared to MacBooks.
Agree, but you quickly run into its limitations. Like if you 3D print something, you need to eliminate sharp edges where possible. That's not fun to do with OpenSCAD.
140 pages on coding style. This looks straight out of the CIA handbook for sabotage [1]. I am sure China or Russia have a version of that.
> (12) Multiply paper work in plausible ways. Start duplicate files.
> (13) Multiply the procedures and clearances involved in issuing instructions, pay checks, and so on. See that three people have to approve everything where one would do.
What's the right amount of standards to have when you're writing 9 million lines of code that controls a 30,000 lb machine moving through the sky at Mach 1 with a human life inside?
Funnily enough, when I look at my codebase, I often think about this handbook. I deliberately try to ascribe it to incompetence, but I always have a doubt. If I only listened to my inner voice, I'd fire everyone all the time.
One thing I don't understand with Windows Server is that, no matter how fast the NVMe drives I use or how I pair/pool them, I can't get a normal file copy to go faster than around 1.5GB/s (that's local, no network). The underlying disks show multi-GB/s performance under CrystalDiskMark, so I suspect something in the OS must be getting in the way.
Your system ~3+ years old or so? Your story screams a DMI 3.0 or similar PCH/PCIe-switch saturation issue. DMI 3.0 caps at ~3.5GB/s, but at about 1.5-1.7GB/s when bidirectional, such as drive to drive. If 100% reads hit about 3.5GB/s, then you've all but confirmed it. ~1.5GB/s bidirectional is a super common issue for a super common hardware combination.
It’ll happen if your U.2 ports route through the DMI 3.0 PCH/chipset/PCIe switch rather than directly to the CPU PCIe lanes. Easiest to just check the motherboard manual, but you can use HWiNFO to inspect the PCI tree and see if your U.2 ports sit under a "chipset"-labeled node. You might have different ports on the mobo that are direct, or possibly BIOS changes to explore. Sometimes lots of options, sometimes none. Worst case, a direct PCIe adapter will resolve it.
Actually more than 3y old. My servers are EPYC Gen 2 and Gen 3 mostly.
But I don't think it is a hardware thing, as in CrystalDiskMark (which I believe bypasses the Windows write cache) I see performance that is close to the SSD specs. But when I do a Windows copy, the performance goes down the drain.
And it's not just parallelism. If I copy one file from a fast NVMe to another fast NVMe, it caps at about 1.5GB/s. If I copy two files in parallel, they seem to split that speed, even if file1 is copied from diskA to diskB while file2 is copied from diskC to diskD.
Ahh. EPYC doesn't use DMI, so there’s an easy elimination, and it has enough PCIe lanes that switches only come up on boards supporting oodles of drives. It’s still worth checking your mobo manual to make sure there isn’t a specific mention of port selection related to PCIe switches, or a silly BIOS option.
There might be some confusion that’s holding you back on diagnostics, though. If a PCIe switch were in the path and causing the issue, then there would be no meaningful difference between “parallel” and “bidirectional.” The nature of the switch is that all the traffic goes through it, it has a top speed, and that speed is split among the operations. A read or a write to a single drive gets full bandwidth, but copying from one to another is a read and a write, so each part gets 50%. Even on the same drive, write/read also get 50% each. Include other drives on the same switch and divide it again. “And” = divide.
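To make that "And = divide" point concrete, here's a toy sketch (the ~3.5GB/s ceiling is the assumed DMI-3.0-class figure from earlier, not a measurement):

    # Naive model: every concurrent read/write stream crossing a shared link splits it equally
    LINK_GBPS = 3.5   # assumed ceiling of the shared chipset link / PCIe switch uplink

    def per_stream_gbps(streams: int, link_gbps: float = LINK_GBPS) -> float:
        """Equal split of the link across concurrent streams."""
        return link_gbps / streams

    # Copy from drive A to drive B behind the same link = 1 read + 1 write stream
    print(per_stream_gbps(2))   # ~1.75 GB/s, close to the ~1.5 GB/s being observed
    # Two copies in parallel across four drives behind the same link = 4 streams
    print(per_stream_gbps(4))   # ~0.9 GB/s each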
Your platform makes that specific issue less likely, but hardware can be a bit more quirky than you might be giving it credit for.
Of course you’ve turned off Windows Defender real-time scanning, but otherwise I can’t think of an on-by-default reason for Windows Server to cause that issue; it isn’t inherent in the OS. I’ve used multiple versions in dozen-GB/s arrangements - 1.5GB/s was beatable 20 years ago. There’s something going on with your settings or hardware. Good news is it might be fixable; bad news is you’ll have to go trekking around to find the issue. :/
No PCIe switch involved, every SSD is directly connected to the PCIe ports, the MB supports bifurcation. Observed on H11SSL-i and H12SSL-i motherboards with various models of SSDs (mostly Intel, Micron and Samsung). Windows defender switched off.
Hyper-V is on though. And I understand that when you switch on Hyper-V, even the host OS is really a VM on top of the hypervisor. So I wonder if that's not contributing (but disabling Hyper-V is not an option).
When you say 1.5GB/s was beatable, how do you measure it? Windows copy (or xcopy) or some benchmarking software?
In addition to my other comments about parallel IO and unbuffered IO, be aware that WS2022 has (had?) a rather slow NVMe driver. It has been improved in WS2025.
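If you want a quick sanity check outside Explorer/xcopy, a rough sketch like this times a plain sequential user-space copy (the paths are hypothetical, and note it still goes through the Windows cache manager, unlike CrystalDiskMark's unbuffered tests, so use a file much larger than RAM):

    import time

    SRC = r"D:\bigfile.bin"        # hypothetical source file on one NVMe
    DST = r"E:\bigfile_copy.bin"   # hypothetical destination on another NVMe
    BUF = 8 * 1024 * 1024          # 8 MiB chunks; worth trying a few sizes

    start = time.perf_counter()
    copied = 0
    with open(SRC, "rb") as src, open(DST, "wb") as dst:
        while chunk := src.read(BUF):   # buffered read/write, Python 3.8+
            dst.write(chunk)
            copied += len(chunk)
    elapsed = time.perf_counter() - start

    print(f"{copied / elapsed / 1e9:.2f} GB/s over {copied / 1e9:.1f} GB")

If that hits the same ~1.5GB/s wall, the limit probably isn't the Explorer copy engine itself.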
If it's over SMB/Windows file sharing then you might be looking at some kind of latency-induced limit. AFAIK SMB doesn't stream uploads; they occur as a sequence of individual write operations, which I'm going to guess also produce an acknowledgement from the other end. It's possible something like this (say, the client waiting for an ACK before issuing a new pending IO) is responsible.
What does iperf say about your client/server combination? If it's capping out at the same level then networking, else something somewhere else in the stack.
I noticed recently that OS X file IO performance is absolute garbage because of all the extra protection functionality they've been piling into newer versions. No idea how any of it works; all I know is some background process burns CPU just from simple operations like recursively listing directories.
The problem I describe is local (U.2 to U.2 SSD on the same machine, drives that could easily perform at 4GB/s read/write, and even when I pool them in RAID0 into arrays that can do 10GB/s).
Windows has weird behaviors around copying. For example, if I pool some SAS or NVMe SSDs in Storage Spaces parity (~RAID5), the performance in CrystalDiskMark is abysmal (~250MB/s), but a Windows copy will be stable at about 1GB/s over terabytes of data.
So it seems that whatever they do hurts in certain cases and severely limits the upside as well.
I think the point is that if you are not Netflix, you can still use AV1, because most of your clients' devices support hardware acceleration thanks to the big guys using AV1 themselves.