What did you still need to connect with 10 Mbit half duplex in 2014? I had gigabit to the desktop for a relatively small company in 2007; by 2014, 10 Mbit was pretty dead unless you had something Really Interesting connected...
If you work in an industrial setting, legacy tech abounds due to the capital costs of replacing the equipment it supports (this includes manufacturing, older hospitals, power plants, etc.). Many of these even still use token ring, coax, etc.
One co-op job at a manufacturing plant I worked at ~20 years ago involved replacing the backend core networking equipment with more modern ethernet kit, but we had to set up media converters (in that case token ring to ethernet) as close as possible to the manufacturing equipment, so that token ring only ran the few meters between the equipment and the media converter.
They were "lucky" in that:
1) the networking protocol supported by the manufacturing equipment was IPX/SPX, so at least that worked cleanly over ethernet and with the newer upstream control software (running on HP-UX at the time)
2) there were no lives at stake (e.g., nuclear safety or a hospital), so they had minimal regulatory issues.
There is always some legacy device with weird/old connection requirements. I distinctly remember that the debit card terminals in the late '00s required a 10 Mbit-capable ethernet connection that allowed X.25 to be carried over the network. It is not a stretch to add 5 to 10 more years to those kinds of devices.
Technical debt goes hard. I had a discussion with a facilities guy about why they never got around to ditching the last remnants of token ring in an office park. Fortunately, in 2020 they had plenty of time to rip that stuff out without disturbing facility operation. Building automation, security and so on often live way longer than you'd dare to plan for.
Everyone is forgetting that TCP_NODELAY is per application, not a system configuration. Yep, old things will still be old and that's OK. That newfangled packet farter will need to set TCP_NODELAY itself, which is already the default in many scenarios. This article reminds us it is a thing, and that's especially true for home-grown applications.
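For anyone who hasn't had to do it, here's a minimal sketch of what "per application" means in practice, using Python's standard socket module and a placeholder host/port (the option is set on each socket your program opens, not machine-wide):

    import socket

    # TCP_NODELAY is a per-socket option: each application (really, each
    # socket) has to opt out of Nagle's algorithm itself.
    sock = socket.create_connection(("example.com", 80))  # placeholder host
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    # Confirm it took effect on this socket only; other sockets on the
    # same machine are unaffected.
    print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))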
This hasn't mattered in 20 years for me personally, but in 2003 I killed connectivity to a bunch of Siemens 505-CP2572 PLC ethernet cards by switching a hub from 10Mbps to 100Mbps mode. The button was right there, and even back then I assumed there wouldn't be anything requiring 10Mbps any more. The computers were fine but the PLCs were not. These things are still in use in production manufacturing facilities out there.
There are plenty of use cases for small things that don't need any sort of speed, where you might as well have used a 115200 baud serial connection but ethernet is more useful. Designing electronics for 10 Mbit/s is infinitely easier and cheaper than designing electronics for 100 Mbit/s, so if you don't need 100 Mbit/s, why would you spend the extra effort and expense?
There is also power consumption and reliability. I have part of my home network on 100 Mbps; it eats about 60% less energy than gigabit ethernet and is less prone to interference from PoE.
Some old DEC devices that were used to connect the console ports of servers. Didn't need it per se, but also didn't need to spend $3k on multiple new console routers.
It was an old ISP/mobile carrier, so you could find all kinds of old stuff. Even the first SMSC from the '80s (also DEC, a 386 or similar CPU?) was still in its racks, because they didn't need the rack space: 2 modern racks used up all the power for that room. It was also far down in a mountain, so removing equipment was annoying.
Thanks for the clarification. They're so close to being the same thing that I always call it CSMA/CD. Avoiding a collision is far preferable to just detecting one.
Yeah, many enterprise switches don't even support 100Base-T or 10Base-T anymore. I've had to daisy-chain an old switch that supports 100Base-T onto a modern one a few times myself. If you drop 10/100 support, you can also drop half-duplex support. In my junk drawer, I still have a few old 10/100 hubs (not switches), which are by definition always half duplex.
Is avoiding a collision always preferable? CSMA/CA has significant overhead (the backoff period) for every single frame sent; on a lightly congested line, CSMA/CD has less overhead.
CSMA/CD only requires that you back off if there actually is a collision. CSMA/CA additionally requires that, for every frame sent, after sensing the medium as clear, you wait a random amount of time before sending, to avoid collisions. If the medium is frequently clear, CA still carries the overhead of this initial wait where CD does not.
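A toy back-of-the-envelope model of that difference, with made-up slot counts and collision probabilities rather than real 802.3/802.11 parameters, just to show where CA's extra per-frame cost comes from:

    import random

    SLOT = 1  # arbitrary time unit, not a real 802.3/802.11 slot time

    def cd_overhead(p_collision):
        # CSMA/CD: transmit as soon as the medium is clear; only back off
        # (a random number of slots) when a collision actually happens.
        overhead, attempts = 0, 0
        while random.random() < p_collision:
            attempts += 1
            overhead += random.randint(0, 2 ** min(attempts, 10) - 1) * SLOT
        return overhead

    def ca_overhead(p_collision):
        # CSMA/CA: pay a random backoff before *every* transmission,
        # even the very first one on an idle medium.
        overhead, attempts = random.randint(0, 15) * SLOT, 0
        while random.random() < p_collision:
            attempts += 1
            overhead += random.randint(0, 2 ** min(attempts + 4, 10) - 1) * SLOT
        return overhead

    def average(fn, p, n=50_000):
        return sum(fn(p) for _ in range(n)) / n

    # On a mostly idle segment, CD's average overhead approaches zero,
    # while CA still pays its initial random backoff on every frame.
    for p in (0.01, 0.3):
        print(p, average(cd_overhead, p), average(ca_overhead, p))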
Depending upon how it's actually implemented, CSMA/CA may have the same (unintended?) behavior as CSMA/CD, in the sense that setting TCP_NODELAY would also set the backoff timer to zero. It would be interesting to test.
Like he said, he didn't know anything about Project 2025?
Steve Bannon is the one working on this; he has said they have a plan to do it. Trump himself seems to believe that if the country is at war, elections are postponed, because that is how it works in Ukraine. Ergo Venezuela.
> “There is no such thing as a Jewish terrorist,” said Limor Son Har-Melech, of the far-right Otzma Yehudit party, as she and the other noose-wearing supporters insisted the measure will deter militant attacks.
IRC is such a simple protocol, with implementations under 1k LOC in many languages, that I assume the bot-building process must be simple too compared to Signal.
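As a rough illustration of how little it takes, a sketch of a hello-world bot over raw sockets (the server, port, channel, and nick here are placeholders; no TLS, registration, or error handling):

    import socket

    HOST, PORT = "irc.example.net", 6667   # hypothetical network
    NICK, CHANNEL = "hellobot", "#test"    # hypothetical nick/channel

    sock = socket.create_connection((HOST, PORT))
    sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :{NICK}\r\n".encode())

    buf = b""
    while True:
        buf += sock.recv(4096)
        while b"\r\n" in buf:
            line, buf = buf.split(b"\r\n", 1)
            msg = line.decode(errors="replace")
            if msg.startswith("PING"):
                # Answer the keepalive or the server drops the connection.
                sock.sendall(("PONG" + msg[4:] + "\r\n").encode())
            elif " 001 " in msg:
                # 001 is the welcome numeric; safe to join a channel now.
                sock.sendall(f"JOIN {CHANNEL}\r\n".encode())
            elif "PRIVMSG" in msg and "!hello" in msg:
                sock.sendall(f"PRIVMSG {CHANNEL} :hello, world\r\n".encode())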
I have built bare-minimum hello-world bots in SimpleX and Session, and I think both had a lot of trouble spots to work through. If someone's interested, they can look at SimpleX for bot creation, but they've started adding client-side verification/alerting on content, which is admittedly a very honeypot-like activity and a slippery slope in itself.
Signal has had some of the fewest controversies even though it's centralized. Matrix is another good one, and personally I sort of prefer Matrix because all these other protocols require apps, whereas Matrix can work on top of a browser and thus has more widespread adoption, imo.
XMPP is another good protocol, and pardon me for yapping at this point, but I once saw someone punch through NAT using XMPP and use it to expose a website endpoint, which was good too. Personally, though, I feel like Signal is the most trustworthy overall. I wish someone would make Signal bot creation as genuinely simple as Telegram bot creation, because there is a lot of potential there.
Unfortunately, Linux requires zero effort to create cheats on; you might as well run no anti-cheat. And the root stuff is overblown, since user-space programs can already read all of that user's files and process memory. How many people bother with multiple users?
Not all gamers are playing games where cheating is an issue. It's really only the MOBA Call of Battlefield AAA crowd who care about that. That's not the largest group of gamers, and certainly not the largest market for games.
Fortnite and Call of Duty are consistently the #1 and #2 games every year. The others like GTA, Battlefield, League of Legends and Valorant also have anti-cheat that blocks Linux. It's not a minor issue.
The top game tag by sales [0] is #singleplayer, which obviously doesn't care about anti-cheat.
There's a demographic of gamers who only play the one competitive multiplayer game (such as Fortnite or CoD). They don't buy many games, and they're not the most lucrative market for game publishers, but they do keep those titles in business. And yes, for them, anti-cheat is important and they're unlikely to move to Linux.
The pushback on kernel-level anti-cheat on security grounds has always felt odd to me. If you don't trust them to run kernel-level code, why do you trust them to run usermode code as your user? A rogue anti-cheat could still do enormous damage in usermode, running as your user, no kernel access required.
Being in kernel mode does give the rogue software more power, but the threat model is all wrong. If you're against kernel anti-cheat, you should be against all anti-cheat. At the end of the day you have to choose to trust the software author no matter where the code runs.
It isn't about what I allow them to run on my computer, it's about what they don't allow me to run on my own goddamn computer. You can't run a modded BIOS, a self-compiled kernel, or unsigned drivers with secure boot enabled.