No, I think I wrote something in C (it was written a while ago) accepting the connection and then discarding it in such a way that the RST/FIN was never sent, making sure to clean up the socket server-side.
I guess a timeout will need to be adjusted/implemented on the bot's end; I remember fixing a similar bug at work and it was quite involved. At any rate, at the very least the connection was made and discarded.
I guess the iptables solution would also work well, and you would have a correctly working server side.
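For the iptables variant, the detail that matters is DROP rather than REJECT: DROP sends nothing back (no RST, no ICMP error), so the client just hangs until its own timeout fires. A sketch in iptables-save format; the file path and address range are made-up examples:

```
# /etc/iptables/rules.v4 excerpt (path and source range are examples)
# DROP, not REJECT: nothing goes back to the client, it times out on its own.
-A INPUT -p tcp -s 203.0.113.0/24 --dport 443 -j DROP
```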
I wish this wasn't necessary, but the next steps forward are likely:
a) Have a reverse proxy that keeps a "request budget" per IP and per net block; instead of blocking requests (which causes the client to rotate its IP), the requests get throttled/slowed down without being dropped.
b) Write your API servers in more efficient languages. According to their GitHub, their backend runs on Perl and Python. These technologies have been "good enough" for quite some time, but under the current circumstances, and until a better solution is found, that may no longer be true: performance and CPU cost per request do matter these days.
c) Optimize your database queries, remove as much code as possible from your unauthenticated GET request handlers, require authentication for the expensive ones.
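Option (a) boils down to a per-key token bucket where a depleted bucket translates into a delay rather than a rejection. A minimal sketch (class name, rates, and keys are invented for illustration):

```python
import time
from collections import defaultdict

class Throttle:
    """Per-key token bucket: instead of rejecting over-budget requests,
    report how long the caller should stall before serving them."""

    def __init__(self, rate=1.0, burst=2.0):
        self.rate = rate    # tokens refilled per second
        self.burst = burst  # bucket capacity
        # key -> (remaining tokens, timestamp of last update)
        self._state = defaultdict(lambda: (burst, time.monotonic()))

    def delay_for(self, key):
        tokens, last = self._state[key]
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        tokens -= 1.0  # charge this request
        self._state[key] = (tokens, now)
        # A negative balance means we owe time: stall instead of dropping.
        return max(0.0, -tokens / self.rate)
```

In a real proxy you would call this twice per request, once keyed on the client IP and once on its net block (e.g. the /24), then sleep for the larger of the two delays before forwarding.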
The second argument doesn't really work out in practice. We have a quarter century of knowledge about SQL injection at this point, yet it keeps happening.
Instead of trying to educate everybody about how to safely use error-prone programming abstractions, we should de-normalize their use and come up with more robust ones. You don't need in-depth exploit-development skills to write secure Rust code.
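To make the "robust abstraction" point concrete: a parameterized query removes the whole bug class, because the value travels out-of-band and is never parsed as SQL. A minimal sqlite3 sketch (table, names, and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_unsafe(name):
    # String interpolation: attacker-controlled input becomes SQL text.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Placeholder: the driver passes the value separately, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
assert lookup_unsafe(payload) == [("hunter2",)]  # injection dumps the table
assert lookup_safe(payload) == []                # treated as a literal name
```

The safe version is also less code than careful manual escaping would be, which is the whole argument: the robust abstraction should be the path of least resistance.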
Unfortunately, there's more money to be made selling security consulting if people stick to the error-prone ones.
The rootkit runs in ring 0; at that point all kernel-enforced security controls are potentially compromised. Instead, you need to prevent the kernel module from being loaded in the first place. There are multiple ways to ensure no further kernel modules can be loaded without rebooting the computer, e.g. by having pid 1 drop CAP_SYS_MODULE from its bounding set before starting any child processes. Once the module has been loaded, it's too late to do anything about the integrity of your system.
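Besides dropping CAP_SYS_MODULE, Linux also exposes a one-way sysctl for this: once set, it cannot be cleared again without a reboot. A config sketch (the file path is an example):

```
# /etc/sysctl.d/99-lockdown.conf (example path)
# One-way switch: no further module loading or unloading until reboot.
kernel.modules_disabled = 1
```

The catch is the same in both cases: every legitimate module (storage, network, filesystem drivers) has to be loaded before you flip the switch.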
That is a critical observation. Last time I had to root an Android device, it had pretty robust defenses like dm-verity and strict (correctly configured) SELinux policies, and then everything collapsed because the system loaded an exfat kernel module from an unverified filesystem.
Permitting user-loaded kernel modules effectively invalidates all other security measures.
What would it be checking against? There's no central signing authority the way there is with Windows. (I mean I guess a distro could implement that but then how would I load my own custom modules?)
The kernel provides the option to embed a signing key for kernel modules at compile time. But (AFAIK) you'll need to compile your own kernel to go that route.
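For reference, the relevant build options look roughly like this (a sketch of a `.config` fragment; the key path shown is the default the build system generates if you don't provide your own):

```
# Support and enforce module signatures; sign everything during the build.
CONFIG_MODULE_SIG=y
CONFIG_MODULE_SIG_FORCE=y
CONFIG_MODULE_SIG_ALL=y
# Generated at build time if the file does not already exist.
CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
```

With MODULE_SIG_FORCE the kernel refuses unsigned or badly signed modules outright; without it they load but taint the kernel, which answers the "how would I load my own custom modules" question: you sign them with your own key.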
The author seems a little lost tbh. It starts with "your users should not all clone your database", which I definitely agree with, but that doesn't mean you can't encode your data in a git graph.
It then digresses into implementation details of GitHub's backend (how are 20k forks relevant?), then complains about default settings of the "standard" git implementation. You don't need to check out a git working tree to have efficient key-value lookups. Without a working tree, you don't need to worry about filesystem directory limits, case sensitivity, or path-length limits.
I was surprised the author believes the git-equivalent of a database migration is a git history rewrite.
What do you want me to do, invent my own database? Run postgres on a $5 VPS and have everybody accept it as single-point-of-failure?
> Run postgres on a $5 VPS and have everybody accept it as single-point-of-failure
Oh how times have changed. Yes, maybe run two $5 VPSs behind a load balancer for HA so you can patch, and then put a CDN in front to serve the repository content globally to everyone. Sign the packages cryptographically so you can invite people in your community to become mirrors.
How do people think PyPI, RubyGems, CPAN, Maven Central, or distro Packages work?
The target audience for the article is people building these systems, i.e. the people who would have to pay for the centralized infrastructure.
With git there's a sync protocol built in that allows anybody who's interested to pull a copy of the index (this shouldn't be the default distribution model for the package clients, but anybody who truly wants it can pull it). PyPI keeps its index private, and you'd have to scrape all the data through a heavily rate-limited API.
The problem is that "once the package is qualified to be included in Debian" is _mostly_ about "has the package metadata been filled in correctly" and the fact that all your build dependencies also need to be in Debian already.
If you want a "simple custom repository" you likely want to go in a different direction and explicitly do things that wouldn't be allowed in the official Debian repositories.
For example, dynamic linking is easy when you only support a single Debian release, or when the Debian build/packaging infrastructure handles it for you. But if you run a custom repository, you either need a package for each Debian release you care about (and an understanding of things like `~deb13u1` to make sure your upgrade paths work correctly), or you use static binaries (which is what I do for my custom repository).
Out of all the microcontrollers I've worked with, only a single one had AES CPU instructions.