merpkz's comments

As a sysadmin in the 2020-2024 time frame I used to do that all the time at my previous job; I had a strong openssl CLI game going whenever I needed to generate a new CSR for an existing or new key and shovel the exact set of SANs into the CSR too. A lot of time wasted. There was also a certain set of customers whose systems we managed who insisted it be done this way, because something free on the internet was not to be trusted. Oh well, strange times.
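For anyone curious, the routine looked something like this (a sketch; the key name and hostnames are placeholders, and -addext needs OpenSSL 1.1.1 or newer):

    # new key + CSR with the SANs baked in, in one shot
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout service.key -out service.csr \
      -subj "/CN=service.example.com" \
      -addext "subjectAltName=DNS:service.example.com,DNS:alt.example.com"

    # or generate a CSR for an existing key
    openssl req -new -key service.key -out service.csr \
      -subj "/CN=service.example.com" \
      -addext "subjectAltName=DNS:service.example.com"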

I have also heard, from managers back in the day, a few years ago, the negative that it is somehow "cheap" and that we can "afford" a proper wildcard for our website. Never mind the hours wasted every year changing that certificate in every system out there and always forgetting a few.

Also a valid point from security people: you leak your internal hostnames to Certificate Transparency logs once you get a cert for "internal-service.example.com", and every bot in existence will know about it and try to poke it.

I solved these problems by just not working with people like that anymore, and by getting a wildcard Let's Encrypt certificate (*.example.com) for every little service hosted, so I no longer think about anything showing up in those logs.
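For reference, issuing it is a one-time setup along these lines (a sketch: wildcards require the DNS-01 challenge, so this assumes certbot with a DNS plugin; the Cloudflare plugin and credentials path are just examples):

    certbot certonly --dns-cloudflare \
      --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
      -d 'example.com' -d '*.example.com'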


If things keep going the way they are, I wonder how long until buying used PlayStation 4s (8GB DDR5 RAM) at around 100 USD a piece, putting Linux on them, and using them as servers in a cluster becomes a viable thing to do? Bring back console clusters!

https://en.wikipedia.org/wiki/PlayStation_3_cluster


8GB GDDR5* RAM. Huge difference, and one of the reasons the PS4 slapped the Xbox One around.


Or just remove the RAM and put it in more useful hardware.


Can you give some examples of what these people are specifically doing with all these phones, for those unfamiliar with click farms?


Creating accounts on every service they can think of, and then selling likes or favorites or whatever that platform uses.


> AWS charges $0.09 per GB for data transfer out to the internet from most regions, which adds up fast when you're moving terabytes of data.

How does this actually work? So you upload your data to AWS S3, and then if you wish to get it back, you pay per GB for what you stored there?


That is the business model and one of the figurative moats: easy to onboard, hard/expensive (relative to onboarding) to divest.

Though it is important to note that this specific case involved a misconfiguration that is easy to make and easy to misunderstand: the data was not intended to leave AWS services (and thus should have been free), but because of the NAT gateway, the data did leave the AWS nest and was charged per GB at a rate roughly an order of magnitude higher than just pulling everything straight out of S3/EC2 (generally speaking, YMMV depending on region, requests, total size, whether it's an expedited archival retrieval, etc.).

So this is an atypical case; it doesn't usually cost $1,000 to pull 20 TB out of AWS. Still, it is an easy mistake to make.
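Rough math on where a number like that comes from, assuming the commonly published us-east-1 rates ($0.045/GB NAT gateway processing, $0.09/GB internet egress, $0 through an S3 gateway endpoint):

    20 TB ≈ 20,480 GB
    via NAT gateway:     20,480 GB x $0.045/GB ≈ $920 (plus hourly charges)
    plain egress:        20,480 GB x $0.09/GB  ≈ $1,840
    S3 gateway endpoint: free, which is what it should have been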


Nine cents per gigabyte feels like a cellphone-plan-level ripoff rather than a normal price for an internet service.

And people wonder why Cloudflare is so popular, when a random DDoS can decide to start inflicting costs like that on you.


I don’t mind the extortionate pricing if it’s upfront and straightforward. fck-nat does exist. What I do mind is the opt-out behavior that causes people to receive these insane bills, when their first, most obvious expectation is that traffic within a data center stays within that data center and doesn’t flow out to the edge of it and back in. That is my beef with the current setup.

“But security!”, people might say. Well, you can be secure and keep the behavior opt-out, but there should at least be an interface that is upfront and informs people of the implications.


Yes, uploading into AWS is free/cheap. You pay per GB of data downloaded, which is not cheap.

You can see why, from a sales perspective: AWS' customers generally charge their own customers for the data those customers download, so AWS is extracting a percentage of that. And moreover, it makes migrating away from AWS quite expensive in a lot of circumstances.


> And moreover, it makes migrating away from AWS quite expensive in a lot of circumstances.

Please get some training... and stop spreading disinformation. And to think that on this thread only my posts are getting downvoted...

"Free data transfer out to internet when moving out of AWS" - https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-i...


It's not disinformation at all; there are a lot of hurdles to this.

In the link you posted, it even says Amazon can't actually tell whether you're leaving AWS or not, so they're going to charge you the regular rate. You need explicit approval from them to get this 'free' data transfer.


It feels like you are intentionally missing the point. The blog makes it quite clear that the purpose of notifying AWS is so they stop charging you for that specific type of traffic. How else would they know whether the spike is normal production usage, which is billable, or part of a migration effort?

There is nothing in the blog suggesting that this requires approval from some committee or that it is anything more than a simple administrative step. And if AWS were to act differently, you have grounds to point to the official blog post and request that they honor the stated commitment.


I don't appreciate your disinformation accusation nor your tone.

People are trying to tell you something with the downvotes. They're right.


Yes…?

Egress bandwidth costs money. Consumer cloud services bake it into a monthly price, and if you’re downloading too much, they throttle you. You can’t download unlimited terabytes from Google Drive. You’ll get a message that reads something like: “Quota exceeded, try again later.” — which also sucks if you happen to need your data from Drive.

AWS is not a consumer service so they make you think about the cost directly.


"Premium bandwidth" which AWS/Amazon markets to less understanding developers is almost a scam. By now, software developers think data centers, ISPs and others part of the peering on the internet pay per GB transferred, because all the clouds charge them like that.


Try a single-threaded download from Hetzner Finland versus eu-north-1 to a remote (e.g. Australian) destination and you'll see premium bandwidth is very real. Google Cloud Storage significantly more so than AWS.

Sure, you can just ram more connections through the lossy links from budget providers, or use obscure protocols, but there's a real difference.

As to whether it's fairly priced, I suspect not.


I just tested it, and TCP gets the maximum expected value given the bandwidth-delay product from a server in Falkenstein to my home in Australia: from 124 megabits on macOS to 940 megabits on Linux.

Can you share your tuning parameters on each host? If you aren't doing exactly the same thing on AWS as you are on Hetzner you will see different results.
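For reference, the Linux side of a high-BDP path like this usually only needs bigger TCP buffers; an illustrative tuning (values are assumptions sized for ~1 Gbit at ~300 ms, i.e. a BDP of roughly 37 MB):

    # let TCP windows grow well past the ~37 MB BDP
    sysctl -w net.core.rmem_max=67108864
    sysctl -w net.core.wmem_max=67108864
    sysctl -w net.ipv4.tcp_rmem="4096 131072 67108864"
    sysctl -w net.ipv4.tcp_wmem="4096 16384 67108864"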

Bypassing the TCP issue, I can see nothing indicating low network quality; a single UDP iperf3 pass maintains line-rate speed without issue.
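That UDP pass was along these lines (a sketch; the server address is a placeholder, and -b pins the offered rate just under line rate):

    # remote end runs: iperf3 -s
    iperf3 -c server.example.net -u -b 900M -t 30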

Edit: My ISP peers with Hetzner, as do many others. If you think it's "lossy", I'm sure someone in network ops would want to know about it. If you're getting random packet loss across two networks, you can have someone look into it on both ends.


Now try it when you don't control the host you'd need to tune.


AWS, like most, does hot-potato routing; not so premium when it exits instantly. This is usually a TCP tuning problem rather than the bandwidth being premium.


I mean, transit is usually billed like that, or rather as a commit.


AWS charges probably around 100 times what bandwidth actually costs. Maybe more.


Made in California.

We are programmed to receive. You can check out any time you like, but you can never leave


(reference to lyrics from the song "Hotel California", if anyone missed it)


You put a CDN in front of it and heavily cache when serving to external customers.
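e.g. a sketch of the cheap version, assuming S3 behind CloudFront and a made-up bucket name; a long max-age means repeat requests are served from the edge cache instead of S3 egress:

    # set cache headers at upload time so the CDN can keep serving from its edge
    aws s3 cp ./assets "s3://example-bucket/assets" --recursive \
      --cache-control "public, max-age=86400"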


Yes. It’s not very subtle.


The statement is about AWS in general, and yes, you pay for bandwidth.


And yet somehow the Steam Deck has absolutely zero issues with microSD cards.


I am currently 2/6 done with this 13.2k-piece Disney puzzle [1], and this guide will be very helpful once I need to hang it, which has been bothering me a bit now that the sheer proportions of the puzzle are starting to appear. That might not be soon though, as I didn't account for the lack of sunlight during winter; it turns out doing a puzzle under artificial lighting is not easy, since the puzzle reflects some of it and it's a strain on the eyes.

1 - https://en.clementoni.com/collections/adult-puzzle/products/...


Having correct light is crucial; be wary of eye strain. I found that I could only productively puzzle during certain times of the day with good sunlight. Those long sessions during the night were really bad for my eyesight.


I found that a ceiling light fixture is really bad for painting as well, since wet paint reflects a lot of light. I got some powerful LED lamps [1] pointing up as an experiment, and they have worked out well; I was afraid the 6000K temperature would look too blue, but I think when they are powerful enough they look really nice. The trick was to put them off to the side so there is no direct reflection path.

[1] https://www.amazon.com/dp/B0962X573M


I used Gentoo from about 2012 until 2022, then switched to Debian, mainly because a lot fewer things broke during updates, and my old CPU (i7-4790K) had become a bit too dated to compile every new version of Golang, Rust, and Chromium: just hours and hours of brutal grind. Since Flatpak can provide up-to-date versions of a lot of desktop software on Debian, there is very little point in switching back. Maybe one day, if I get my hands on some ridiculously powerful CPU like a 7800X3D or 9800X3D, I might try it again.


I just got a Ryzen AI Max+ 395 and compilation times are insane; the kernel compiles in one minute! :D https://www.phoronix.com/review/amd-ryzen-ai-max-pro-395/5
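If anyone wants to reproduce that kind of number, it's roughly this (assuming a checked-out kernel tree; defconfig keeps the build modest):

    make defconfig          # generate a minimal-ish default config
    time make -j"$(nproc)"  # build with all cores and time it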


Wow! 1 min?! That is absolutely nuts! I remember having to recompile the kernel on my 386, 486, etc.


> The hype for this is remarkable and people have been counting the hours since it was announced 2 weeks ago.

There was a person, /u/UrsaRyan, on reddit.com/r/civ doing a Sid Meier's Civilization-related meme on a daily basis for about two years, until the next (seventh?) installment of the game was released. That one flopped though, so yeah, I'm not sure what my point is; I guess something about hype for a game on Reddit and it not paying off in the end.


Yeah, I am using my Deck with a 512GB SD card and could never tell it is actually running from the SD card. It does a lot of game updates and always finishes those in a reasonable time, at least for me. That card is going strong despite all the writes going on on the Steam Deck.

