
It does, but it's pretty close.

The best option is just to replace everything, buuut that will make you sad, as all your storage will then live on network drives.


GitHub Pages integration with Jekyll.

All you need is some setup and a git repo; the rest is done for you, and unless you host traffic-heavy content it will most likely be free forever.
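Roughly, the setup looks like this (the repo name is just a placeholder, and it assumes Ruby is already installed):

```
# Assumes Ruby/Bundler are installed; repo name is a placeholder.
gem install bundler jekyll
jekyll new my-site && cd my-site     # scaffold a default Jekyll site
bundle exec jekyll serve             # preview locally at http://localhost:4000

git init
git add . && git commit -m "initial Jekyll site"
git remote add origin git@github.com:<user>/<user>.github.io.git
git push -u origin main              # with Pages enabled, GitHub rebuilds the site on every push
```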


Truth be told, I installed HA on a Pi about 3 years ago and have largely forgotten about it.

It just works for me and what I do; all you have to do is occasionally apply updates. Granted, I mainly focus on building my own power sockets and fuses with Shelly products and otherwise stick to local HomeKit devices.

I also have some of those Tuya devices and hope to get rid of them very soon. Zigbee is good for local control, and as long as the switch still works when I click it physically, I really have no issue with, nor care about, what it's running on.

I just have a SecOps feeling that WiFi is really not what I want.


I am with you. 2.5 years running on a Pi. I also have 6 security cameras writing to a MicroSD in it which I thought was surely a recipe for disaster, but it has been rock solid. I have experienced no reliability issues and I am running everything out of Home Assistant exposed as HomeKit resources.


I would back it up regularly.

Alternatively, you can boot and run a Pi off a USB stick.


Your choice of words is quite poor, my dear poster.

Infrastructure is not crumbling; you can get across the entire country in less than 4 hours.

I agree that public transit in Berlin specifically has seen better days, but you can still get across the city, which sprawls wide rather than tall, in about 40 minutes. Car, bike and scooter sharing is plentiful too, and if you really want to, everything is walkable.

The challenges you call out are also international problems, and a follow-on effect of the boomer generation.


When any piece of paper takes 4+ weeks to get and Deutsche Bahn has 52% punctuality (cancellations excluded), I would say my choice of words is appropriate. These things are getting worse every year. It is crumbling.


Are you reading all of this online, or have you actually used any of those transportation methods recently? I just made a trip from the boondocks of Baden-Württemberg to the center of Berlin in said 4:30 hours.

As for your pieces of paper, that is because Berlin is at capacity, which is also why housing is expensive. It's also -the city to be- for anyone under 50, and one of the top 10 cities for clubbing in the world.

Nothing is crumbling, and your punctuality figures are mostly slight shifts in arrival times.

You are blowing this way out of proportion.


What kind of article is this? So in 30 years we were not supposed to equalize the country?

And from England nonetheless?

Apart from the recruiting and financial circles, England is more on par with Poland than it is with any real western country...

This has literally been this way for more than 20 years now; we are even closing hospitals and banks because we have too many.

Finally, this article has shown that even the level of Cambridge has sunk, so maybe the title should read "how England has fallen constantly for the last 200 years". Real household income hasn't increased for the past 15 years. The average UK household is 20% poorer than many others in northwestern Europe.


You must understand, for your average wealthy SE UK voter, the UK is still a dominant world force, a peer to China and the United States. Brexit was quite popular down there, despite not being the majority. Of course, their worldview is about 30 years out of date, so they need comparisons to East Germany or Poland to shake them out of it.


I understand! Henceforth I will compare my style of living to Nottingham and from that point of view everything is awesome. :D


Isn't this largely because there are dozens of Angry Birds clones on GitHub that the AI could learn from?

Do GTA next :D


Yes it is. Same with the Flappy Bird demo from the Tencent guy. Try a simple game that is less common on the internet, some obscure board game, and you'll see how ChatGPT fails spectacularly. For those who know The Witcher 3, try asking ChatGPT to implement the Gwent board game found within the W3 world, or the board game Orlog found within Assassin's Creed Valhalla. See how those implementations go - no need for a GTA-complexity one.


I don't see why you'd fail. With a more obscure game, you need to elaborate on details some more. E.g. describe the specific rule instead of saying "like in Angry Birds".


Well, the fact is that it does not really learn. So yes, go ahead and explain in detail what will be forgotten once the context exceeds 8k tokens.


Sure, but no one says you have to get it to write you the whole game in one session. Divide and conquer! As long as you keep the work items small enough to solve them within the 8k context limit, GPT-4 will be able to help you with all of it!

Creating a collection of system prompts to select from, as well as plain old copy-paste, are two tools you can use to provide the same context to multiple sessions, with little to zero effort.
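For illustration, a rough sketch of the API route (the file name, model, and user prompt are made-up placeholders): keep the shared context in a file and send it as the system message of every fresh session.

```
# context.txt holds the shared design decisions / API surface (file name is made up).
# Each small work item gets a fresh session with the same system message prepended.
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --rawfile ctx context.txt '{
        model: "gpt-4",
        messages: [
          {role: "system", content: $ctx},
          {role: "user",   content: "Implement the scoring module described above."}
        ]
      }')"
```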


I'm trying something like this with 3.5; the problem is that it keeps changing its mind about what the API "should" look like for the other parts it can't remember in context. As I'm using it as a collaborator, this is actually fine, but it wouldn't work if I was trying to 1-shot it or otherwise use pure LLM output.


I wouldn't expect this to work well with GPT-3.5. And, with GPT-4 available, I wouldn't bother trying; there's too big a capability difference between the two. However, it's nice to know 3.5 is still somewhat useful here.


Yes and no. It will forget about your properties and structs just the same way it will forget about key design decisions.


Within a session, not if you stay under the context window limit. Between sessions, it's up to the user to carry over relevant context.


It would be very interesting to see a post about this then, please. For me it has definitely been an issue that anything meaningful spans multiple files and therefore runs up against the input cap.


European data protection says no.


SEEKING WORK | Europe/North America | Remote

Freelance Sr. Engineer with Kubernetes/DevOps/Golang Focus

  Location: Berlin, Germany
  Remote: yes
  Current preferences:
  Kubernetes, Golang, Kubernetes Operators, CI/CD, Terraform, Ansible, ArgoCD, Containers and automating everything
  Languages: German (Native), English (Native)
  LinkedIn: https://www.linkedin.com/in/johannes-h%C3%B6lzel
  Github: https://github.com/jhoelzel
  Blog: https://www.hoelzel.it

Hi, I'm Johannes, a tech specialist with a focus on Kubernetes. I have been working in the industry since 2003 and have extensive experience building and managing large-scale systems using Kubernetes, with a focus on Amazon AWS, Microsoft Azure, and bare-metal deployments.

In addition to my technical skills, I also have a background in psychology which gives me unique insights on team leadership and communication. I have a proven track record of leading and creating successful international teams.

One of my specialties is bare-metal Kubernetes deployment with RKE2 or K3s, CIS compliant and using SELinux under the hood. These can effectively be used to breathe new life into your existing data centers and hardware, or to create hybrid systems that easily connect your online and on-premise servers. My mantra is that at the end of the day your Kubernetes clusters should be able to run anywhere with minimal modifications, because that is the promise Kubernetes actually offers.

I'm currently offering my services as a freelancer and would be happy to discuss potential opportunities involving Kubernetes. If you're looking for someone with seniority and experience in the field, please feel free to reach out. I have a wealth of experience with Amazon AWS, Microsoft Azure, and bare-metal Kubernetes and am confident in my abilities.

You can read some articles I wrote:

https://www.hoelzel.it

or contact me on my linkedin:

https://www.linkedin.com/in/johannes-h%C3%B6lzel

or see some of my work on Github:

https://github.com/jhoelzel


While I really appreciate the effort and am astounded that they already have a pledge of 10 full-time engineers for 5 years, I am left wondering two things:

A) This could easily be another Reddit moment, where corps and actual bill payers forget about it in two weeks since it simply -does not concern them-. The supporting list mostly consists of actual competitors.

B) If they can manage all these resources, why can we not simply do something new along the way? Terraform has its flaws, and everyone that has seriously had to work with it can name many.

For instance: did you know that you can't easily shut down a server when deleting a Terraform resource? At least not without hacks or workarounds.

It's time for a "cloud-init native" solution to all these problems, and while I appreciate the effort, I think this fork will actually hinder future development by keeping things the same.

- Cluster API all the way -


> why can we just simply not do something new along the way

Marcin here, one of the members of OpenTF.

99% of the value of Terraform is its ecosystem - providers, modules, tutorials, courses etc., and millions of lines of battle-tested production code already written in that language. 1% of the value is in the tool itself, with the tool serving as the gatekeeper to all these riches. One of the things that I personally want to see is opening up the codebase to allow building new things on top of it, which then don't need to reinvent the wheel.

You dislike HCL? Fine, have something else give the tool an AST and we'll take it from here. You don't need/want to go through the CLI? Not a problem, embed some of these libraries directly in your app.


Not sure what you mean exactly about "shutting down a server when deleting a Terraform resource". But do you think that's something inherent to the design that OpenTF wouldn't be able to address?

Personally I think Terraform hit on a really good pattern for IaC, and while there are lots of rough edges that could be polished, the overall approach is by far the best fit yet invented for the problem it's aiming to solve.


I'm not sure what they mean by that. But one case where Terraform's model doesn't work very well is updating a certificate on a load balancer (to be concrete, say an ACM certificate attached to an NLB in AWS) to a new cert and removing the old one. The proper way to do that, without service interruption, is the following:

1. Create new certificate

2. Update the certificate attached to the load balancer

3. Delete old certificate

But it isn't actually possible to do that in that order with terraform because of how dependencies work.

By default, what Terraform will try to do is:

1. Delete the old certificate. This will either fail because the certificate is in use (as is the case in AWS), or destroy a resource that is still in use and put the load balancer into a bad state

2. Create new certificate

3. Update the load balancer

The only ways I have found to work around this are targeted applies (which are discouraged), or splitting the change into multiple code changes, with a separate apply for each.
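For illustration, the targeted-apply route looks roughly like this (the resource addresses are made up):

```
# 1. Create the new certificate first (resource addresses are made up)
terraform apply -target=aws_acm_certificate.new

# 2. Point the listener at the new certificate
terraform apply -target=aws_lb_listener.https

# 3. A final full apply now deletes the old certificate, which is no longer attached
terraform apply
```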


Time to check the 3-2-1 backups ;)


According to the company the attacker managed to encrypt both the primary and secondary backup systems.


Yes, but as a customer your 3-2-1 strategy should include a backup off that cloud. Not the first time, and won't be the last time a cloud provider has a catastrophic data loss incident. Relying solely on your cloud provider for backups is a risk.


You know that after the fire in the OVH datacenter, they asked their customers to start their disaster recovery plans - and people asked where that option was in the OVH admin menu? Not excusing them, but many customers are completely clueless about backups and data security in general.


The 1 in 3-2-1 should be somewhere on-premise, or at least not directly reachable from the internet.

Think: an SSH cron job that copies backups from the cloud to cold storage.
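A rough sketch of that, with made-up host names and paths:

```
#!/bin/sh
# pull-backups.sh - runs on the offline/on-prem box and *pulls* from the cloud
# host, so the cloud side never holds credentials for the cold storage.
# Host names and paths are made up.
set -e
DEST="/coldstore/$(date +%F)"
mkdir -p "$DEST"
rsync -a backup-user@cloud-host:/var/backups/ "$DEST/"

# crontab entry on the offline box (nightly at 03:00):
#   0 3 * * * /usr/local/bin/pull-backups.sh
```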


And what if the backup you're copying to cold storage is also encrypted?

How did the saying go? You don't have backups until you've successfully restored from them or something like that. =)

Basically any 3-2-1 system is Schrödinger's backup until you've actually used it.


So you only have 1 backup that you overwrite daily?


You can have X daily backups in rotation, and after X days of infiltration they're all garbage because they were overwritten with malware-encrypted data.

A backup isn't real until you've restored from it. That's why you should restore from backups regularly: firstly, so that you know the process and can see it actually works; and secondly, so you can confirm you're actually backing up what you think you are backing up.

We've all set up backup scripts and forgotten to include new directories or files in the configuration as time went on... =)
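A restore drill doesn't have to be fancy; even a minimal sketch like this (archive and manifest paths are made up) catches a lot:

```
#!/bin/sh
# Minimal restore drill: unpack the latest backup into a scratch directory and
# verify it against a checksum manifest written at backup time.
# Archive and manifest paths are made up.
set -e
SCRATCH="$(mktemp -d)"
tar -xzf /backups/latest.tar.gz -C "$SCRATCH"
cd "$SCRATCH" && sha256sum -c /backups/latest.manifest
echo "restore test OK"
```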


No, I think you're misunderstanding.

The parent comment is intending to remind people that many things can happen to a backup after it's done. Backups cannot be "set and forget", as just making the backup isn't enough since so many things can happen after you've taken that backup.

- Bitrot/bitflips silently corrupt your backups and your filesystem doesn't catch it

- The storage your backups are on goes bad suddenly before you can recover

- Your storage provider closes up shop suddenly or the services go down completely, etc

- malicious actors intentionally infiltrate and now your data is held hostage

- Some sysadmin accidentally nukes the storage device holding the backups, or makes some other mistake (to summon the classic, I'm betting there are a few people with stories where an admin trying to clean up some leftover .temp files accidentally hit SHIFT while typing

```rm -rf /somedirectory/.temp```

and instead writes:

```rm -rf /somedirectory/>temp```

- (for image level backups) The OS was actually in a bad state/was infected, so even if you do restore the machine, the machine is in an unusable state

- A fault in the backup system results in garbage data being written to the backup "successfully" (If you're a VMware administrator and you got hit by a CBT corruption bug, you know what I'm talking about. If you aren't, just search VMware CBT and imagine that this system screws up and starts returning garbage data instead of the correct and actual changed blocks that the backup application was expecting)

Basically, unless you're regularly testing your backups, there isn't really any assurance that the data that was successfully written at the time of backup is still the same. Most modern backup programs have in-flight CRC checks to ensure that at the time of the backup, the data read from source is the same going into the backup, but this only confirms that the data integrity is stable at the time of the backup.

Many backup suites have "backup health checks" which can ensure the backup file integrity, but again, a successful test only means "at the time you ran the test, it was okay". Such tests _still_ don't tell you whether or not the data in the backup file is actually usable or uncompromised; they only tell you that the backup application confirms the data in the backup right now is the same as when the backup was first created.

So the parent post is correct; until you have tested your backups properly, you can't really be sure if your backups are worth anything.

Combine this with the fact that many companies handle backups very badly (no redundant copies, storing the backups directly with production data, relying only on snapshots, etc), and you end up with situations like in the article where a single ransomware attack takes down entire businesses.


If the data you’re reading is encrypted, you’re still screwed.


A 3-2-1 backup strategy involves keeping three copies of your data, stored on two different types of media, with one copy kept offsite for disaster recovery.

you are still supposed to have multiple backups =)


Incremental backups and alerts on large deltas seem like a good idea

(I mean, on large deltas anywhere)


Backups need an air gap.


I often have people complain after comparing my work's instance pricing to other cloud providers...

Then I try to explain that rotating a few dozen TB of data offsite to cold offline storage every week isn't cheap. Because unlike some vendors, we take pride in data integrity and in ensuring that our DR plan is actually... you know, recoverable :P


If the backups are not incremental but append-only, an air gap is not strictly needed and can serve as an additional safeguard performed less frequently because of the manual overhead. The crux of the matter is to assume the main system has been compromised and to prevent overwriting existing data.


I would not agree with this. Append-only file systems and storage aren't a bad idea and definitely help with accidental overwrites, but these systems have been punked quite frequently in many ways, and I've worked with backup companies that home-rolled their own append-only backup implementations.

That didn't stop attackers from using extremely common ways to punk the systems, even under the best circumstances: a forgotten password gets leaked, the backup application's or storage system's own encryption schemes get used against the victims, entire volumes get deleted, the OS on the systems gets compromised; the list goes on.

I wouldn't consider append-only an anti-ransomware technique; it just stops one of many common ways of compromising data. That is good, but I wouldn't rely on it to protect against even a run-of-the-mill ransomware scheme.


... until the next update to these viruses.

To utterly destroy an organisation you don't erase or encrypt their data. You change it. Slowly. Little by little. A birthday here, a name there, a number... using the normal ways to change this data. In this way you can go undiscovered for years, employees get blamed for making stupid errors for a LONG time, and there is absolutely no way to fix things, no matter what the backup strategy is.


But for ransomware there needs to be a hope of restoring the data. In this case the value would need to be more oblique.


The ransomware gang buys put options on the victim’s stock. Sabotage-backed options scams have been around for a long time.


Doesn’t matter.

