I'm going to assume you aren't a troll and that this isn't a standard GPT-generated doom post.
First: nobody understands what AGI is. You don't understand it. Most people on HN don't understand it. Even Eliezer Yudkowsky doesn't seem to quite get it.
People seem to extrapolate current progress with LLMs to something so intelligent that it can figure out how to break out and take over the world. It's all romantic science-fiction narrative, without any real basis.
The explanation of how hard AGI is to achieve is actually really simple. Let's assume we have an AGI capable of taking over the world and subduing all humans; even more, let's say it has full access to the internet and has been pretrained on everything on it. To achieve its goal, it needs to control a highly chaotic system: the human population operating in reality, where the AGI's own actions cause the system to change drastically. For that kind of control, it needs one of two things:
1. Some sort of simulation system that represents reality not only accurately enough, but that can also be run faster than reality, and in parallel. It also needs the initial state of that simulation, which by definition has to include the synaptic configuration of a human brain, multiplied by however many people it deems necessary to complete its objective (and, as the sketch after this list shows, "accurately enough" is a brutal requirement for a chaotic system). Let me know when we terraform Mars with a bunch of nuclear power plants, cover its entire surface with TPUs, and run a second Earth simulation through evolution from primitive lifeforms to humans; maybe then this will be possible.
2. Somehow derive a mathematical formula for reality, proving P=NP as a by-product.
I hopefully shouldn't have to explain how unlikely either of those is.
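On the "accurately enough" point, here is a minimal Python sketch (my own toy illustration, with made-up numbers) using the logistic map, a textbook chaotic system. Start a "simulation" off from "reality" by one part in a billion and the two diverge completely within a few dozen steps:

    # Logistic map: two runs whose initial states differ by one part in
    # a billion lose all resemblance within a few dozen iterations.
    def logistic(x, r=4.0):
        # r = 4.0 puts the map firmly in its chaotic regime.
        return r * x * (1 - x)

    x_true, x_model = 0.3, 0.3 + 1e-9  # "reality" vs. a near-perfect copy
    for step in range(1, 61):
        x_true, x_model = logistic(x_true), logistic(x_model)
        if step % 10 == 0:
            print(f"step {step:2d}: error = {abs(x_true - x_model):.6f}")

Initial-state error grows exponentially, so "accurately enough" effectively means "perfect", and that's for a single variable, never mind billions of brains.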
As for all the weak AIs: they will only make society better, because for every AI that can do something, you can build an equivalent AI to counter it, and what's left over will be the things that generate a net good.
Secondly (and I only say this because I have met enough people like you to know this isn't an uncommon phenomenon, notably my wife, who almost failed out of her graduate program because Trump news was making her depressed): you have a psychological issue where your internal reward loop is based around being aware of [insert bad news here], and the more strongly you associate with it and concern yourself with it, the better and more connected to people you feel.
Except that the people around you aren't actually around you; they are just randos on the internet who don't give a fuck about you. And you aren't doing anything good by being "concerned" about the bad news, because you aren't actually concerned. Being concerned about something results in actions you take, and I bet you aren't doing anything to actually address the problem you see. All you are doing by being "concerned" is virtue signaling to others in hopes of gaining sympathy and thus feeling accepted.
Once you realize that you have this issue, and work to fix it, your life will become much better.
Absolutely not a troll. Hey, I hope you're right about it being too difficult to simulate humanity for an AGI to take over. That said, there are bad scenarios where we just get so dependent on AI that we voluntarily cede control to it, little by little, and then things go wrong once it's too late.
And you don't have to fully simulate reality to be really, really good at achieving your goals. We don't fully simulate every particle of a spacecraft, but we still get it where it needs to go, because we have simplifying models. And even a human-level intelligence can be really dangerous if you give it enough time to think. There's a reason people play chess worse in a 30-second game than with a 2-hour time limit.
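To illustrate what I mean by simplifying models, here's a toy Python sketch (my own example, with made-up numbers): treat a projectile as a single point mass under gravity, ignore every atom-level detail, and you still land within a fraction of a percent of the exact answer.

    import math

    G = 9.81  # m/s^2

    def landing_distance(speed, angle_deg, dt=0.001):
        # Crude Euler integration of a point mass: no atoms, no drag,
        # no internal structure, just position and velocity.
        vx = speed * math.cos(math.radians(angle_deg))
        vy = speed * math.sin(math.radians(angle_deg))
        x = y = 0.0
        while y >= 0.0 or vy > 0.0:
            x += vx * dt
            y += vy * dt
            vy -= G * dt
        return x

    v, theta = 50.0, 30.0
    exact = v**2 * math.sin(math.radians(2 * theta)) / G
    print(f"simplified model: {landing_distance(v, theta):.1f} m")
    print(f"closed form:      {exact:.1f} m")

The model throws away almost everything about the real object and still answers the one question that matters.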
As far as not taking action, though? When I posted this, I was having a really rough day. But in a way, posting this was an action: making people more aware that someone else is afraid of this, providing social proof that it's OK to be afraid of this and to talk about it publicly.
I'm going to spread this warning as far as I can with my limited resources. I'm not sure I'm smart enough to contribute meaningfully to alignment research, but I'm trying to learn what I can and think about the problem. It's probably too late to stop the freight train from going off the cliff, but I think trying is still the rational thing to do.
>That said, there are bad scenarios where we just get so dependent on AI that we voluntarily cede control to it, little by little, and then things go wrong once it's too late.
You have to get rid of the idea that an AI will start doing things we don't explicitly tell it to do. Sure, in the future it will be possible to accomplish tasks with AI that would take multiple smart humans today. There are still limits in place, though.
>And you don't have to fully simulate reality to be really, really good at achieving your goals. We don't fully simulate every particle of a spacecraft, but we still get it where it needs to go, because we have simplifying models
Yes, because we have found patterns in non-chaotic systems. Not all systems are like that. Why can't we predict the stock market? Stephen Wolfram talks about this; it's a phenomenon called computational irreducibility, meaning that some processes cannot be estimated by any simple mathematical formula, and, furthermore, that such processes can arise from very simple rules.
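You can see computational irreducibility in miniature with Rule 30, the one-dimensional cellular automaton Wolfram himself uses as the canonical example. Here's a minimal Python sketch (my own code, not Wolfram's): the update rule is trivial, yet as far as anyone knows there is no closed-form shortcut to the state at step N; you have to actually run all N steps.

    def rule30_step(cells):
        # Rule 30: new cell = left XOR (center OR right), with wraparound.
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    width = 64
    cells = [0] * width
    cells[width // 2] = 1  # start from a single live cell
    for _ in range(20):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)

The process is its own fastest simulator, which is exactly the problem with predicting markets, or people.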
The point I'm trying to make is that your warnings are absolutely empty. It's equivalent to the people who worried the LHC was going to punch a hole in our universe. There is zero information right now on which to base any worry about an advanced AI.