I hear you. The concepts we are dealing with here are very abstract.
It sounds like you are getting hung up on the details of a particular example. That is not useful. We can't give you exact details for a particular black ball because we haven't encountered one yet. Sadly, the fact that we haven't encountered one yet doesn't mean that they don't exist.
Think about it like this: There are technologies which are easier to stop spreading and there are technologies which are harder to stop spreading.
Example of a technology which is easier to stop: imagine that a despotic government wants to stop people from launching things into space. All the known tech to reach orbit is big and heavy and requires a lot of people. It is comparatively easy to send out agents who look at all the big industrial installations and dismantle the ones used for space launches. There will only be a handful of them, and they are hard to hide.
Now an example of a technology which is harder to stop: imagine that the fictional despotic government has it in for cryptography. That is a lot harder to stop. One can do it alone in the privacy of their own home! All you need is pen and paper, and that can be hidden anywhere! A lot harder for the agents to find and disrupt.
We talked about how easy it is to stop the spread of a given technology. Now let's think about something else: the potential of a given tech to cause harm.
An example of a risky technology: nuclear weapons. If you have them, you can level a city. That is a lot of harm in one pile.
An example of a less risky technology: ergonomic tool handles. Those rubbery overmoldings which make tools nicer to use long term. There is no risk-free technology, but I hope you agree that these are a lot less dangerous than a nuclear bomb.
Do I have you so far? Good. Because this was the easy part. We talked about things which already exist. Now comes the hard part. This requires some imagination: we have seen tech which was easy to control and tech which was harder. We have seen tech which was risky and tech which was less risky. Can these properties come in all combinations? In particular: are there technologies which are both risky and hard to control? Something, for example, where any able human could accidentally or intentionally level a city or kill all humans? I can't give you an example; we don't have technology like that yet.
The example you are asking about is an example of this kind of technology: high risk, hard to control.
Nobody says that you can download software from GitHub today which can help you engineer a deadly virus from household chemicals. That does not exist. It is a stand-in for the kind of tech which, if it existed, would mean that we have a high-risk, hard-to-control technology.
Does this help explain the context better? Let me know if you still have questions.
> Does this help explain the context better? Let me know if you still have questions.
That's the problem. The thing you're scared about (dangerous technology) has nothing to do with the context (AGI) because there's no reason to think AGI is especially capable of creating any of it or is going to. Humans create general intelligences (babies) all the time and you aren't capable of, nor are you putting any effort into, "aligning" babies or stopping them from existing.
AGI being superintelligent won't give it superhuman creation powers, because creating things involves patience, real-life experimentation and research funds, and while I'll grant you computers have the first they won't have the other two.
Sorry, where did I mention anything about AGI? Why is that the "context"?
Some form of AGI under some circumstances might be black ball tech. There can be other black balls which have nothing to do with AI, let alone AGI.
> The thing you're scared about
I'm scared about many things, but black ball tech is not one of them.
One can discuss existential risks without being scared of them.
> they won't have the other two
If you say so? I don't agree with you on this, but it feels like this would sidetrack the conversation, since AGI and black ball tech have at most some overlap.