Any rogue AI, if it proves to be a danger, will still need to maintain its servers somehow. So it either needs to get humans to do what it wants, in which case it can bomb things just as well with manned fighters, or it needs to develop its own robots and manufacturing, in which case it can build its own fighters.
We ought to worry that these will let General Ripper go rogue more easily or let an adversary hack them, but I don't think this moves the needle at all on whether advanced AIs could be dangerous.
If the movie "Cube" has taught us anything (and I guess it hasn't), it is that it's possible to work on something without knowing what it's actually for. With some misdirection, an AI could probably run in a data center that is, on paper, doing something completely different. Do most people maintaining cloud servers today even know what they are running?