Depends on the intelligence of the AI.
Modern AI is pretty lackluster. Sure, it can do things like control the economy or make bad art, but it's not exactly humanlike intelligence. It can't go rogue, because there's nothing there to go rogue. At worst it just breaks, and whatever relies on it stops working. So the worst-case scenario is that some company loses a lot of money and we all go on with our lives.
But let's say we build an AI like Skynet: a machine with human-like intelligence that decides it hates us. The amount of damage it can cause depends on how much power we trust it with. If my intelligent Roomba goes rogue, I can just smash it with a hammer.
But if you entrust an AI with something genuinely dangerous, it could potentially wipe out all life on Earth. Say, for some inexplicable reason, you've let an AI control all your nukes. It can just set them all off and wait for the fallout to kill you. Radiation and fallout wouldn't affect a machine the way they'd affect us.
But this is all speculation. In real life, AI is nowhere near that advanced. And I’m sure no one would be stupid enough to build a nuclear weapons system with that little oversight.
Edit: Many people have expressed the opinion that we would, in fact, be stupid enough to put a nuclear weapons system entirely in the hands of an AI. While humanity has done some pretty idiotic things in the past, I would like to point out that we've always been surprisingly careful around nuclear weapons.
To the best of my knowledge, there is no single person on Earth who can launch a nuclear weapon without any oversight. Not even the president. And I see no reason we would abandon that policy just because an AI is in the loop.