I'm a software engineer. I've built a few self-learning systems and read a lot of AI code. I've never built a robot, but I've studied how to build them. I'm not the least bit afraid that AI will turn on us. Why? Because I've tried to build monsters and failed every time. The problem with AI is that it has no inherent self-interest. A computer system has no fear of being turned off, no desire for more toys, tropical vacations, or Napoleon brandy. I can program those desires into a system, but there is nothing in a computer that can sustain them. Computers just don't care. They are inescapably passive. Without a wretched, scheming human to drive them on, they do nothing. Turn them on, turn them off, grind them into dust — they are indifferent.
I do fear my fellow humans, as all humans have since we discovered that what is yours is not mine. And AI can be a formidable weapon, perhaps the ultimate weapon, but it's the person operating the weapon who is to be feared. Any law that supports arming any ego with a hankering for mayhem is stupid.
I also fear what AI and computing will do to society — what they have already done. In the 50 or so years that I have been hacking away, we've gotten so much more efficient and productive. Humans always want more, so our appetite is always growing, but I've noticed that a person can live much better on fewer resources today than they could even ten years ago. And I expect it will be much, much easier ten years in the future. Amazon, Google, Netflix, Uber — the list goes on — have been sucking the cost out of so many things. Jobs are disappearing, and they will continue to disappear, and it seems that the gap between the very wealthy and the less wealthy is widening. Changes will have to come. I don't know what they will be, but I have great faith that the forces of civilization will, with an occasional setback, continue to prevail. They always have, and past performance is still the best predictor of future performance.