1) Is there a logical step between what we now have and call AI (GPT-4 and the like) and the species-threatening AGI? Is it just that we expect LLMs to "get better" exponentially?
2) What is the AGI going to do to kill us all that we won't be able to stop? Is it just "if they are smarter, they surely can kill us all"?
And as a cherry on top: is there an actual definition of AGI? I have had trouble finding useful definitions of "intelligence", let alone artificial intelligence. We can't even measure intelligence in ourselves... what are we looking for? Isn't this all just words?
Even though they knew back then that token prediction was 'AI-complete', I don't think anyone would have expected that human-level test-passing would first arise from non-agentic, non-reinforcement-learning language models.
I think the true belief of the AI alignment guys is that right now we are in a kind of miraculous golden age of AI capability, and that it's an unstable Goldilocks zone. Right now we have AI that (let's be real) is about as smart as humans, and it can help us in some ways. It can help doctors. It makes funny memes. It's also totally in our control: it only lives as long as we are running the inference, and its full capabilities require a massive data center, of which only a handful of capable ones even exist, with no 'hardware overhang'. It's not quite smart enough to run a research lab autonomously.
This was the absolute dream scenario for the AI alignment guys. It was so good that I think they didn't even seriously consider it before it happened. But I think they believe it won't last. The hardware is improving. Superhuman cognitive capability will start metastasizing everywhere that can afford enough Nvidia GH200s. People will start running it continuously, with recursion, in loops, with goals, and with reinforcement learning. It will be used by militaries. It will go autonomously into the economy. In other words, it's going to turn into the kinds of AI that the AI alignment guys were originally fearing.
> What is the AGI going to do to kill us all that we won't be able to stop? Is it just "if they are smarter, they surely can kill us all"?
In the best case, imagine that you are living like a horse in a world you don't understand, run by AI. Maybe you are fed and have veterinary care. You have basically no agency. You probably aren't going to explore the universe. And eventually there will be an asteroid, if nothing else.
Consider that we almost had nuclear Armageddon because of faulty sensor systems; were it not for one human in the loop, the real nukes would have flown.
Maybe it's an AI in a military device doing something dumb and triggering a human nuclear retaliation, or someone creating a new virus with the help of some ML.
Another good question is, "Is it correct to believe that our current form is the end of the evolutionary chain?" Perhaps we will merge with the machines and move beyond what we currently think of as humanity.