Can an autocomplete launch nukes?

There is a lot of discussion about whether GPT is truly intelligent or just repeating patterns from its training set. The problem is that, for practical purposes, this might be the wrong question to ask.

Let's say it is inherently just a dumb stochastic parrot, but we keep optimizing it to be a great autocomplete and instruction follower. We hope to reach a point where we can ask: "Here's a stack of papers about cancer research. What insights are the scientists missing? Create a Python notebook showing your meta-regression models and email it to me."

Now, if we do end up with an autocomplete that is powerful and useful in the ways we envision, how different is that from asking: "Write me a Python script that searches for vulnerable devices and traverses the network until it finds a way to launch a nuke from the US arsenal. Email it to me."

OK, the nuclear arsenal might be safe. Maybe we do have strong safeguards in place for that particular failure scenario.

But that's not the point. The point is that as we keep improving the general abilities of our ML models, the leverage these models give their users will increase wildly as well.

It does not matter whether it is a sentient autocomplete. A very useful one might be enough to end us.