Widely useful is also widely dangerous.

I think the debate about AI safety need not rest on anything more than a simple chain of logic.

  1. Beneath all the buzzwords, the goal of building AGI is to create a superhuman problem solver in any domain.

  2. If a system cannot solve a problem that the person best qualified for the task could solve, then it is not AGI.

  3. If a system can solve any problem we can, it can also devise plans for any dangerous act a human might plot. If it cannot, that is either because it is not actually AGI or because it has been aligned.

I do not see another option. Either it is widely capable, and therefore dangerous if unaligned, or it is not AGI and thus mostly irrelevant.