Update: this post was instantly demoted from #1 to #26 on the HN frontpage :) Hmm.
I think the debates about AI safety need not rely on anything more than a simple chain of logic.
I propose a rule of thumb to serve as a pragmatic definition of a superintelligent system. It is meant to be practical enough that everyone can share an understanding of whether a system has reached superintelligence or not.
LLMs' intelligence is different from human intelligence. The evolutionary forces that forged our Homo sapiens brains are not what created LLMs.