Technically Exists

Superintelligence reference page


Superintelligence is a very broad and complex topic. These resources are for those who are mostly or entirely unfamiliar with the subject.

Alignment problem

The alignment problem is the problem of aligning an AI’s goals with the values of humanity. Solving it is widely believed to be critical to ensuring that superintelligence has a positive impact on the world.

Orthogonality thesis

The orthogonality thesis states that, barring a few edge cases, any level of intelligence can be combined with any terminal goal. This precludes scenarios where, for example, a sufficiently smart superintelligence would automatically replace whatever goals it was initially given with a goal of behaving morally.

Instrumental convergence thesis

The instrumental convergence thesis states that some instrumental goals are useful to agents with a wide variety of terminal goals. Self-preservation, resource acquisition, and self-improvement are all examples of such convergent instrumental goals.


Timelines

When will superintelligence be created? The answer to this question is important because it determines how much time there is to prepare for its arrival. As such, it is worth looking at predictions of when various AI milestones will be reached.