Technically Exists

Superintelligence reference page

Introductions

Superintelligence is a broad and complex topic. These resources are intended for readers who are mostly or entirely unfamiliar with the subject.

Alignment problem

The alignment problem is the problem of ensuring that an AI’s goals align with the values of humanity. Solving it is widely regarded as critical to ensuring that superintelligence has a positive impact on the world.

Intelligence explosion

An intelligence explosion is a hypothetical scenario in which a self-improving intelligence improves itself ever more rapidly, because each improvement increases its capacity to make further improvements. Whether this scenario is feasible has major implications for how, and how quickly, superintelligence may arise.
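
As a rough illustration (a toy model, not something taken from this page), recursive self-improvement is sometimes caricatured with a growth equation in which the rate of improvement depends on the current level of intelligence I(t):

\[ \frac{dI}{dt} = k \, I(t)^{p} \]

Here k and p are purely illustrative parameters, not quantities anyone knows how to measure. With p = 1 intelligence grows exponentially, and with p > 1 it diverges in finite time, which is the mathematical picture behind a fast or hard takeoff; p < 1 produces the gentler curve associated with the slow takeoff described next.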

Slow takeoff

A slow takeoff, sometimes called a soft takeoff, is a scenario in which a self-improving intelligence improves itself slowly enough for humans to intervene during the process. It contrasts with the fast or hard takeoff of the intelligence explosion scenario.

Orthogonality thesis

The orthogonality thesis states that, barring a few edge cases, any level of intelligence can be combined with any terminal goal. If true, it rules out scenarios in which, for example, a sufficiently intelligent superintelligence would automatically discard whatever goals it was initially given in favor of behaving morally.

Instrumental convergence thesis

The instrumental convergence thesis states that some instrumental goals are useful to agents with a wide variety of terminal goals. Self-preservation, resource acquisition, and self-improvement are all examples of such convergent instrumental goals: almost any terminal goal is easier to achieve if the agent continues to exist, controls more resources, and becomes more capable.

Timelines

When will superintelligence be created? The answer matters because it determines how much time there is to prepare for its arrival. As such, it is worth examining predictions of when various AI milestones will be reached.