Superintelligence is a very broad and complex topic. These resources are for those who are mostly or entirely unfamiliar with the subject.
The alignment problem is the problem of aligning an AI's goals with the values of humanity. Solving this problem is widely believed to be critical to ensuring that superintelligence has a positive impact on the world.
An intelligence explosion is a hypothetical scenario in which a self-improving intelligence is able to improve itself more and more rapidly thanks to the very improvements it makes. The feasibility of this scenario has great implications for how superintelligence may arise.
A slow takeoff, sometimes called a soft takeoff, is a scenario in which a self-improving intelligence improves itself slowly enough for humans to intervene during the process. This contrasts with the fast or hard takeoff of the intelligence explosion scenario.
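The contrast between the two takeoff scenarios can be illustrated with a toy growth model. This is only a sketch under arbitrary assumptions (the `rate` constant, the `feedback` exponent, and the step counts are all invented for illustration): capability grows each step by an amount proportional to a power of current capability, and the exponent controls whether improvements compound ever faster or run into diminishing returns.

```python
# Toy model (illustrative only, not a prediction): capability grows each
# step by rate * capability ** feedback. A feedback exponent above 1 means
# each gain makes the next gain larger (hard-takeoff-like dynamics); an
# exponent below 1 means gains face diminishing returns (soft-takeoff-like).

def simulate(feedback: float, steps: int = 20, rate: float = 0.05) -> list[float]:
    """Return the capability trajectory for a given feedback exponent."""
    capability = 1.0
    trajectory = [capability]
    for _ in range(steps):
        capability += rate * capability ** feedback
        trajectory.append(capability)
    return trajectory

hard = simulate(feedback=1.5)  # self-improvement compounds faster and faster
soft = simulate(feedback=0.5)  # self-improvement slows as capability rises
```

Under these made-up parameters the `hard` trajectory ends well above the `soft` one, and the gap widens with every step; the qualitative point is that the same self-improvement loop can yield very different timescales depending on how strongly improvements feed back into further improvement.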
The orthogonality thesis states that, barring a few edge cases, any level of intelligence can be combined with any terminal goal. This precludes scenarios where, for example, sufficiently smart superintelligences will automatically replace any arbitrary goals they were initially given with a goal of behaving morally.
The instrumental convergence thesis states that there are instrumental goals that will be useful to agents with a wide variety of terminal goals. Self-preservation, resource acquisition, and self-improvement are all examples of convergent instrumental goals.
When will superintelligence be created? The answer to this question matters because it determines how much time there is to prepare for its arrival. As such, it is worth looking at predictions of when various AI milestones will be reached.