Technically Exists

Why discuss superintelligence?

2018-07-01

My site now has a miscellaneous section containing exactly one page: the Superintelligence reference page. I’ve briefly brought up superintelligence in the past, but I’ve never discussed it in detail. Given how much others have already written about it, though, it makes more sense to collect the resources I think explain it well than to write my own explanation from scratch. So if you have no idea what superintelligence is or why you should care about it, I’d recommend starting with some of the links in the Introductions section before reading on.

You back? Good. At this point, you might be wondering why I’ve singled out this topic as one worth prioritizing. The answer is that superintelligence is an existential risk. By one conservative estimate, humanity could create 10^34 fulfilling years of human life over the remainder of the universe’s existence, and an existential catastrophe would permanently destroy that potential. Ensuring humanity remains capable of realizing it is therefore extremely important. This and other concerns justify treating existential risk prevention as a global priority.

Because superintelligence is an existential risk, ensuring that it is aligned with human values is a critically important problem; some organizations even consider it potentially the most important global problem humanity has identified thus far. At the same time, media coverage of this issue, and of AI in general, has been increasing, and much of it is not very good. That makes it all the more important to have accurate resources available on the subject.

As of this post, the reference page has only three sections, but I plan to expand it over time. Topics I intend to add include the orthogonality and instrumental convergence theses, the intelligence explosion concept, and scenarios without an intelligence explosion. There’s a lot of room for the page to grow, and I think doing so is definitely worthwhile.
