6 Smarter Than Us
Transformative artificial intelligence (TAI) may well be developed this century.
TAI would be a massive deal and could pose significant existential risks.
Transformative Events
What are examples of transformative events that shifted the way, quality, or potential of life on a global scale?
Could we have foreseen them coming? Does this support prioritizing x-risks?
Benefits of AI
How does AI benefit you and society, today?
What are the potential benefits from AI? What problems could be solved?
Getting to know AI a bit more
What is required for transformative AI to exist? For AI to become an x-risk?
What is narrow vs general AI?
What are the limits of AI?
Is there something humans have that you think AI could never have?
Do you think humans could be inferior to AI in most, if not all, aspects of life?
Dangers of AI
How does AI harm us, presently?
How could AI be an x-risk?
Does it depend on the ill intent of the AI itself or that of bad actors?
Do you consider digital sentience another danger?
Will humanity be able to control or influence a being >100x more intelligent than humanity?
Imagine you are the most powerful being in the world.
Actually, you are already far more powerful than most insects. How much do you care about them?
Will an AI care about humans?
Forecasting
Do you think transformative or general AI is possible within this century?
What are some signs that transformative or general AI is coming soon?
What do the experts say?
Why does it matter when TAI arrives?
Solvability
What are realistic solutions to the problems addressed?
What are current interventions supporting AI Safety?
How does humanity contribute to increasing the risk from AI?
Do you consider global cooperation a realistic solution?
Considering the current competition between nations, do you believe countries such as the USA and China will focus on safety rather than power?
Exercise:
Should we nationalize AI?
What’s the difference between an x-risk and an s-risk?
Do you think avoiding suffering is more or less important than protecting the potential for a great future?
Could it imply that we should prefer non-existence over existing in a world where an s-catastrophe occurs?
How would applying Bayes’ Rule to an analysis of the world’s problems change where you think you could make the biggest impact, if at all?
What’s something that you feel very unsure about that you think is important for your life or your priorities? How would applying Bayes’ rule to this issue impact your process for thinking it through?
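As a warm-up for the two questions above, here is a minimal worked example of the update step (the numbers are made up for illustration, not taken from the reading): Bayes’ Rule says P(H | E) = P(E | H) × P(H) / P(E). Suppose your prior credence that a problem is highly tractable is P(H) = 0.2, and you encounter evidence E that is twice as likely if the problem is tractable, P(E | H) = 0.6 versus P(E | not-H) = 0.3. Then P(E) = 0.6 × 0.2 + 0.3 × 0.8 = 0.36, so your updated credence is P(H | E) = 0.12 / 0.36 ≈ 0.33. The same mechanics apply to whatever uncertainty you picked in the question above.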