7 Safeguarding Humanity
In Week 7 we examine a grand strategy for humanity. We refer to the period after our long-term potential has been secured as the Long Reflection, during which we can think carefully about what is best for us and our moral circle.
Existential Security
What is meant by 'existential security'?
How likely do you think it is that we achieve existential security?
What is your top societal solution for reaching existential security?
What is the prior cause for that top pick?
e.g., Global Coordination -> EA Movement Building causes Global Coordination
How can you, as an individual, contribute to existential security?
Long Reflection
What is the Long Reflection period about?
How likely do you think it is that the Long Reflection will be successful?
What has to be clarified and done during the Long Reflection before taking action and achieving our potential?
How likely do you think a 100% consensus is during the Long Reflection period?
What could a <99% consensus imply for x-risk or s-risk?
(Next week)
Do you find any of the following points particularly beneficial, or do you actually disagree with any of them?
Do you have anything to add?
Don't regulate prematurely. At the right time, regulation may be a very useful tool for reducing existential risk. But right now, we know very little about how best to do so. Pushing for ill-considered regulation would be a major mistake.
Don't take irreversible actions unilaterally. Some countermeasures may make our predicament even worse (think radical geoengineering or publishing the smallpox genome). So we should be wary of the unilateralist's curse (here), where the ability to take actions unilaterally creates a bias toward action by those with the most rosy estimates.
How likely is it that humanity goes through a process anything like an ideal reflection before making irreversible (or very hard to reverse) decisions?
What will make it more or less likely that cooperative, informed, thoughtful reflection determines the future of humanity?
Don't spread dangerous information. Studying existential risk means exploring the vulnerabilities of our world. Sometimes this turns up new dangers. Unless we manage such information carefully, we risk making ourselves even more vulnerable (see the box "Information Hazards," here).
Don't exaggerate the risks. There is a natural tendency to dismiss claims of existential risk as hyperbole. Exaggerating the risks plays into that, making it much harder for people to see that there is sober, careful analysis amidst the noise.
Don't be fanatical. Safeguarding our future is extremely important, but it is not the only priority for humanity. We must be good citizens within the world of doing good. Boring others with endless talk about this cause is counterproductive. Cajoling them about why it is more important than a cause they hold dear is even worse.
Don't be tribal. Safeguarding our future is not left or right, not eastern or western, not owned by the rich or the poor. It is not partisan. Framing it as a political issue on one side of a contentious divide would be a disaster. Everyone has a stake in our future and we must work together to protect it.
What are concrete suggestions to counter 'tribalism', to achieve unity and global cooperation that benefits all?
Where are current problems you see in international/global decision making, and how to overcome them?
Don't act without integrity. When something immensely important is at stake and others are dragging their feet, people feel licensed to do whatever it takes to succeed. We must never give in to such temptation. A single person acting without integrity could stain the whole cause and damage everything we hope to achieve.
Don't despair. Despairing would sap our energy, cloud our judgment and turn away those looking to help. Despair is a self-fulfilling prophecy. While the risks are real and substantial, we know of no risks that are beyond our power to solve. If we hold our heads high, we can succeed.
Don't ignore the positive. While the risks are the central challenges facing humanity, we can't let ourselves be defined by them. What drives us is our hope for the future. Keeping this at the center of our thinking will provide us, and others, with the inspiration we need to secure our future.
As an individual, what are the ways you can do good related to x-risk?
As a society, what are the best ways to do good related to x-risk?
Discussion 1
"But now our longterm survival requires a deliberate choice to survive. As more and more people come to realize this, we can make this choice. There will be great challenges in getting people to look far enough ahead and to see beyond the parochial conflicts of the day. But the logic is clear and the moral arguments powerful. It can be done."
Criticism: it is easy to brush off the 'parochial conflicts of the day' when they don't impact us but do impact others (e.g., poverty, discrimination, factory farming). What are the best ways to address these tensions ('It's easy to focus on humanity in 1 million years when you and your loved ones aren't affected by very real issues like x, y, z')? How do you respond to this? How much do these tensions need to be addressed?
How much should space colonization within the next few decades be prioritized, given uncertainty about AGI timelines and other x-risks? Is there consensus about how important this is, and how much work is already going into it? Any good reading/resource recommendations?
Does the benefit of increased resiliency outweigh the potential cost of increased complacency?
Discussion 2
In footnote 67, Ord warns that, if funding for x-risk came from sources more general than OpenPhil, there's a risk that it will distort the priorities of the field (e.g., make asteroids the top priority). How should we approach getting funding for x-risk from these general sources?
Ord recommends that we must not be tribal in order to protect the future. Are we succeeding in this?
These questions are useful for thinking about field-building failure modes.