Week 7 - What do you think?

Making beliefs pay rent

Beliefs should include the anticipation of specific consequences

Recommended Approach

  • What to anticipate > what to believe

    • what does the belief predict or prohibit?

Discouraged Approach

  • Beliefs that do not allow for anticipation or world modelling

Bayes' rule: Guide

What?

  • a way to update your guess when you get new clues

  • how much to revise our probabilities (change our minds) when we learn a new fact or observe new evidence.

How?

  1. You start with a belief (Prior)

    • I think it’s likely to rain today

  2. You get new evidence.

    • I see dark clouds

  3. Bayes tells you how much that new evidence should change your belief (prior); the formula is written out after this list

    • Not all clues are equal. Some are strong, some are weak.
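
For reference, the rule behind these steps in standard notation (this formula is not in the original notes): let H be the belief ("it will rain") and E the evidence ("dark clouds").

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)},
\qquad
P(E) = P(E \mid H)\, P(H) + P(E \mid \neg H)\, P(\neg H)
```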

Example

  • You start with a belief. (Prior)

    • I think it’s likely to rain today

    • Assign Probability to belief

      • 80% yes rain

      • 20% no rain

  • You get new evidence.

    • I see dark clouds.

    • Assign probability to evidence

      • 'Imagine that scenario really is happening. Given that, how often would we see the evidence we just observed?'

        • If it really is raining, how often would I see dark clouds?

          • Dark clouds would appear 90% of the time, i.e., P(dark clouds | rain) = 0.9

        • If it isn’t raining, how often would I still see dark clouds?

          • Dark clouds would still appear 10% of the time, i.e., P(dark clouds | no rain) = 0.1

  • Bayes tells you how much that new evidence should change your belief.

    • Prior x Evidence-Probability

      • 0.8*0.9 = 0.72

      • 0.2*0.1 = 0.02

  • 'Normalize' so the results become proper probabilities again; a short code sketch of the full update follows this example

    • Divide each by total to make proper probabilities

      • Total = 0.72+0.02 = 0.74

    • Updated Belief for Rain: 0.72/0.74 = 0.973

    • Updated Belief for No Rain: 0.02/0.74 = 0.027
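
A minimal Python sketch of the same update, using the numbers from this example (variable names are my own, not from the notes):

```python
# Bayesian update for the rain example above.
prior_rain = 0.8               # P(rain) -- the prior belief
prior_no_rain = 0.2            # P(no rain)

p_clouds_given_rain = 0.9      # P(dark clouds | rain)
p_clouds_given_no_rain = 0.1   # P(dark clouds | no rain)

# Multiply each prior by how likely the observed evidence is under that scenario.
joint_rain = prior_rain * p_clouds_given_rain            # 0.8 * 0.9 = 0.72
joint_no_rain = prior_no_rain * p_clouds_given_no_rain   # 0.2 * 0.1 = 0.02

# Normalize so the updated beliefs sum to 1.
total = joint_rain + joint_no_rain                       # 0.74
posterior_rain = joint_rain / total                      # ~0.973
posterior_no_rain = joint_no_rain / total                # ~0.027

print(f"P(rain | dark clouds)    = {posterior_rain:.3f}")
print(f"P(no rain | dark clouds) = {posterior_no_rain:.3f}")
```

Running it prints P(rain | dark clouds) ≈ 0.973, matching the hand calculation above.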

What is evidence?

Reality (evidence) is based on causality, i.e., if this, then that; and if not this, then not that.

  • If this

    • Shoelaces being untied

  • then that

    • Seeing the shoelaces as being untied

Non-Causality would remove the possibility of evidence

Without causality, even if the shoelaces are untied you could just as easily end up:

  • seeing shoelaces as being untied

  • seeing shoelaces as being tied

Rationality follows cause and effect, and hence tracks reality (evidence).

Rational beliefs mirror reality: they arise from the cause-and-effect chain of observing it.

  • Light hitting the untied shoelaces is reflected into my retina

  • I am aware of what I am seeing

  • I see the shoelaces being untied

Check whether your thought processes represent reality i.e.,

  • end up believing 'snow is white' if and only if snow is white

Independent impressions

Two types of Beliefs

  • Independent Impression (II) Definition

    • what you'd believe about that thing if you weren't updating your beliefs in light of peer disagreement

      • if you weren't taking into account your knowledge about what other people believe and how trustworthy their judgement seems on this topic

  • all-things-considered belief (ATCB) Definition

    • your belief after taking peer disagreement into account

How to Approach Discussions

  • Discern Independent Impressions & ATCB

  • Feel comfortable reporting your own independent impressions

  • Specify whether one is expressing II or ATCB

  • Benefit of forming & reporting II

    • otherwise, communities I'm part of might end up with overly certain and homogeneous beliefs

How to Approach Decisions

  • Always based on ATCB

Example

  • My independent impression

    • it's plausible that an unrecoverable dystopia is more likely than extinction and that we should prioritise such risks more than we currently do.

  • All-Things-Considered-Belief

    • My independent impression seems relatively uncommon among people who've thought a lot about existential risks.

    • That observation pushes my all-things-considered belief somewhat away from my independent impression and towards what most of those people seem to think.

Reflecting on the Last Year — Lessons for EA (opening keynote at EAG)

Unilateralist's Curse

  • A few members of a very large and informal group do something wrong, against the wishes of almost all the others

  • Example - Sam Bankman-Fried & FTX

    • SBF, the most famous person in crypto, had become the most famous person in EA

    • Someone whose views and actions were quite radical and unrepresentative of most EAs

    • Became the most public face of effective altruism

  • Negative Consequences

    • Distorting public perception

      • EA became more closely connected to an industry that was widely perceived as sketchy

      • Politics: SBF donated a great deal of money to politics

        • EA had tried hard over the previous 10 years to avoid being seen as a left or right issue, since that would immediately alienate half the population

    • Distorting our self-perception of what it meant to be an EA

Approach to Morality

  • EA is not a complete moral theory

  • Moral Compatibility

    • e.g., side-constraints, options, the distinction between acts and omissions, egalitarianism, allowing the intentions behind an act to matter, and so on

  • Compatibility of the 3 Moral Frameworks

    • They don't have to be in disagreement with each other

    • Utilitarianism

      • Consequentialism

        • The only thing that matters, morally speaking, is how good the outcome is.

      • Utilitarianism

        • Consequentialism + the relevant outcome is the total wellbeing of all individuals.

          • Scope Sensitivity

            • 'Indeed, it is compatible with almost anything, just so long as we can still agree that saving a life is a big deal, and saving ten is a ten times bigger deal.'

      • Dangers of Utilitarianism

        • Immoral Utilitarianism

          • customers’ own deposits were raided to pay for an increasingly desperate series of bets to save the company.

            • Even if that strategy had worked and the money was restored to the customers, I still think it would have been illegal and immoral.

        • Imperfect attempts to follow it can lead to very bad outcomes.

      • I could leave behind the controversial claims of utilitarianism:

        • that only effects on wellbeing mattered

        • that these should be simply added together

        • and that wellbeing took only the form of happiness and suffering.

    • Deontology: Rules

    • Virtue: Character/Intent

      • What?

        • Consider character traits and whether they are conducive to good outcomes, calling those that are conducive ‘virtues’.

      • Assessing character

        • Judge a character trait by its tendency to produce good outcomes.

      • Neglect

        • 'And we should put more focus on character into our community standards.'

        • 'I think the importance of character is seriously neglected in EA circles'

        • Reason

          • Perhaps one reason is that unlike many other areas, we don’t have comparative advantage when it comes to identifying virtues.

            • This means that we should draw on the accumulated wisdom as a starting point.

  • Don’t act without integrity.

    • When something immensely important is at stake and others are dragging their feet, people feel licensed to do whatever it takes to succeed.

    • A single person acting without integrity could stain the whole cause and damage everything we hope to achieve.

  • Excellence > Perfection

    • Caution with Maximization, due to moral uncertainty

      • if the thing you are maximising is even slightly off, it can go very wrong in the extremes

    • Scale matters

    • But not a fixation on the absolute maximum — on getting from 99% to 100%.

  • Stay close to common-sense on almost everything.

    • It encodes the accumulated wisdom of thousands of years of civilisation (and hundreds of thousands of years before that).

    • Indeed, even when the stated reasons for some rule are wrong, the rule itself can still be right — preserved because it leads to good outcomes, even if we never found out why.

  • Don’t trust common-sense morality fully — but trust your deviations from it even less.

    • It has survived thousands of years; your clever idea might not survive even one.

  • Explore various ways common-sense might be importantly wrong.

    • Discuss them with friends and colleagues. There have been major changes to common-sense morality before, and finding them is extremely valuable.

  • Make one or two big bets.

    • For example, mine were that giving to the most cost-effective charities is a key part of a moral life and that avoiding existential risk is a key problem of our time.

  • But then keep testing these ideas.

    • Listen to what critics say — most new moral ideas are wrong.

  • And don’t break common-sense rules to fulfil your new ideas.

EA is about maximization, and maximization is perilous

Maximization is Perilous

  • What?

    • Do the most good possible by maximizing X

  • Why?

    • Uncertainty of what 'good' actually is

      • 'None of us really knows.'

      • EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA

  • Risks

    • Risk breaking/downplaying/shortchanging lots of things that aren’t X, which may be important in ways you’re not seeing

    • We’d have a bitterly divided community, with clusters having diametrically opposed goals.

      • Focus on the implications of your actions for the long-run future

        • View 1 “The more persons there are in the long-run future, the better it is”

        • View 2 “The more persons there are in the long-run future, the worse it is.”

      • Potential Negative

        • The clusters would have reason to take a fundamentally adversarial and low-trust stance toward each other

    • We’d have a community full of low-integrity people, and “bad people” as most people define it.

      • Communicate honestly, even when this would make our arguments less persuasive and cause fewer people to take action based on them?

      • “say whatever it takes” to e.g. get people to donate to the charities we estimate to be best?

      • Stick to promises we made? Or does utilitarianism recommend that we go ahead and break them when this would free us up to pursue our current best-guess actions?

    • We’d probably have other issues that should just generally give us pause.

      • Being a bad friend (e.g., refusing to do inconvenient or difficult things when a friend is in need)

      • bad partner (same)

      • narrow thinker (not taking an interest in topics that don’t have clear relevance to the maximand)

  • Recommendation

    • Already Established

      • most EAs are reasonable, non-fanatical human beings, with a broad and mixed set of values like other human beings, who apply a broad sense of pluralism and moderation to much of what they do.

      • EAs’ writings and statements are much more one-dimensional and “maximizy” than their actions.

    • Embrace the core ideas of EA with limits or reservations

    • Need to constantly inject pluralism and moderation

      • The core ideas on their own seem perilous, and that’s an ongoing challenge.

    • Be cautious about people 'showing off' how little moderation they accept

      • How self-sacrificing, “weird,” extreme, etc. they’re willing to be in the pursuit of EA goals.
