circumscribed realm of controlled intervention is its susceptibility to being used to tell us exactly what we want to hear. A three-day conference, ‘Effective Altruism Global’, was held this summer at Google’s headquarters in Mountain View, California. While some of the sessions focused on the issues closest to MacAskill’s heart – cost-effective philanthropy, global poverty, career choice – much of it was dominated, according to Dylan Matthews, who was there and wrote about it for Vox, by talk of existential risks (or x-risks, as the community calls them). An x-risk, as defined by the Oxford philosopher Nick Bostrom, who popularised the concept, is an event that would ‘permanently and drastically curtail humanity’s potential’ – total annihilation is the obvious case. Given the number of people who might live in the future if not for such an event – Bostrom estimates the figure at 10⁵⁴, assuming that we master interstellar travel and the uploading of human minds to computers – the expected value of preventing an x-risk dwarfs the value of, say, curing cancer or preventing genocide. This is so even if the probability of being able to do anything about an x-risk is vanishingly small. Even if Bostrom’s 10⁵⁴ estimate has only a 1 per cent chance of being correct, the expected value of reducing an x-risk by one billionth of one billionth of a percentage point (that’s 0.0000000000000000001 per cent) is still a hundred billion times greater than the value of saving the lives of a billion people living now (see the arithmetic sketched below). So it turns out to be better to try to prevent some hypothetical x-risk, even with an extremely remote chance of being able to do so, than to help actual living people. X-risks could take many forms – a meteor crash, catastrophic global warming, plague – but the one that effective altruists like to worry about most is the ‘intelligence explosion’: artificial intelligence taking over the world and destroying humanity. Their favoured solution is to invest more money in AI research. Thus the humanitarian logic of effective altruism leads to the conclusion that more money needs to be spent on computers: why invest in anti-malarial nets when there’s a robot apocalypse to halt?

It’s no surprise that effective altruism is popular in Silicon Valley: PayPal founder Peter Thiel, Skype developer Jaan Tallinn and Tesla CEO Elon Musk are all major financial supporters of x-risk research. Who doesn’t want to believe that their work is of overwhelming humanitarian significance? The subtitle of Doing Good Better promises ‘a radical new way to make a difference’; one of the organisers of the Googleplex conference declared that ‘effective altruism could be the last social movement we ever need.’ But effective altruism, so far at least, has been a conservative movement, calling us back to where we already are: the world as it is, our institutions as they are. MacAskill does not address the deep sources of global misery – international trade and finance, debt, nationalism, imperialism, racial and gender-based subordination, war, environmental degradation, corruption, exploitation of labour – or the forces that ensure its reproduction.
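A minimal back-of-the-envelope sketch of the expected-value arithmetic quoted above, in Python. It takes Bostrom’s 10⁵⁴ figure and the 1 per cent credence at face value, reads the risk reduction of 0.0000000000000000001 per cent as the probability 10⁻²¹, and makes the simplifying assumption that ‘value’ is just a count of lives; these modelling choices are an illustration, not Bostrom’s or MacAskill’s own calculation.

```python
# Back-of-the-envelope check of the x-risk expected-value argument.
# All numbers are the ones quoted in the passage above; treating
# "value" as a raw count of lives is a simplifying assumption.

future_people = 1e54      # Bostrom's estimate of potential future lives
credence = 0.01           # a 1 per cent chance the estimate is correct
risk_reduction = 1e-21    # 0.0000000000000000001 per cent, as a probability

expected_lives = future_people * credence * risk_reduction
billion_living_people = 1e9

print(f"expected lives saved: {expected_lives:.1e}")      # ~1.0e+31
print(f"ratio to saving a billion living people: "
      f"{expected_lives / billion_living_people:.1e}")    # ~1.0e+22
```

On these numbers the expected benefit comes out near 10³¹ lives – roughly 10²² times the value of saving a billion living people, comfortably more than the ‘hundred billion times’ the argument needs – which is why the conclusion is so insensitive to the vanishingly small probabilities involved.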