Not sure how good the Robert Miles channel is for mums (mine might not be particularly interested in it!), but for communicating about AI risk, Robert Miles is generally good, and I second this recommendation.
Just a quick comment to say that SoGive would be well positioned to be another consultancy providing services like Rethink.
We have collaborated with Rethink before (see this research) and are in moderately frequent informal contact with them.
We have c10 analysts who are a mixture of volunteers and staff. Mostly volunteers, as the organisation is funded solely by me, and there is a limit to what I can afford.
I’m open to the idea of us doing more of this sort of work, although it would need a discussion before we commit to anything, as we already have a separate strategic focus in mind.
Thanks for this, good question!
I agree with your point that investors have some blind spots, in particular that some areas of finance are not good at incorporating long term considerations.
So I think you’re right, the ESG concept probably could achieve some impact by helping address that sort of blind spot.
I probably should have said something more like “To judge whether I, as someone working in ESG investing, am having material impact, we need to see if I’m actually having an influence on scenarios where there is a tension/trade-off”. This is because ESG-related work is already working to address that blind spot.
Sorry I didn’t spot your comment earlier. Yes, more than happy for this to be shared more widely. Feel free to use this link if you wish: https://effectiveesg.com/2021/05/24/esg-investing-needs-thoughtful-trade-offs/
Thanks very much for pointing out that error—now corrected. I’ve looked at the answers which have been recorded, and they include comments similar to the one you made here, so I think it’s been captured. Thank you very much!
I have now expanded the acronym when it’s used in the first sentence.
How nervous should we be about talking about/recommending action on AI risk?
I think a lot of people in the EA community worry that AI risk is “weird”—sufficiently weird that you should probably be careful talking about it to a broad audience or recommending what they donate to. Many would fear alienating people or damaging credibility. (Especially when “AI risk” refers to the existential risks from AI, as opposed to, e.g., how algorithms could cause inadvertent bias/prejudice.)
A thought experiment to make this more concrete: imagine you were organising a big sponsored event where lots of people would see 3 recommended charities. Would you recommend that (say) MIRI be one of the three?
Thank you to Alex for writing this piece, which I think is really helpful.
I am a Founder and Director of SoGive. We support donors to achieve more impact, and we influence c£1m per annum, the majority of which is from a very small number of major donors.
In this comment, I will say that I think the thrust of Alex’s concerns are valid and still stand, to my mind. But first:
I want to take my hat off to the guys at Giving Green.
My first tentative forays into getting SoGive going were as early as 2015 and the official start date was 2017, so it’s taken a long time to get to where we are. By contrast Giving Green has achieved a much higher profile than we have, and they’ve achieved it quickly. I would also say that Giving Green’s analytical capabilities are ahead of where we were in 2016. Furthermore, the team is still only working on Giving Green in their spare time, so their progress is impressive.
While achieving traction quickly is great, I question whether Giving Green has achieved their traction too quickly.
For the first several years of our existence, SoGive’s recommendations were solely borrowed from other better-resourced organisations like GiveWell, and we’re only now in the process of updating our website to reflect our own analysis.
And of course just because SoGive is doing things one way, it doesn’t mean that that way is right. But there are reasons for our cautious approach.
I believe it is premature for Giving Green to put equal emphasis on recommendations where there is an EA consensus (like CATF) and recommendations where Giving Green is going out on a limb (like TSM).
I have had a small number of conversations with the Giving Green team now, and I think they are good guys who could create a good analytical organisation given time.
And on some of the points that Dan made in this thread, I have sympathies with his position. For example, on Climeworks, he made the point that “you are betting on the technology, not the company”. Contra Alex, I think this is a reasonable argument in favour of the claim that one of the Metaculus forecasts is not analytically helpful (although it doesn’t support Dan’s claim that both are irrelevant).
Having said that, the majority of Alex’s concerns still stand, to my mind.
Furthermore, I have read some of the Giving Green analysis, and believe that Alex’s list of concerns would be longer if there were time to do a more detailed review.
I’m conscious that reading much of this thread may feel punishing for the Giving Green team. However I really am positive about the long-term potential for this project.
There is a low cost to signing the petition, so no harm in doing so.
However, a petition will have minimal upside too.
No MP will be surprised to know that some people are in favour of maintaining the 0.7%, but they will largely imagine those people to be lefties who would never vote for a Conservative MP anyway.
Emails to your MP are more valuable because they help to bring you, an aid supporter, to life.
Thanks Matt. One of our team is in close contact with Oxfam.
Thanks for your message sindirella.
Our approach came about as a result of conversations with people who know generally what works best in influencing lawmakers/lobbying, and specifically in the UK.
I agree with alexrjl re opinion polls. Implementing a poll/survey would be straightforward for us (I used to run a research team when I was a strategy consultant). The reason we’re not doing it is that our discussions with experts suggest there is not much value in doing so.
Great question! We want to do this, but there are a few practicalities we are working through. Also I think your experience would be really valuable for us—I’ll ping you a message.
Thanks for the suggestion.
We reached out to that MP and several other MPs and parliamentarians in the days immediately after the announcement, and are also in conversation with several NGOs active in this space, and other groups.