Leave me anonymous feedback: https://www.admonymous.co/tetraspace
Nowadays I would not be so quick to say that existential risk probability is mostly sitting on “never” 😔. This does open up an additional way to make a clock, literally just tick down to the median (which would be somewhere in the acute risk period).
I was looking for the address of the venue to plan travel, but couldn’t find it on this events page, so I’ll make a comment. It’s on effectivealtruism.org here, namely:
Tobacco Dock, Tobacco Quay, Wapping Lane, London, E1W 2SF, London, United Kingdom.
Also, lending is somewhat of a commitment mechanism: if someone gets or buys a book, they have it forever, which can easily mean it takes forever to get around to it; but if they borrow it, there’s time pressure to give it back, which means they either read it soon or lose it.
For fiction, AI Impacts has an incomplete list here sorted by what kind of failure modes they’re about and how useful AI Impacts thinks they are for thinking about the alignment problem.
As of this comment: 40%, 38%, 37%, 5%. I haven’t taken into account time passing since the button appeared.
With 395 total codebearer-days, a launch has occurred once. This means that, with 200 codebearers this year, the Laplace prior for any launch happening is 40% (1 − (1 − 1/395)^200 ≈ 40%). The number of participants is about halfway between 2019 (125 codebearers) and 2020 (270 codebearers), so doing an average like this is probably fine.
I think there’s a 5% chance that there’s a launch but no MAD, because Peter Wildeford has publicly committed to MAD and says 5%, and he knows himself best.
I think the EA Forum is a little bit, but not vastly, more likely to initiate a launch: the EA Forum hasn’t done Petrov Day before, and qualitatively people seem to be having a bit more fun and irreverence over here. So I’m giving 3% of the no-MAD probability to the EA Forum staying up and 2% to LessWrong staying up.
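For concreteness, here is how I reconstruct the arithmetic behind those headline figures (a sketch: the exact formula, and the reading of the 38% and 37% figures as “LessWrong goes down” and “EA Forum goes down”, are my inferences rather than things stated above):

```python
# Reconstruction of the Petrov Day probabilities above; the labels on the
# 38%/37% figures are my inference, not stated in the original comments.

codebearer_days = 395      # 125 (2019) + 270 (2020)
launches = 1
codebearers_this_year = 200

# Taking the per-codebearer-day launch rate as the observed 1/395 reproduces
# the 40% figure for at least one launch among ~200 codebearers.
p_launch = 1 - (1 - launches / codebearer_days) ** codebearers_this_year

p_no_mad = 0.05            # a launch happens but there is no retaliation
p_ea_forum_up = 0.03       # no-MAD mass given to the EA Forum staying up
p_lesswrong_up = 0.02      # no-MAD mass given to LessWrong staying up
p_mad = 0.40 - p_no_mad    # launch plus retaliation: both sites go down

print(f"P(any launch)          ~ {p_launch:.0%}")                # ~40%
print(f"P(LessWrong goes down) ~ {p_mad + p_ea_forum_up:.0%}")   # ~38%
print(f"P(EA Forum goes down)  ~ {p_mad + p_lesswrong_up:.0%}")  # ~37%
print(f"P(launch without MAD)  ~ {p_no_mad:.0%}")                # 5%
```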
I looked up the financials of GiveDirectly (a charity that does direct cash transfers) to check how easily it could be scaled up to megaproject size, and it turns out that in 2020 it made $211 million in cash transfers, so it’s definitely capable of handling that amount! This breaks down as roughly $64m in cash transfers to recipients in Sub-Saharan Africa (their GiveWell-recommended program) and $146m in cash transfers to recipients in the US.
Another principle, conservation of total expected credit:
Say a donor lottery has you, who donate a fraction x of the total with an impact, as judged by you, of a if you win; the other participants, who collectively donate a fraction y of the total with an average impact, as judged by you, of b if they win; and the benefactor, who covers the remaining fraction z = 1 − x − y of the total with an impact of 0 if they win (if the benefactor wins, the money stays with them and no grant is made). Then total expected credit assigned by you should be xa + yb (followed by A, B and C), and total credit assigned by you should be a if you win, b if they win, and 0 otherwise (violated by C).
Under A, if you win, your credit is xa, their credit is ya, and the benefactor’s credit is za, for a total credit of a. If they win, your credit is xb, their credit is yb, and the benefactor’s credit is zb, for a total credit of b.
Your expected credit is x(xa + yb), their expected credit is y(xa + yb), and the benefactor’s expected credit is z(xa + yb), for a total expected credit of xa + yb.
Under B, if you win, your credit is a and everyone else’s credit is 0, for a total credit of a. If they win, their credit is b and everyone else’s credit is 0, for a total credit of b. If the benefactor wins, everyone gets no credit.
Your expected credit is xa and their expected credit is yb, for a total expected credit of xa + yb.
Under C, under all circumstances your credit is xa and their credit is yb, for a total credit of xa + yb.
Your expected credit is xa and their expected credit is yb, for a total expected credit of xa + yb.
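A small numerical check of the three schemes, with made-up example numbers (the particular values of x, y, a and b are just for illustration):

```python
# Numerical check of the credit schemes: you donate fraction x with impact a if
# you win, the other participants donate fraction y with average impact b if
# they win, and the benefactor covers the rest with impact 0 if they win.
x, a = 0.10, 100.0           # you
y, b = 0.60, 80.0            # other participants
z = 1.0 - x - y              # benefactor

win_probabilities = {"you": x, "they": y, "benefactor": z}

def credit(scheme, winner):
    """(your, their, benefactor's) credit under scheme A, B or C."""
    if scheme == "A":        # everyone gets their fraction of the actual impact
        actual = {"you": a, "they": b, "benefactor": 0.0}[winner]
        return (x * actual, y * actual, z * actual)
    if scheme == "B":        # the winner gets the full impact, everyone else nothing
        if winner == "you":
            return (a, 0.0, 0.0)
        if winner == "they":
            return (0.0, b, 0.0)
        return (0.0, 0.0, 0.0)
    if scheme == "C":        # your fraction of your own would-be impact, regardless
        return (x * a, y * b, 0.0)

for scheme in "ABC":
    expected = [0.0, 0.0, 0.0]
    for winner, prob in win_probabilities.items():
        for i, c in enumerate(credit(scheme, winner)):
            expected[i] += prob * c
    print(scheme, [round(e, 1) for e in expected], "total:", round(sum(expected), 1))

# All three schemes give the same total expected credit, xa + yb = 58.0, but
# only A and B make the realised total depend on who actually wins.
```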
I’ve been thinking of how to assign credit for a donor lottery.
Some ways that seem compelling:
A: You get X% credit for the actual impact of the winner
B: You get 100% credit for the impact if you win, and 0% credit otherwise
C: You get X% credit for what your impact would have been, if you won
Some principles about assigning credit:
Credit is predictable and proportional to the amount you pay to fund an outcome (violated by B)
Credit depends on what actually happens in real life (violated by C)
Your credit depends on what you do, not what uncorrelated other people do (violated by A)
Some actual uses of assigning credit and what they might say:
When I’m tracking my own impact, I use something kind of like C—there’s a line on my spreadsheet that looks like “Donor lottery - £X”, which I smile at a little more than the Long Term Future Fund, because C is how I estimate my expected impact ahead of time.
Impact certificates can’t be distributed according to C because they correspond to actual impacts in the world, and are minted by the organizations that receive the grants and sold in exchange for the grants. You could kind of recover C by selling the rights to any impact certificates you would receive before the lottery is drawn.
A means that your credit is correlated with the decisions of other participants, which the CEA Donor Lottery is designed to avoid and which makes the decision whether to participate harder.
Reasons why one might not give to a donor lottery
What were your impressions for the amount of non-Open Philanthropy funding allocated across each longtermist cause area?
I also completed Software Foundations Volume 1 last year, and have been kind of meaning to do the rest of the volumes but other things keep coming up. I’m working full-time so it might be beyond my time/energy constraints to keep a reasonable pace, but would you be interested in any kind of accountability buddy / sharing notes / etc. kind of thing?
Simple linear models, including improper ones(!!). In Chapter 21 of Thinking, Fast and Slow, Kahneman writes about Meehl’s book Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review, which finds that simple algorithms built by taking a few factors related to the final judgement and weighting them give surprisingly good results.
The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between humans and algorithms has not changed. About 60% of the studies have shown significantly better accuracy for the algorithms. The other comparisons scored a draw in accuracy [...]
If the weights are chosen optimally to predict the training set, these are called proper linear models; otherwise they’re called improper linear models. Kahneman says about Dawes’ The Robust Beauty of Improper Linear Models in Decision Making that
A formula that combines these predictors with equal weights is likely to be just as accurate in predicting new cases as the multiple-regression formula that was optimal in the original sample. More recent research went further: formulas that assign equal weights to all the predictors are often superior, because they are not affected by accidents of sampling.
That is to say: to evaluate something, you can get very far just by coming up with a set of criteria that positively correlate with the overall result and with each other and then literally just adding them together.
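As a toy illustration of that claim (my own sketch with synthetic data, nothing from Meehl or Dawes): fit a “proper” linear model by regression on a small training sample, and compare it on held-out cases with an “improper” model that literally just adds the standardised predictors together.

```python
import numpy as np

# Toy comparison of a regression-fitted ("proper") linear model with an
# equal-weights ("improper") one on held-out data. Entirely synthetic numbers.
rng = np.random.default_rng(0)
n_train, n_test, k = 30, 10_000, 5

def make_data(n):
    X = rng.normal(size=(n, k))                     # standardised predictors
    true_w = np.array([0.5, 0.4, 0.3, 0.2, 0.1])    # all positively related to y
    y = X @ true_w + rng.normal(size=n)             # noisy outcome
    return X, y

X_train, y_train = make_data(n_train)
X_test, y_test = make_data(n_test)

w_proper = np.linalg.lstsq(X_train, y_train, rcond=None)[0]  # fit on small sample
w_improper = np.ones(k)                                      # just add them up

for name, w in [("proper (fitted)", w_proper), ("improper (equal)", w_improper)]:
    r = np.corrcoef(X_test @ w, y_test)[0, 1]
    print(f"{name:17s} correlation with held-out outcomes: {r:.3f}")
```

The equal-weights score typically predicts the held-out outcomes about as well as the weights fitted on the small training sample, which is the Dawes point.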
[Question] How much do cultured animal products cost in 2020?
How has the landscape of malaria prevention changed since you started? Especially since AMF alone has bought on the order of 100 million nets, which seems not insignificant compared to the total scale of the entire problem.
In the list at the top, Sam Hilton’s grant summary is “Writing EA-themed fiction that addresses X-risk topics”, rather than being about the APPG for Future Generations.
Miranda Dixon-Luinenburg’s grant is listed as being $23,000, when lower down it’s listed as $20,000 (the former is the amount consistent with the total being $471k).
Christiano operationalises a slow takeoff as
There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.
in Takeoff speeds, and a fast takeoff as one where there isn’t a complete 4 year interval before the first 1 year interval.
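Here is a minimal sketch of how that operationalisation could be checked against a yearly series of world output; comparing the end years of the two doubling intervals is my reading of “before”, not anything from the post.

```python
# Sketch: is there a complete 4-year doubling of output before the first
# 1-year doubling? (My own operationalisation of the quoted definition.)
def first_doubling_end(output, window):
    """First index t with output[t] >= 2 * output[t - window], else None."""
    for t in range(window, len(output)):
        if output[t] >= 2 * output[t - window]:
            return t
    return None

def is_slow_takeoff(output):
    four_year = first_doubling_end(output, 4)
    one_year = first_doubling_end(output, 1)
    if one_year is None:
        return four_year is not None   # a 4-year doubling happened, a 1-year never did
    return four_year is not None and four_year < one_year

# Hypothetical series: ~19%/yr growth doubles output over 4 years long before
# any single year doubles it, versus a series that jumps within one year.
slow = [100 * 1.19 ** t for t in range(10)]
fast = [100, 110, 120, 260, 600]
print(is_slow_takeoff(slow), is_slow_takeoff(fast))   # True False
```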
The Double Up Drive, an EA donation matching campaign (highly recommended), has, in one group of charities that it’s matching donations to:
StrongMinds
International Refugee Assistance Project
Massachusetts Bail Fund
StrongMinds is quite prominent in EA as the mental health charity; most recently, Founders Pledge recommended it in their report on mental health.
The International Refugee Assistance Project (IRAP) works on immigration reform, and is a recipient of grants from Open Philanthropy as well as being recommended to individual donors by an Open Phil staff member.
The Massachusetts Bail Fund, on the other hand, seems less centrally EA-recommended. It is working in the area of criminal justice reform, and posting bail is an effective-seeming intervention that I do like, but I haven’t seen any analysis of its effectiveness or strong hints of non-public trust placed in it by informed donors (e.g. it has not received any OpenPhil grants; though note that it is listed in the Double Up Drive and the 2017 REG Matching Challenge).
I’d like to know more about the latter two from an EA perspective because they’re both working on fairly shiny and high-status issues, which means that it would be quite easy for me to get my college’s SU to make a large grant to them from the charity fund.
Is there any other EA-aligned information on this charity (and also on IRAP and StrongMinds, since the more the merrier)?
The sum of the grants made by the Long-Term Future Fund in August 2019 is $415,697. Listed below these grants is the “total distributed” figure of $439,197, and listed above them is the “payout amount” figure of $445,697. Huh?
Two people mentioned the CEA not being very effective as an unpopular opinion they hold; has any good recent criticism of the CEA been published?
One issue that comes up with multi-winner approval voting is: suppose there are 15 longtermists and 10 global poverty people. All the longtermists approve the LTFF, MIRI, and Redwood; all the global poverty people approve the Against Malaria Foundation, GiveWell, and LEEP.
The top three vote winners are picked: they’re the LTFF, with 15 votes, MIRI, with 15 votes, and Redwood, with 15 votes.
It is maybe undesirable that 40% of the people in this toy example think those charities are useless, yet 0% of the money is going to charities that aren’t those. (Or maybe it’s not! If a coin lands heads 60% of the time, then you bet on heads 100% of the time.)
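A quick tally of the toy example (just encoding the two blocs described above):

```python
from collections import Counter

# Toy multi-winner approval vote: 15 longtermists and 10 global poverty voters,
# each approving their bloc's three charities; the top three vote-getters win.
ballots = (
    15 * [{"LTFF", "MIRI", "Redwood"}]
    + 10 * [{"Against Malaria Foundation", "GiveWell", "LEEP"}]
)

tally = Counter(charity for ballot in ballots for charity in ballot)
winners = [charity for charity, votes in tally.most_common(3)]
print(tally)     # the longtermist charities get 15 approvals each, the others 10
print(winners)   # all three winning slots go to the 15-voter bloc's charities
```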