EA Forum discourse tracks actual stakes very poorly
Examples:
There have been many posts about EA spending lots of money, but to my knowledge no posts about the failure to hedge crypto exposure against the crypto crash of the last year, or the failure to hedge Meta/Asana stock, or EA’s failure to produce more billion-dollar start-ups. EA spending norms seem responsible for $1m–$30m of 2022 expenses, but failures to preserve/increase EA assets seem responsible for $1b–$30b of 2022 financial losses, a ~1000x difference.
People are demanding transparency about the purchase of Wytham Abbey (£15m), but they’re not discussing whether it was a good idea to invest $580m in Anthropic (HT to someone else for this example). The financial difference is ~30x, the potential impact difference seems much greater still.
Basically I think EA Forum discourse, Karma voting, and the inflation-adjusted overview of top posts completely fail to correctly track the importance of the ideas presented there. Karma seems to be useful to decide which comments to read, but otherwise its use seems fairly limited.
(Here’s a related post.)
Are there posts about those things which you think are under karma’d? My guess is the problem is more that people aren’t writing about them, rather than karma not tracking the importance of things which are written about. (At least in these two specific cases.)
There aren’t posts about them I think, but I’d also predict that they’d get less Karma if they existed.
Cool, fwiw I’d predict that a well-written Anthropic piece would get more than the 150 karma the Wytham post currently has, though I acknowledge that “well-written” is vague. Based on what this commenter says, we might get to test that prediction soon.
FWIW the Wytham Abbey post also received ~240 votes, and I doubt that a majority of downvotes were given for the reason that people found the general topic unimportant. Instead I think it’s because the post seemed written fairly quickly and in a prematurely judgemental way. So it doesn’t seem right to take the karma level as evidence that this topic actually didn’t get a ton of attention.
How can you see that it got 240 votes?
Anyway I agree that I wrote it quickly and prematurely. I edited it to add my current thoughts.
You can see the number of votes by hovering your mouse above the karma.
Good point, I wasn’t tracking that the Wytham post doesn’t actually have that much Karma. I do think my claim would be correct regarding my first example (spending norms vs. asset hedges).
My claim might also be correct if your metric of choice was the sum of all the comment Karma on the respective posts.
Yeah, seems believable to me on both counts, though I currently feel more sad that we don’t have posts about those more important things than the possibility that the karma system would counterfactually rank those posts poorly if they existed.
There is a write-up specifically on this (the Anthropic investment), which has been reviewed by some people. The author is now holding it back for ~4-6 weeks because they were requested to.
Requested to by whom, and for what reason? Is this information you have access to?
Yeah, I think making sure discussion of these topics (both Anthropic and Wytham) is appropriately careful seems good to me. E.g., the discussion of Wytham seemed very low-quality to me, with few contributors providing sound analysis of how to think about the counterfactuals of real estate investments.
At this point, I think it’s unfortunate that this post has not been published; a >2-month delay seems too long to me. If there’s anything I can do to help get this published, please let me know.
So we’re waiting 1.5 months to see if Anthropic was a bad idea? On the other hand: Wytham Abbey was purchased by EV / CEA and made the news. Anthropic is a private venture. If Anthropic shows up in an argument, I can just say I don’t have anything to do with that. But if someone mentioned that Wytham Abbey was bought by the guy who wrote the basics of how we evaluate areas and projects… I still don’t know what to say.
I think this is about scope of responsibility. To my knowledge, “EA” doesn’t own any Meta/Asana stock, didn’t have billions of (alleged) assets caught up in crypto, and didn’t pour $580MM into Anthropic. All that money belongs/belonged to private individuals or corporations (or so we thought...), and there’s arguably something both a bit unseemly and pointless about writing on a message board about how specific private individuals should conduct their own financial affairs.
On the other hand, EVF is a public charity—and its election of that status and solicitation of funds from the general public rightly call for a higher level of scrutiny of its actions vis-a-vis those of private individuals and for-profit corporations.
I don’t actually know the details, but as far as I know, EVF is primarily funded by private foundations/billionaires, too.
Also, some of this hedging could’ve been done by community members without actual ownership of Meta/Asana/crypto. Again, the lack of discussion of this seems problematic to me.
EVF and its US affiliate, CENTRE FOR EFFECTIVE ALTRUISM USA INC, are public charities. That means there is an indirect public subsidy (in terms of foregone tax revenues on money donated) by US and UK taxpayers somewhere in the ballpark of about 25% of donations. Based on the reports I linked, that seems to be about $10MM per year, probably more in recent years given known big grants. EVF also solicits donations from the general public on its website and elsewhere, which I don’t think is true of the big holder of Meta/Asana stock. (Good Ventures, which has somewhat favorable tax treatment as a private foundation, does not seem concentrated in this stock per the most recent 990-PF I could find.)
If an organization solicits from the public and accepts favorable tax treatment for its charitable status, the range of public scrutiny it should expect is considerably higher than for a private individual.
As far as I know, large philanthropic foundations often use DAFs to attain public charity status, getting the same tax benefits. And if they’re private foundations, they’re still getting a benefit of ~15%, and possibly a lot more via receiving donations of appreciated assets.
I also don’t think public charity status and tax benefits are especially relevant here. I think public scrutiny is not intrinsically important; I mainly care about taking actions that maximize social impact, and public scrutiny seems much worse for this than figuring out high-impact ways to preserve/increase altruistic assets.
“there’s arguably something both a bit unseemly and pointless about writing on a message board about how specific private individuals should conduct their own financial affairs”

I think you would care about this specific investment if you had more context (or at least I expect that you believe you would deserve to understand the argument). In some sense, this proves Jonas right.
A tangentially related point about example 1: Wow, it really surprises me that crypto exposure wasn’t hedged!
I can think of a few reasons why those hedges might be practically infeasible (possibilities: financial cost, counterparty risk of crypto hedge, relationship with donor, counterparty risk of donor). I take your point that it’d be appropriate if these sorts of things got discussed more, so I think I will write something on this hedging tomorrow. Thanks for the inspiration!
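To make the feasibility question a bit more concrete, here’s a back-of-envelope sketch (purely hypothetical numbers, a deliberately crude one-factor model, and none of the costs or counterparty issues listed above) of how such a hedge might be sized:

```python
def hedge_notional(expected_donations, exposure_beta):
    """Rough notional to short so that a drop in the asset is approximately
    offset by the change in expected donations.

    exposure_beta: assumed sensitivity of the donations to the asset price,
    e.g. 1.0 if the donations are effectively held in the asset itself.
    """
    return expected_donations * exposure_beta

# Hypothetical example: $1b of expected future donations assumed to move
# one-for-one with the asset -> short roughly $1b notional.
print(hedge_notional(1e9, 1.0))  # 1000000000.0
```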
I agree that the hedges might be practically infeasible or hard. But my point is that this deserves more discussion and consideration, not that it was obviously easy to fix.
Ah, I see—I’ll edit it a bit for clarity then
Edit: should be better now
Should we be using Likelihood Ratios in everyday conversation the same way we use probabilities?
Disclaimer: Copy-pasting some Slack messages here, so this post is less coherent or well-written than others.
I’ve been thinking that perhaps we should be indicating likelihood ratios in everyday conversation to talk about the strength of evidence, the same way we indicate probabilities to talk about beliefs; that there should be a likelihood ratio calibration game; and that we should have cached likelihood ratios for common types of evidence (e.g., experimental research papers of a given level of quality).
However, maybe this is less useful because different pieces of evidence are often correlated? Or can we just talk about the strength of the uncorrelated portion of additional evidence?
See also: Strong Evidence is Common
Example
Here’s an example with made-up numbers:
Question: Are minimum wages good or bad for low-skill workers?
Theoretical arguments that minimum wages increase unemployment, LR = 1:3
Someone sends an empirical paper and the abstract says it improved the situation, LR = 1.2:1
IGM Chicago Survey results, LR = 5:1
So if you start out with a 50% probability, your prior odds are 1:1, your posterior odds after seeing all the evidence are 6:3 or 2:1, so your posterior probability is 67%.
If another person starts out with a 20% probability, their prior odds are 1:4, their posterior odds are 1:2, their posterior probability is 33%.
These two people agree on the strength of evidence but disagree on the prior. So the idea is that you can talk about the strength of the evidence / size of the update instead of the posterior probability (which might mainly depend on your prior).
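For concreteness, here’s a minimal sketch (Python, treating the pieces of evidence as independent) of the arithmetic behind the example above, using the odds form of Bayes’ rule:

```python
from math import prod

def posterior_probability(prior_prob, likelihood_ratios):
    """Combine a prior probability with likelihood ratios (each expressed
    as a single number, e.g. 1:3 -> 1/3) via the odds form of Bayes' rule.
    Assumes the pieces of evidence are independent."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * prod(likelihood_ratios)
    return posterior_odds / (1 + posterior_odds)

evidence = [1 / 3, 1.2, 5]  # the three LRs from the example above
print(posterior_probability(0.5, evidence))  # ~0.67
print(posterior_probability(0.2, evidence))  # ~0.33
```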
Calibration game
A baseline calibration game proposal:
You get presented with a proposition, and submit a probability. Then you receive a piece of evidence that relates to the proposition (e.g. a sentence from a Wikipedia page about the issue, or a screenshot of a paper/abstract). You submit a likelihood ratio, which implies a certain posterior probability. Then both of these probabilities get scored using a proper scoring rule.
My guess is that you can do something more sophisticated here, but I think the baseline proposal basically works.
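Here’s a minimal sketch of how a single round might be scored, assuming a logarithmic proper scoring rule (any proper rule, e.g. Brier, would also work); the numbers in the example call are made up:

```python
from math import log

def log_score(prob, outcome):
    """Logarithmic proper scoring rule: higher (closer to 0) is better."""
    return log(prob) if outcome else log(1 - prob)

def score_round(initial_prob, likelihood_ratio, outcome):
    """Score both the initial probability and the posterior implied
    by the submitted likelihood ratio."""
    prior_odds = initial_prob / (1 - initial_prob)
    posterior_odds = prior_odds * likelihood_ratio
    posterior_prob = posterior_odds / (1 + posterior_odds)
    return log_score(initial_prob, outcome), log_score(posterior_prob, outcome)

# 40% initial probability, a 3:1 LR for the evidence, proposition turns out true.
print(score_round(0.4, 3.0, True))  # ≈ (-0.92, -0.41): the update helped
```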
I really like the proposed calibration game! One thing I’m curious about is whether real-world evidence more often looks like a likelihood ratio or like something else (e.g. pointing towards a specific probability being correct). Maybe you could see this from the structure of priors + likelihood ratios + posteriors in the calibration game — e.g. check whether the long-run top scorers’ likelihood ratios correlate more or less strongly than their posterior probabilities.
(If someone wanted to build this: one option would be to start with pastcasting and then give archived articles or Wikipedia pages as evidence. Maybe a sophisticated version could let you start out with an old relevant Wikipedia page, and then see a Wikipedia page from much closer to the resolution date as extra evidence.)
Interesting point, agreed that this would be very interesting to analyze!
Relevant calibration game that was recently posted. I found it surprisingly addictive; maybe they’d be interested in implementing your ideas.
There is also the question of whether people assign different strength to the same evidence. Maybe reporting why you think that the evidence is 1:3 rather than 1:1.5 or 1:6 would help.
Yeah exactly, that’s part of the idea here! E.g., on Metaculus, if someone posts a source and updates their belief, they could display the LR to indicate how much it updated them.
Note that bits might be better because you can sum them.
Yeah fair, although I expect people to have more difficulty converting log odds back into probabilities.
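For illustration, a small sketch of the round trip: likelihood ratios to bits, summing, and converting back to a probability (the last step is the part people may find harder):

```python
from math import log2

def lr_to_bits(lr):
    """A likelihood ratio expressed in bits of evidence."""
    return log2(lr)

def probability_from_bits(prior_prob, total_bits):
    """Convert a prior plus a total amount of evidence (in bits)
    back into a posterior probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * 2 ** total_bits
    return posterior_odds / (1 + posterior_odds)

bits = [lr_to_bits(lr) for lr in (1 / 3, 1.2, 5)]  # ≈ [-1.58, 0.26, 2.32]
print(sum(bits))                                   # ≈ 1.0 bit of evidence in total
print(probability_from_bits(0.5, sum(bits)))       # ~0.67, matching the example
```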
Can you walk through the actual calculations here? Why did the Chicago survey shift the person from 1.2:1 to 5:1, and not a different ratio?
No, the likelihood ratio doesn’t describe the absolute shift (i.e., not from 1.2:1 to 5:1) but the relative shift (i.e., from 1:x to 5:x).
Yeah. Here’s the example in more detail:
Prior odds: 1:1
Theoretical arguments that minimum wages increase unemployment, LR = 1:3 → posterior odds 1:3
Someone sends an empirical paper and the abstract says it improved the situation, LR = 1.2:1 → posterior odds 1.2:3
IGM Chicago Survey results, LR = 5:1 → posterior odds 6:3 (or 2:1)
+1 for these forum logos to be added to FontAwesome:
https://github.com/FortAwesome/Font-Awesome/issues/19536
Today’s shower thought: When strong downvoting a post, one should be required to specify a reason for the strong downvote (either from a list, or through text entry if no pre-made reason fits). These reasons should be publicly displayed, but not linked to individual voters. [Alternative: They should be displayed to the commenter who is being downvoted only.] This is not intended to apply to strong disagreevotes.
Getting downvoted isn’t fun, and I’ve seen a number of follow-up comments recently along the lines of “why am I being downvoted for this?” Right now, we generally don’t provide any meaningful feedback for people who are being downvoted. In some cases (including some where I didn’t vote at all), I’ve tried to provide feedback—e.g., that particular language could be seen as “premature and unfriendly without allowing [an organization] time for a response”—which I hope has been sometimes helpful. But I’m wondering whether there is a broader way to give people some feedback.
The other reason that I think a reasons-giving requirement might make sense is that it serves as a tiny stop-and-think moment. It interrupts the all-too-human tendency to reach for the strong-downvote icon when one viscerally disagrees with a post, and reminds the user what the generally appropriate reasons for strong downvotes are.
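To illustrate the anonymity aspect, a minimal sketch of one possible data model (hypothetical names, not the Forum’s actual schema): the reason is stored with the vote but surfaced without the voter’s identity.

```python
from dataclasses import dataclass

# Hypothetical preset reasons; "other" falls back to free text.
PRESET_REASONS = ["unkind or uncharitable", "low epistemic quality", "off-topic"]

@dataclass
class StrongDownvote:
    voter_id: str    # kept private, never displayed
    target_id: str   # the post or comment being downvoted
    reason: str      # a preset reason or free text

def visible_reasons(votes, target_id):
    """What gets displayed (publicly, or only to the downvoted author in the
    alternative proposal): reasons with no link back to individual voters."""
    return [v.reason for v in votes if v.target_id == target_id]
```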
It would be interesting if it were possible to quickly pick between common reasons, but I don’t agree it should be required.
In this comment I was going to quote the following from R. M. Hare:

“Think of one world into whose fabric values are objectively built; and think of another in which those values have been annihilated. And remember that in both worlds the people in them go on being concerned about the same things—there is no difference in the ‘subjective’ concern which people have for things, only in their ‘objective’ value. Now I ask, What is the difference between the states of affairs in these two worlds? Can any other answer be given except ‘None whatever’?”
I remember this being quoted in Mackie’s Ethics during my undergraduate degree, and it’s always stuck with me as a powerful argument against moral non-naturalism and a close approximation of my thoughts on moral philosophy and meta-ethics.
But after some Google-Fu I couldn’t actually track down the original quote. Most people think it comes from the essay “Nothing Matters” in Hare’s Applications of Moral Philosophy. While this definitely seems to be in the same spirit as the quote, the online scanned PDF version of “Nothing Matters” that I found doesn’t contain this quote at all. I don’t have access to any academic institutions to check other versions of the paper or book.
Maybe I just missed the quote by skim-reading too quickly? Are there multiple versions of the article? Is it possible that this is a case of citogenesis? Perhaps Mackie misquoted what R. M. Hare said, or perhaps misattributed it and it actually came from somewhere else? Maybe it was Mackie’s quote all along?
Help me EA Forum, you’re my only hope! I’m placing a £50 bounty to a charity of your choice for anyone who can find the original source of this quote, in R. M. Hare’s work or otherwise, as long as I can verify it (e.g. a screenshot of the quote if it’s from a journal/book I don’t have access to).
My upskilling study plan:
1. Math
i) Calculus (derivatives, integrals, Taylor series)
ii) Linear Algebra (this video series)
iii) Probability Theory
2. Decision Theory
3. Microeconomics
i) Optimization of individual preferences
4. Computational Complexity
5. Information Theory
6. Machine Learning theory with a focus on deep neural networks
7. The Alignment Forum
8. Arbital
The Netherlands passed a law that would ban factory farming.
It was introduced by the Party for the Animals and passed in 2021. However, it only passed because the government had just fallen and the Senate was distracted by passing COVID laws, which meant they were very busy and didn’t hold a debate about it. Since the law is rather vague, there’s a good chance it wouldn’t have passed without the COVID crisis.
It was supposed to take effect this year, but the minister of agriculture has decided he will straight up ignore the law. The current government is not in favor of this law, so they’re looking at ways to circumvent it.
It’s very unusual for the Dutch government to ignore laws, so they might get sued by animal rights activists. I expect they will introduce a new law rather quickly that repeals this ban, but the fact that it passed at all and that this will now become a big issue in the news is very promising for the 116 million Dutch farm animals.
I did a podcast where we talked about EA; it would be great to hear your criticisms of it. https://pca.st/i0rovrat
Should I do more podcasts?
I listened to this episode today Nathan, I thought it was really good, and you came across well. I think EAs should consider doing more podcasts, including those not created/hosted by EA people or groups. They’re an accessible medium with the potential for a lot of outreach (the 80k podcast is a big reason why I got directly involved with the community).
I know you didn’t want to speak for EA as a whole, but I think it was a good example of EA talking to the leftist community in good faith, which is (imo) one of our biggest sources of criticism at the moment. I’d recommend others check out the rest of Rabbithole’s series on EA—it’s a good piece of data on what the American Left thinks of EA at the moment.
Summary:
+1 to Nathan for going on this podcast
+1 for people to check out the other EA-related Rabbithole episodes
A similar podcast for those interested would be Habiba’s appearance on Garrison’s podcast The Most Interesting People I Know
“Find where the difficult thing hides, in its difficult cave, in the difficult dark.” Iain S. Thomas
Why do some shortforms have agree voting and others don’t?
Depends on when the shortform was created.
As in they’ve recently removed it? If not, that doesn’t seem true.
I notice I am pretty skeptical of much longtermist work and the idea that we can make progress on this stuff just by thinking about it.
I think future people matter, but I will be surprised if, after x-risk reduction work, we can find tens of billions of dollars of work that isn’t busywork and shouldn’t be spent attempting to learn how to get, e.g., nations out of poverty.
The Collingridge dilemma: a technology’s impacts are difficult to predict before it is widely deployed, but once it is entrenched, it becomes difficult to control or change.
Hello! I’m an EA in university, currently studying engineering. I’ve previously worked at CEA, done the in-depth fellowship, and am currently founding a startup alongside my studies.
I’m looking for a mentor or partner in the EA space to meet with weekly, briefly, to help with setting goals and following up, as I think I’d really benefit from support with weekly prioritisation. A founder or engineering background is a plus but not necessary! Happy to talk more or be referred. Just send a private message.
I know this short form might be a shot in the dark but wanted to put it out there. Thanks!
I know of at least one NDA from an EA org silencing someone from discussing bad behaviour that happened at that org. Should EA orgs be in the practice of making people sign such NDAs?
I suggest no.
I think I want a Chesterton’s TAP for all questions like this that says “how normal are these and why” whenever we think about a governance plan.
I’m unsure what you mean. As in, because other orgs do this, it’s probably normal?
I have no idea, but would like to! With things like “organizational structure” and “nonprofit governance”, I really want to understand the reference class (even if everyone in the reference class does stupid bad things and we want to do something different).
Strongly agree that moving forward we should steer away from such organizational structures; much better that something bad is aired publicly before it has a chance to become malignant
I would like to see posts give you more karma than comments (which would hit me hard). A highly upvoted post seems waaaaay more valuable than 3 upvoted comments on that post, but pretty often the latter gives more karma than the former.
Sometimes comments are better, but I think I agree they shouldn’t be worth exactly the same.
People might also have a lower bar for upvoting comments.
There you go, 3 mana. Easy peasy.
The simplest first step would be just showing both separately, like Reddit does.
You can see them separately, but it’s how they combine that matters.
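As one illustration of how they might combine, a hypothetical weighting (made-up weights, not the Forum’s actual formula):

```python
POST_WEIGHT = 1.0
COMMENT_WEIGHT = 0.3  # assumed: comment karma counts for less

def weighted_user_karma(post_karma, comment_karma):
    """Total karma with comment karma down-weighted relative to post karma."""
    return POST_WEIGHT * post_karma + COMMENT_WEIGHT * comment_karma

print(weighted_user_karma(150, 300))  # 240.0 instead of 450
```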
I know you can figure them out, but I don’t see them presented separately on users pages. Am I missing something? Is it shown on the website somewhere?
They aren’t currently shown separately anywhere. I added it to the ForumMagnum feature-ideas repo but not sure whether we’ll wind up doing it.
They are shown separately here: https://eaforum.issarice.com/userlist?sort=karma
Is there a link to vote to show interest?
Been flagging more often lately that decision-relevant conversations work poorly if only A is sayable (including “yes we should have this meeting”) and not-A isn’t.
At the same time I’ve been noticing the skill of saying not-A with grace and consideration, breezily and not with “I know this is going to be unpopular, but...” energy. It’s an extremely useful skill.
I have now turned this diagram into an angsty blog post. Enjoy!
Pareto priority problems
An objection to the non-identity problem: shouldn’t disregarding the welfare of non-existent people preclude most interventions on child mortality and education?
One objection against favoring the long-term future is that we don’t have duties towards people who don’t yet exist. However, I believe that when someone presents a claim like that, what they probably want to state is that we should discount future benefits (for some reason), or that we don’t have a duty towards people who will only exist in the far future. But such a claim apparently proves too much; it proves, for instance, that we have no obligation to invest in reducing the mortality of infants less than one year old over the next two years.
The most effective interventions in saving lives often do so by saving young children. Now, imagine you deploy an intervention similar to those of the Against Malaria Foundation—i.e., distributing bednets to reduce contagion. At the beginning, you spend months studying, then preparing; then you go to the field and distribute bednets; and one or two years later you evaluate how many malaria cases were prevented in comparison to a baseline. It turns out that most of the averted deaths (and disabilities and years of life gained) correspond to kids who had not yet been conceived when you started studying.
Similarly, if someone starts advocating an effective basic education reform today, they will only succeed in enacting it in some years—thus we can expect that most of the positive effects will happen many years later.
(Actually, for anyone born in the last few years, we can expect that most of their positive impact will affect people who are not born yet. If there’s any value in positively influencing these children, most of it will accrue to people who are not yet born.)
This means that, at the beginning of this project, most of the impact corresponded to people who didn’t exist yet—so you were under no moral obligation to help them.
It’s also a significant problem for near-term animal welfare work, since the lifespan of broiler chickens is so short, almost certainly any possible current action will only benefit future chickens.
People complained about how the Centre for Effective Altruism (CEA) had said they were trying not to be like the “government of Effective Altruism” but then they kept acting exactly like they were the Government of EA for years and years.
Yet that’s wrong. The CEA was more like the police force of effective altruism. The de facto government of effective altruism was for the longest time, maybe from 2014-2020, Good Ventures/Open Philanthropy. All of that changed with the rise of FTX. All of that changed again with the fall of FTX.
I’ve put everything above in the past tense because that was the state of things before 2022. There’s no such thing as a “government of effective altruism” anymore, regardless of whether anyone wants one or not. Neither the CEA, Open Philanthropy, nor Good Ventures could fill that role, regardless of whether anyone would want it or not.
We can’t go back. We can only go forward. There is no backup plan anyone in effective altruism had waiting in the wings to roll out in case of a movement-wide leadership crisis. It’s just us. It’s just you. It’s just me. It’s just left to everyone who is still sticking around in this movement together. We only have each other.
Is it possible to re-collapse a shortform after expanding it on /allPosts? If so, how? If not, feature request :)