I build web apps (e.g. viewpoints.xyz) and make forecasts. Currently I have spare capacity.
Nathan Young
Sure, and do you want to stand by any of those accusations? I am not going to argue the point with two blog posts. Which point do you think is the strongest?
As for Moskovitz, he can do as he wishes, but I think it was an error. I do think that ugly or difficult topics should be discussed and I don’t fear that. LessWrong, and Manifest, have cut okay lines through these topics in my view. But it’s probably too early to judge.
I often don’t respond to people who write far more than I do.
I may not respond to this.
Option B clearly provides no advantage to the poor people over Option A. On the other hand, it sure seems like Option A provides an advantage to the poor people over Option B.
This isn’t clear to me.
If the countries in question have been growing much more slowly than the S&P 500, then the money at that future point might be worth far more to them than it is now. And they aren’t going to invest in the S&P 500 in the meantime.
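A toy way to see that point (all the numbers here are my assumptions for illustration, not claims about actual returns): if the invested gift compounds faster than local incomes grow, then its value relative to those incomes keeps rising.

```python
# Toy comparison, with entirely made-up rates: give $1,000 now, or invest it
# in the S&P 500 and give the proceeds in 20 years. We measure the later gift
# against local incomes that grow more slowly in the meantime.
sp500_growth = 1.07   # assumed 7%/yr market return (illustrative only)
local_growth = 1.02   # assumed 2%/yr local income growth (illustrative only)
years = 20
gift_now = 1_000

future_gift = gift_now * sp500_growth ** years        # what the invested gift becomes
relative_value = future_gift / local_growth ** years  # deflated by local income growth

print(f"Invested gift after {years} years: ${future_gift:,.0f}")
print(f"Relative to today's local incomes: ${relative_value:,.0f}")
```

On these made-up rates the later gift is worth roughly 2.6x the immediate one in local terms, which is the crux: the comparison only flips if the recipients could themselves grow the money as fast as the market.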
I guess I can send you a mediocre prototype.
Sure, but I think there are also relatively accurate comments about the world.
Hi, this is the second or third of my comments you’ve come and snarked on. I’ll ask again: have I upset you, that you should talk to me like this?
Maybe I’m being too facile here, but I genuinely think that even just taking all these numbers, making them visible in one place, taking the median of them, ranking according to that, and then letting people find things they think are perverse within that ranking, would be a pretty solid start.
I think producing suspect work is often the precursor to producing good work.
And I think there are enough estimates that one could produce a thing which just gathers all the estimates up and displays them. That would be a sort of survey, which wouldn’t be bad in itself even if the answers were universally agreed to be pretty dubious. And I think it would point to the underlying work that still needs to be done.
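As a minimal sketch of the “gather the estimates, take the median, rank” idea (the charity names and numbers below are made up):

```python
from statistics import median

# Hypothetical cost-effectiveness estimates (say, QALYs per $1,000) collected
# from several sources for each charity. All values here are invented.
estimates = {
    "Charity A": [3.0, 5.0, 4.5],
    "Charity B": [1.0, 9.0, 2.0],
    "Charity C": [6.0, 6.5],
}

# Take the median of each charity's estimates, then rank from best to worst.
ranked = sorted(
    ((name, median(values)) for name, values in estimates.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for rank, (name, med) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: median estimate {med}")
```

Anything fancier (weighting sources, showing the spread) could come later; even this much would let people find the entries they think are perverse.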
I appreciate the correction on the Suez stuff.
If we’re going to criticise rationality, I think we should take the good with the bad. There are multiple adjacent cults, as I’ve said in the past. Rationalists were also early to crypto, early to AI, early to Covid. It’s sometimes hard to decide which things come from EA and which from Rationality, but there are a number of possible wins. If you don’t mention those, I think you’re probably fudging the numbers.
For example, in 2014, Eliezer Yudkowsky wrote that Earth is silly for not building tunnels for self-driving cars to drive in,
I can’t help but feel you are annoyed about this in general. But why speak to me in this tone? Have I specifically upset you?
I have never thought that Yudkowsky is the smartest person in the world, so this doesn’t really bother me deeply.
On the charges of racism, I think you’ll have to present some evidence for that.
I’ve seen you complain elsewhere that the ban times for negative karma comments are too long. I think they may be, but I guess they exist to stop behaviour exactly like this. Personally, I think it’s pretty antisocial to respond to a short message with an extremely long one that is kind of aggressive.
Sure, but a really illegible and hard-to-search one.
I guess lots of money will be given. Seems reasonable to think about the impacts of that. Happy to bet.
This is an annoying feature of search:
Sure, seems plausible.
I guess I kind of like @William_MacAskill’s piece, or as much of it as I remember.
My recollection is roughly this:
- Yes, it’s strange to have lots more money.
- Perhaps we’re spending it badly.
- But seeking to spend too little money might be a bad thing, too.
- Frugal EA had something to recommend it.
- But more impact probably requires more resources.
This seems good, though I guess a missing piece is:
- Are we sure this money was obtained ethically?
- How much will obtaining this money for bad reasons hurt us?
Also, looking back, @trammell’s takes have aged very well:
- It is unlikely we are in the most important time in history.
- If not, it is good to save money for that time.
Had Phil been listened to, perhaps much of the FTX money would have been put aside, and things could have gone quite differently.
So my non-EA friends point out that EAs have incentives to suck up to any group that is about to become rich. This seems like something I haven’t seen a solid path through:
- It is much more effective to deal with the people who have the most money.
- It is hard to retain one’s virtue while doing so.
Having known, and had conflict with, a number of wealthy people, I can say it is hard to retain one’s sense of integrity in the face of life-changing funds. I’ve talked to SBF, and even after the crash I felt a gravity to him: I didn’t want to insult him lest he one day return to the heights of his influence. Sometimes that made me too cautious; sometimes, avoiding caution, I was reckless.
I guess in some sense the problem is that finding ways through uncomfortable situations requires sitting in discomfort, and I don’t find EA to have a lot of internal battery for that kind of thing. Have we really resolved most of the various crises in a way that created harmony between those who disagreed? I’m not sure we have. So it’s hard to be optimistic here.
Naaaah, seems cheems. Seems worth trying. If we can’t, then fair enough. But it doesn’t feel to me like we’ve tried.
Edit, for specificity: I think that shrimp QALYs and human QALYs have some exchange rate; we just don’t have a good handle on it yet. And I think that if we’d decided that difficult things weren’t worth doing, we wouldn’t have done a lot of the things we’ve already done.
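To illustrate what an exchange rate would mean here (the number below is entirely made up, not an estimate): if one human QALY were judged equivalent to $r$ shrimp QALYs, then

$$\text{human-equivalent QALYs} = \frac{\text{shrimp QALYs}}{r}, \qquad \text{e.g. } r = 10{,}000 \implies 1{,}000{,}000 \text{ shrimp QALYs} \equiv 100 \text{ human QALYs}.$$

The hard empirical work is pinning down $r$; the arithmetic after that is trivial.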
Also, hey Elliot, I hope you’re doing well.
Reading Will’s post about the future of EA (here), I think there is also an option to “hang around and see what happens”. It seems valuable to have multiple similar communities. For a while I was more involved in EA, then more in rationalism. I can imagine being more involved in EA again.
A better Earth would build a second Suez Canal, to ensure that we don’t suffer trillions in damage if the first one gets blocked. Likewise, having two “think carefully about things” movements seems fine.
It hasn’t always felt like this “two is better than one” feeling is mutual. I guess the rationalist in me feels slighted by EA discourse around, and EA funders’ treatment of, rationalist orgs over the years. But maybe we can let that go and instead be glad that, should something go wrong with rationalism, EA will still be around.
Yeah I’m making something like that :)
I do not see 14 charity-ranking tools. I don’t really think I see 2. What, other than asking Claude/ChatGPT/Gemini, are you suggesting?
Could you give a concise explanation of what giving circles are?
Thanks, someone else mentioned them. Do you think there is anything else I’m missing?
There was. It was on GatherTown; I was one of the organisers.
EA still seems to have a GatherTown, though I don’t know what’s inside it:
https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge
The Lightcone (LessWrong) GatherTown was extensive and, in my view, pretty beautiful.
@Gavriel Kleinwaks, do you back these?