Builds web apps (e.g. viewpoints.xyz) and makes forecasts. Currently I have spare capacity.
Nathan Young
My comments are on LessWrong (see link below) but I thought I’d give you lot a chance to comment also.
EA Yale Destiny Debate Discussion:
@Gavriel Kleinwaks (who works in this area) gives her recommendation. When asked whether she “backed” them:
I do! (Not in the financial sense, tbc.) But just want to flag that my endorsement is confounded. Basically, Aerolamp uses the design of the nonprofit referenced in my post, OSLUV, and most of my technical info about far-UV comes from a) Aerolamp cofounder Viv Belenky and b) OSLUV. I’ve been working with Viv and OSLUV for a couple of years, long before the founding of Aerolamp, and trust their information, but you should know that my professional opinion is highly correlated with theirs—1Day Sooner doesn’t have the equipment to do independent testing.
I think it’s the ideal outcome that a bunch of excellent researchers took a look at the state of the field and made their own product. So I’m not too worried about relying on this team’s info, but you should just have that context.
Fwiw, Mox (moxsf.com), run by Austin Chen, has installed a couple of Aerolamps and they were easy to set up and are running smoothly.
This is a cool post, though I think it’s kind of annoying not to be able to see the specific numbers one is putting on them without reading the chart.
@Gavriel Kleinwaks, do you back these?
Sure, and do you want to stand by any of those accusations? I am not going to argue the point against two blog posts. Which point do you think is the strongest?
As for Moskovitz, he can do as he wishes, but I think it was an error. I do think that ugly or difficult topics should be discussed and I don’t fear that. LessWrong, and Manifest, have cut okay lines through these topics in my view. But it’s probably too early to judge.
I often don’t respond to people who write far more than I do.
I may not respond to this.
Option B clearly provides no advantage to the poor people over Option A. On the other hand, it sure seems like Option A provides an advantage to the poor people over Option B.
This isn’t clear to me.
If the countries in question have been growing much slower than the S&P 500, then the money at that future point might be worth far more to them than it is now. And they aren’t going to invest in the S&P 500 in the meantime.
I guess I can send you a mediocre prototype.
Sure, but I think there are also relatively accurate comments about the world.
Hi, this is the second or third of my comments you’ve come and snarked on. I’ll ask again: have I upset you, such that you should talk to me like this?
Maybe I’m being too facile here, but I genuinely think that even just taking all these numbers, making them visible in one place, taking the median of them, ranking according to that, and then letting people find things they think are perverse within that ranking would be a pretty solid start.
I think producing suspect work is often the precursor to producing good work.
And I think there are enough estimates that one could produce a thing which just gathers all the estimates up and displays them. That would be a sort of survey, which wouldn’t be bad in itself even if the answers were universally agreed to be pretty dubious. And I think it would point more clearly to the underlying work that needs to be done.
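To be concrete, here’s a minimal sketch of the kind of thing I mean, with entirely made-up numbers and hypothetical intervention names; the point is only the gather → median → rank step, not the figures:

```python
# Sketch: gather estimates from different sources, take the median, rank by it.
# The intervention names and numbers below are invented purely for illustration.
from statistics import median

# Hypothetical: each intervention maps to a list of cost-effectiveness
# estimates collected from different sources.
estimates = {
    "intervention_a": [3.0, 5.5, 4.2],
    "intervention_b": [0.8, 1.1],
    "intervention_c": [9.0, 2.0, 6.5, 7.0],
}

# Median of each intervention's estimates, then a ranking by that median.
medians = {name: median(vals) for name, vals in estimates.items()}
ranking = sorted(medians.items(), key=lambda kv: kv[1], reverse=True)

for rank, (name, med) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: median estimate {med}")
```

Then people could look at the resulting list and flag the entries that seem perverse, which is where the useful argument starts.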
I appreciate the correction on the Suez stuff.
If we’re going to criticise rationality, I think we should take the good with the bad. There are multiple adjacent cults, which I’ve said in the past. They were also early to crypto, early to AI, early to Covid. It’s sometimes hard to decide which things are from EA or Rationality, but there are a number of possible wins. If you don’t mention those, I think you’re probably fudging the numbers.
For example, in 2014, Eliezer Yudkowsky wrote that Earth is silly for not building tunnels for self-driving cars to drive in,
I can’t help but feel you are annoyed about this in general. But why speak to me in this tone? Have I specifically upset you?
I have never thought that Yudkowsky is the smartest person in the world, so this doesn’t really bother me deeply.
On the charges of racism, I think you’ll have to present some evidence for that.
I’ve seen you complain elsewhere that the ban times for negative karma comments are too long. I think they may be, but I guess they exist to stop behaviour exactly like this. Personally, I think it’s pretty antisocial to respond to a short message with an extremely long one that is kind of aggressive.
Sure but a really illegible and hard to search one.
I guess lots of money will be given. Seems reasonable to think about the impacts of that. Happy to bet.
This is an annoying feature of search: (this is the wrong Will MacAskill)
Sure, seems plausible.
I guess I kind of like @William_MacAskill’s piece, or as much of it as I remember.
My recollection is roughly this:
Yes, it’s strange to have lots more money.
Perhaps we’re spending it badly.
But seeking to spend too little might also be a bad thing.
Frugal EA had something to recommend it.
But more impact probably requires more resources.
This seems good, though I guess it feels like a missing piece is:
Are we sure this money was obtained ethically?
How much harm will getting this money for bad reasons do us?
Also, looking back @trammell’s takes have aged very well:
It is unlikely we are in the most important time in history
If not, it is good to save money for that time
Had Phil been listened to, then perhaps much of the FTX money would have been put aside, and things could have gone quite differently.
So my non-EA friends point out that EAs have incentives to suck up to any group that is about to become rich. This seems like something I haven’t seen a solid path through:
It is much more effective to deal with the people who have the most money.
It is hard to retain one’s virtue while doing so.
Having known, and had conflict with, a number of wealthy people, I know it is hard to retain one’s sense of integrity in the face of life-changing funds. I’ve talked to SBF, and even after the crash I felt a gravity: I didn’t want to insult him lest he one day return to the heights of his influence. Sometimes that made me too cautious; sometimes, in avoiding caution, I was reckless.
I guess in some sense the problem is that finding ways through uncomfortable situations requires sitting in discomfort, and I don’t find EA to have a lot of internal battery for that kind of thing. Have we really resolved most of the various crises in a way that created harmony between those who disagreed? I’m not sure we have. So it’s hard to be optimistic here.
Naaaah, seems cheems. Seems worth trying. If we can’t then fair enough. But it doesn’t feel to me like we’ve tried.
Edit, for specificity. I think that shrimp QALYs and human QALYs have some exchange rate, we just don’t have a good handle on it yet. And I think that if we’d decided that difficult things weren’t worth doing we wouldn’t have done a lot of the things we’ve already done.
Also, hey Elliot, I hope you’re doing well.
Reading Will’s post about the future of EA (here) I think that there is an option also to “hang around and see what happens”. It seems valuable to have multiple similar communities. For a while I was more involved in EA, then more in rationalism. I can imagine being more involved in EA again.
A better Earth would build a second Suez Canal, to ensure that we don’t suffer trillions in damage if the first one gets blocked. Likewise, having two “think carefully about things” movements seems fine.
It hasn’t always felt like this “two is better than one” feeling is mutual. I guess the rationalist in me feels slighted by EA discourse around rationalist orgs, and by EA funders’ treatment of them, over the years. But maybe we can let that go and instead be glad that, should something go wrong with rationalism, EA will still be around.
This seems like a lot of specific, quite technical criticisms. I don’t endorse Thorstadt’s work in general (nor do I reject it), but often when he cites things I find them valuable. This has enough material that it seems worth reading.
I think my main disagreement is here:
I weakly disagree here. I am very much in the “make up statistics and be clear about that” camp. I disagree a bit with AI 2027 in that they don’t always label their forecasts with their median (which, it turns out, wasn’t 2027??).
I think that it is worth having and tracking individual predictions, though I acknowledge the risk that people are going to take them too seriously. That said, after some number of forecasters I think this info does become publishable (Katja Grace’s AI survey contains a lot of forecasts and is literally published).