Builds web apps (e.g. viewpoints.xyz) and makes forecasts. Currently I have spare capacity.
Nathan Young
Yeah, I think you make good points. I think that forecasts are useful on balance, and then people should investigate them. Do you think that forecasting like this will hurt the information landscape on average?
Personally, to me, people engaged in this forecasting generally seem more capable of changing their minds. I think the AI2027 folks would probably be pretty capable of acknowledging they were wrong, which seems like a healthy thing. Probably more so than the media and academia?
Seems like a lot of specific, quite technical criticisms.
Sure, so we agree?
(Maybe you think I’m being derogatory, but no, I’m just allowing people who scroll down to the comments to see that I think this article contains a lot of specific, quite technical criticisms. If in doubt, I say things I think are true.)
Some thoughts:
I agree that the Forum’s speech norms are annoying. I would prefer that people weren’t banned for being impolite even while making useful points.
I agree in a larger sense that EA can be enervating, sapping one’s will for conflict with many small touches.
I agree that having one main funder and wanting to please them seems unhelpful
I’ve always thought you are a person of courage and integrity
On the other hand:
I think if you are struggling to convince EAs, that is itself some evidence. I too am in the “it’s very likely not the end of the world but still worth paying attention to” camp. You haven’t convinced me.
Your personal tweets have felt increasingly high conflict and less epistemically careful. I think I muted you over a year ago. I guess you hate this take, but it’s true.
I don’t expect this to change your mind, but maybe there are reasons you aren’t convincing very informed people besides us being blind to reality. I admit I’d enjoy being rich, but I’m not particularly convinced I’ll go try and work for a lab. And I don’t think I bend my opinions towards Coefficient, either, and have never been funded by them.
I think you’re right to say that a large proportion of the public will come to agree with you. But I also expect a large proportion of the public to repeat talking points about water and energy use, and to think that Disney has a moral right to its characters for as long as copyright says it does. This doesn’t seem good to me. I sense it seems fine to you.
I don’t think this is all our war. I guess that you do. If so, we disagree. I will help to the extent I agree with you and be flat-footed and confused to the extent that I don’t. I get that that’s annoying. I feel some of that annoyance myself at ways I disagree with the community. But to me it feels like part of being in a community. I have to convince people. And you haven’t convinced me.
I feel this quite a lot:
The need to please OpenPhil etc
The sense of inness or outness based on cause area
The lack of comparing notes openly
That one can “just have friends”
And so I think Holly’s advice is worth reading, because it’s fine advice.
Personally I feel a bit differently. I have been hurt by EA, but I still think it’s a community of people who care about doing good per $. I don’t know how we get to a place that I think is more functional, but I still think it’s worth trying, given the number of people and resources attached to this space. But yes, I am less emotionally involved than I once was.
Seems like a lot of specific, quite technical criticisms. I don’t endorse Thorstadt’s work in general (nor do I anti-endorse it), but often when he cites things I find them valuable. This has enough material that it seems worth reading.
I think my main disagreement is here: “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so” … I think the rationalist mantra of “If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics” will turn out to hurt our information landscape much more than it helps.
I weakly disagree here. I am very much in the “make up statistics and be clear about that” camp. I disagree a bit with AI 2027 in that they don’t always label their forecasts with their median (which it turns out wasn’t 2027 ??).
I think that it is worth having and tracking individual predictions, though I acknowledge the risk that people are going to take them too seriously. That said, after some number of forecasters I think this info does become publishable (Katja Grace’s AI survey contains a lot of forecasts and is literally published).
My comments are on LessWrong (see link below) but I thought I’d give you lot a chance to comment also.
EA Yale Destiny Debate Discussion:
@Gavriel Kleinwaks (who works in this area) gives her recommendation. When asked whether she “backed” them:
I do! (Not in the financial sense, tbc.) But just want to flag that my endorsement is confounded. Basically, Aerolamp uses the design of the nonprofit referenced in my post, OSLUV, and most of my technical info about far-UV comes from a) Aerolamp cofounder Viv Belenky and b) OSLUV. I’ve been working with Viv and OSLUV for a couple of years, long before the founding of Aerolamp, and trust their information, but you should know that my professional opinion is highly correlated with theirs—1Day Sooner doesn’t have the equipment to do independent testing.
I think it’s the ideal outcome that a bunch of excellent researchers took a look at the state of the field and made their own product. So I’m not too worried about relying on this team’s info, but you should just have that context.
Fwiw, Mox (moxsf.com), run by Austin Chen, has installed a couple of Aerolamps and they were easy to set up and are running smoothly.
This is a cool post, though I think it’s kind of annoying not to be able to see the specific numbers being put on things without reading the chart.
@Gavriel Kleinwaks, do you back these?
Sure, and do you want to stand by any of those accusations? I am not going to argue the point with two blog posts. What is the point you think is the strongest?
As for Moskovitz, he can do as he wishes, but I think it was an error. I do think that ugly or difficult topics should be discussed and I don’t fear that. LessWrong, and Manifest, have cut okay lines through these topics in my view. But it’s probably too early to judge.
I often don’t respond to people who write far more than I do.
I may not respond to this.
Option B clearly provides no advantage to the poor people over Option A. On the other hand, it sure seems like Option A provides an advantage to the poor people over Option B.
This isn’t clear to me.
If the countries in question have been growing much slower than the S&P 500, then the money might be worth far more to them at that future point than it is now. And they aren’t going to invest in the S&P 500 themselves in the meantime.
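As a rough back-of-the-envelope sketch of what I mean (the growth rates here are made up purely for illustration, not taken from anywhere):

```python
# Illustration with made-up numbers: if the invested donation compounds faster
# than recipient-country incomes grow, the same pot is "more money to them"
# (relative to local incomes) when it arrives later.
sp500_real_return = 0.07    # assumed real annual return on the invested donation
local_income_growth = 0.02  # assumed real annual income growth in the recipient country
years = 20

donation_growth = (1 + sp500_real_return) ** years   # how much the invested pot grows
income_growth = (1 + local_income_growth) ** years   # how much local incomes grow

print(f"Donation grows {donation_growth:.1f}x over {years} years")
print(f"Local incomes grow {income_growth:.1f}x over {years} years")
print(f"Relative to local incomes, the later gift is ~{donation_growth / income_growth:.1f}x bigger")
```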
I guess I can send you a mediocre prototype.
Sure, but I think there are also relatively accurate comments about the world.
Hi, this is the second or third of my comments you’ve come and snarked on. I’ll ask again: have I upset you, that you should talk to me like this?
Maybe I’m being too facile here, but I genuinely think that even just taking all these numbers, making them visible in one place, taking the median of them, ranking according to that, and then letting people find things they think are perverse within that ranking would be a pretty solid start.
I think producing suspect work is often the precursor to producing good work.
And I think there are enough estimates that one could produce a thing which just gathers all the estimates up and displays them. That would be sort of a survey, which wouldn’t be bad in itself even if the answers were universally agreed to be pretty dubious. But I think it would point to the underlying work that needs to be done more.
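For what it’s worth, a minimal sketch of the “gather the estimates, take the median, rank them” idea (the intervention names and numbers below are placeholders I made up, not real estimates):

```python
from statistics import median

# Placeholder data: several people's estimates per item. In practice these
# would be the published estimates gathered up from their various sources.
estimates = {
    "Intervention A": [12.0, 30.0, 18.0],
    "Intervention B": [3.0, 5.0, 4.5, 6.0],
    "Intervention C": [40.0, 22.0],
}

# Take the median estimate for each item, then rank by that median.
medians = {name: median(values) for name, values in estimates.items()}
ranking = sorted(medians.items(), key=lambda kv: kv[1], reverse=True)

for rank, (name, med) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: median estimate {med}")
```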
I appreciate the correction on the Suez stuff.
If we’re going to criticise rationality, I think we should take the good with the bad. There are multiple adjacent cults, which I’ve said in the past. They were also early to crypto, early to AI, early to Covid. It’s sometimes hard to decide which things are from EA or Rationality, but there are a number of possible wins. If you don’t mention those, I think you’re probably fudging the numbers.
For example, in 2014, Eliezer Yudkowsky wrote that Earth is silly for not building tunnels for self-driving cars to drive in,
I can’t help but feel you are annoyed about this in general. But why speak to me in this tone? Have I specifically upset you?
I have never thought that Yudkowsky is the smartest person in the world, so this doesn’t really bother me deeply.
On the charges of racism, I think you’ll have to present some evidence for that.
I’ve seen you complain elsewhere that the ban times for negative karma comments are too long. I think they may be, but I guess they exist to stop behaviour exactly like this. Personally, I think it’s pretty antisocial to respond to a short message with an extremely long one that is kind of aggressive.
Sure but a really illegible and hard to search one.
I guess lots of money will be given. Seems reasonable to think about the impacts of that. Happy to bet.
I dunno, I think that sounds galaxy-brained to me. I think that giving numbers is better than not giving them, and that thinking carefully about the numbers is better still. I don’t really buy your second-order concerns (or rather, I think they could easily go in the opposite direction).