I trust that you'll enforce this trademark against anyone who takes any actions with an unduly large impact on the world, requiring them to first apply for a license to do so.
This got me thinking:
|             | no name        | name   |
|-------------|----------------|--------|
| feedback    | anonymous form | normal |
| no feedback | shut up        | ???    |

Have you considered making a form where people can submit their names and nothing else?
Not that it's super important, but TVTropes didn't invent the phrase (nor do they claim they did); it's from Warhammer 40,000.
I downvoted this because I think this isn't independently valuable / separate enough from your existing posts to merit a new, separate post. I think it would have been better as a comment on your existing posts (and as I've said on a post by someone else about your reviews, I think we're better off consolidating the discussion in one place).
That said, I think the sentiments expressed here are pretty reasonable, and I think I would have upvoted this in comment form.
They posted about their review of Sinergia on the forum already: https://forum.effectivealtruism.org/posts/YYrC2ZR5pnrYCdSLt/sinergia-ace-top-charity-makes-false-claims-about-helping
I suggest we concentrate discussion there and not here.
Someone on the forum said there were ballpark 70 AI safety roles in 2023
Just to note that the UK AI Security Institute employs more than 50 technical staff by itself and I forget how many non-technical staff, so this number may be due an update.
This doesn't seem right to me because I think it's popular among those concerned with the longer-term future to expect it to be populated with emulated humans, which clearly isn't a continuation of the genetic legacy of humans, so I feel pretty confident that it's something else about humanity that people want to preserve against AI. (I'm not here to defend this particular vision of the future beyond noting that people like Holden Karnofsky have written about it, so it's not exactly niche.)
You say that expecting AI to have worse goals than humans would require studying things like what the empirically observed goals of AI systems turn out to be, and so on. Sure: in the absence of having done those studies, we should delay our replacement until they can be done. And doing these studies is undermined by the fact that right now the state of our knowledge on how to reliably determine what an AI is thinking is pretty bad, and it will only get worse as AIs develop their abilities to strategise and lie. Solving these problems would be a major piece of what people are looking for in alignment research, and precisely the kind of thing it seems worth delaying AI progress for.
another opportunity for me to shill my LessWrong post posing this question: Should we exclude alignment research from LLM training datasets?
I don't have a lot of time to spend on this, but this post has inspired me to take a little time to figure out whether I can propose or implement some controls (likely: making posts visible to logged-in users only) in ForumMagnum (the software underlying the EA Forum, LW, and the Alignment Forum).
edit: https://github.com/ForumMagnum/ForumMagnum/issues/10345
See also:
(Perhaps there should be a forum tag for this issue specifically, idk)
I agree overall, but I want to add that becoming dependent on non-EA donors could put you under pressure to do more non-EA things / fewer EA things; either party could pull the other towards themselves.
Keep in mind that you're not coercing them to switch their donations, just persuading them. That means you can use the fact that they were persuaded as evidence that you were on the right side of the argument. You being too convinced of your own opinion isn't a problem unless other people are also somehow too convinced of it, and I don't see why they would be.
I think that EA donors are likely to be unusual in this respect: you're pre-selecting for people who have signed up for a culture of doing what's best even when it wasn't what they thought it was before.
I guess also I think that my arguments for animal welfare charities are at their heart EA-style arguments, so I'm getting a big boost to my likelihood of persuading someone by knowing that they're the kind of person who appreciates EA-style arguments.
Similarly if you think animal charities are 10x global health charities in effectiveness, then you think these options are equally good:
- Move 10 EA donors from global health to animal welfare
- Add 9 new animal welfare donors who previously weren't donating at all
To me, the first of these sounds way easier.
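To spell out the arithmetic behind calling these equally good (a minimal sketch, assuming each global health donor produces 1 unit of value, each animal welfare donor produces 10, and donation sizes are equal):

$$
\underbrace{10 \times (10 - 1)}_{\text{move 10 existing donors}} \;=\; 90 \;=\; \underbrace{9 \times 10}_{\text{add 9 new donors}}
$$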
Thanks! (I slightly object to "the normal markdown syntax", since based on my quick reading neither John Gruber's original markdown spec nor the latest CommonMark spec nor GitHub Flavoured Markdown has footnotes.)
FWIW the link to your forum post draft tells me "Sorry, you don't have access to this draft".
The onboarding delay is relevant because in the 80k case it happens twice: the 80k person has an onboarding delay, and then the people they cause to get hired have onboarding delays too.
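As a toy illustration (the numbers here are mine, purely for concreteness: assume a 6-month onboarding delay for everyone and a year for the meta person to counterfactually cause one hire):

$$
\underbrace{6}_{\text{your onboarding}} + \underbrace{12}_{\text{causing a hire}} + \underbrace{6}_{\text{their onboarding}} = 24 \text{ months}
$$

versus roughly 6 months before object-level work starts if you had taken the object-level role yourself.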
It feels like when I'm comparing the person who does object-level work to the person who does meta-level work that leads to 2 people (say) doing object-level work, the latter really does seem better all things equal, but the intuition that calls this model naive is driven by a sense that it's going to turn out to not "actually" be 2 additional people, that additionality is going to be lower than you think, that the costs of getting that result are higher than you think, etc. etc.
But this intuition is not as clear as I'd like on what the extra costs / reduced benefits are, and how big a deal they are. Here are the first ones I can think of:
- Perhaps the people that you recruit instead aren't as good at the job as you would have been.
- If your org's hiring bottleneck is not finding great people, but instead having the management capacity to onboard them or the funding capacity to pay for them, then doing management or fundraising, or work that supports the case for fundraising, might matter more.
  - but 80k surely also needs good managers, at least as a general matter
- I think when an org hires you, there's an initial period of your onboarding where you consume more staff time than you produce, especially if you weight by seniority. Different roles differ strongly in where their break-even point is. I've worked somewhere that thought their number was like 6-18 months (I forget what they said exactly, but in that range), and I can imagine cases where it's more like... day 2 of employment. Anyway, one way or another, if you cause object-level work to happen by doing meta-level work, you're introducing another onboarding delay before stuff actually happens. If the area you're hoping to impact is time-sensitive, this could be a big deal? But usually I'm a little skeptical of time-sensitivity arguments, since people seem to make them at all times.
- it's easy to inadvertently take credit for a person going to a role that they would actually have gone to anyway, or not to notice when you guide someone into a role that's worse (or not better, or not so much better) than what they would have done otherwise (80k are clearly aware of this and try to measure it in various ways, but it's not something you can do perfectly)
I think this depends on what the specific role is. I think the one I'm going for is not easily replaceable, but I'm mostly aiming not to focus on the specific details of my career choice in this thread, instead trying to address the broader questions about meta work generally.
oh! uhh, how?
forgive the self-promotion but here's a related Facebook post I made: