I work on AI Governance at Open Philanthropy. Comments here are posted in a personal capacity.
alex lawsen (previously alexrjl)
Why I’m concerned about Giving Green
An easy win for hard decisions.
Can the EA community copy Teach for America? (Looking for Task Y)
New addition to the 80,000 hours 1-on-1 team.
Whether you should do a PhD doesn’t depend much on timelines.
> If this is too time-consuming for the current FTX advisers, hire some staff

Hiring is an extremely labour- and time-intensive process, especially if the position you're hiring for requires great judgement. I think responding to a concern about whether something is a good use of staff time with 'just hire more staff' is pretty poor form, and given the context of the rest of the post it wouldn't be unreasonable to respond with 'do you want to post a BOTEC comparing the cost of those extra hires you think we should make to the harms you're claiming?'
Forecasts about EA organisations which are currently on Metaculus.
If you (mostly) believe in worms, what should you think about WASH?
Incentive Problems With Current Forecasting Competitions.
Didn’t separate karma for helpfulness and agreement (as frequently used on LessWrong) get implemented on the EA Forum recently? This post feels like the ideal use case for it:
There are some controversial comments with weakly positive karma despite lots of votes, where I suspect what’s going on is some people are signalling disagreement with downvotes, and others are signalling ‘this post constitutes meaningful engagement’ with upvotes.
There are also some comments where the tone seems to me to be over the line, with varying amounts of karma (from very positive to very negative), from various people.
Were a two-karma system available, I think I would use both [strong upvote, strong disagree] and [strong downvote, strong agree] at least once each.
It’s 10k plus travel plus housing plus co-working space, so other than food basically the whole 10k would be disposable income. Potentially the housing includes food as well. I’m not sure what the cost of living is like in the Bahamas, but that hardly sounds like “really low pay”.
Know what you’re optimising for
[Speaking for myself here]
I also thought this claim by HLI was misleading. I clicked several of the links and don’t think James is the only person being misrepresented. I also don’t think these are all the “major actors in EA’s GHW space”: TLYCS, for example, meets reasonable definitions of “major”, but their methodology makes no mention of WELLBYs.
I’m finding this difficult to interpret. I can’t find a way of phrasing my question without it seeming snarky, but no snark is intended.
One reading of this offer looks something like:
if you have an idea which may enable some progress, it’s really important that you be able to try and I’ll get you the funding to make sure you do
Another version of this offer looks more like:
I expect basically never to have to pay out because almost all ideas in the space are useless, but if you can convince me yours is the one thing that isn’t useless I guess I’ll get you the money.
I guess maybe a way of making this concrete would be:
- Have you paid out on this so far? If so, can you say what for?
- If not, can you point to any existing work which you would have funded if someone had approached you asking for funding to try it?
I don’t think the right response is to directly respond to this claim. I think the right response is to ask a question aimed at identifying the crux of our disagreement, and then to respond directly to that. In my experience of talking to many people who make this sort of claim, especially those matching the description given in the post, only a minority literally hold the view ‘we should have no pure rate of time preference’; instead, most have some other reason for the intuition, which I may or may not disagree with in practice.
Would you accept answers of the form:

- A question which establishes whether the claim is ‘we should have a rate of pure time preference [1]’ or ‘we have practical reasons to weight effects which are near in time more highly in our decision making, even if we are impartial consequentialists with no pure rate of time preference, for example due to uncertainty about the reliability of long-term forecasts, belief that it is impossible to reduce the probability of existential catastrophe per unit time to 0, etc. [2]’
- Suggested response 1, if the person holds position [1]
- Suggested response 2 (or, more productively, suggested avenues for further discussion), if the person holds one of several versions of position [2]

?
I’m not promising to write such an answer in either case, and accepting answers of the above form doesn’t hugely change the probability that I’ll do so, but because I think that the approach above is the best response in this sort of situation, I think it would be great if others were encouraged to consider responses of this form.
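To illustrate why position [2] above differs from a pure rate of time preference: even an impartial consequentialist with zero pure time preference will effectively weight far-future effects less if there is an irreducible per-year probability of existential catastrophe, since the effect only matters in worlds that survive long enough. A minimal sketch of that reasoning (the 0.1%/year figure is purely illustrative, not a claim from the original discussion):

```python
def effective_discount_factor(years: int, annual_catastrophe_prob: float) -> float:
    """Probability that the world survives long enough for an effect
    `years` in the future to matter, assuming a constant, irreducible
    annual probability of existential catastrophe."""
    return (1 - annual_catastrophe_prob) ** years

# With an illustrative 0.1%/year irreducible risk, effects 1,000 years out
# get roughly e^-1 (~37%) of the weight of present effects, even with zero
# pure time preference.
weight = effective_discount_factor(1000, 0.001)
```

Note that this produces exponential discounting in practice while resting on an empirical belief (catastrophe risk can't be driven to zero) rather than a normative one, which is exactly why identifying the crux matters before responding.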
High quality, EA Audio Library (HEAAL)
all/meta, though I think the main value add is in AI
(Nonlinear has made a great rough/low quality version of this, so at least some credit/prize should go to them.)
Audio has several advantages over text when it comes to consuming long-form content, with one significant example being that people can consume it while doing some other task (commuting, chores, exercising) meaning the time cost of consumption is almost 0. If we think that broad, sustained engagement with key ideas is important, making the cost of engagement much lower is a clear win. Quoting Holden’s recent post:
I think a highly talented, dedicated generalist could become one of the world’s 25 most broadly knowledgeable people on the subject (in the sense of understanding a number of different agendas and arguments that are out there, rather than focusing on one particular line of research), from a standing start (no background in AI, AI alignment or computer science), within a year
What does high quality mean here, and what content might get covered?
- High quality means read by humans (I’m imagining paying maths/compsci students who’ll be able to handle mathematical notation), with good descriptions of diagrams. If posts involve conversations (e.g. the MIRI logs), different voices are used for different people. Holden’s Cold Takes read-throughs are a good example.
- High quality also means paying for or otherwise dealing with copyright, and curating pieces into much more searchable/navigable collections than the current podcast feeds.
What sort of things?
- Alignment Forum posts, with sequences collated into playlists.
- The MIRI conversations.
- Key technical reports, e.g. Carlsmith on power-seeking AI, Cotra on Bioanchors.
- New books (The Long View).
- Everything on key reading lists, again organised into playable feeds, e.g. Tessa’s biosecurity list, jtm’s longtermism list, the AGI safety and governance fundamentals curricula, and the introductory and in-depth fellowship reading materials.
After playing with the idea for quite a while, I finally made a couple of YouTube videos about forecasting. I’ve still got a lot to learn about both production and editing, but I received really valuable feedback which I expect to help a lot going forward.
Hoping to complete an “Intro to forecasting” series over the next few weeks.
Edit: also this thread is a great idea. Thanks for making it :).
I think your comment is a good example (and from the votes it looks like I’m not the only one). You’re making a good-faith, sensible argument for a position I don’t hold; I think the disagreement karma is a big improvement.
I think your comment deserves an upvote for contributing to the discussion, but I disagree and wanted to indicate that.
I’m still saving for retirement in various ways, including by making pension contributions.
If you’re working on GCR reduction, you can always consider your pension savings a performance bonus for good work :)
I think “cost-effective way to fundraise” is probably a stretch, and this would likely have been better as a shortform, but I wanted to stop in and say the post made me smile. It’s a fun example of how you can get a bunch of EV by being risk-neutral and thinking outside the box, so thanks for writing it!