Can I push you on this a bit?
I want to note that there is more consensus in favour of the proposition than I expected. I would have guessed the median was much nearer 50% than it is.
I want to once again congratulate the forum team on this voting tool. I think by doing this, the EA forum is at the forefront of internal community discussions. No communities do this well and it’s surprising how powerful it is.
I do give a fraction to animal welfare and if I changed my mind I would give it to global health.
The question of capacity seems unrelated to the crux to me. I’m pretty confident that if it were known that there was 100mn to spend, then people would spin up orgs. I guess there is a question of whether all those would be more effective on the margin than global health, but I dunno, it seems to be missing the bit that I care about most.
Would you like to expand on this a bit?
I didn’t realise the comments were from that initially. Thanks.
Seems likely correct, though I’m not fully certain; I wouldn’t be that surprised to be wrong. It is much easier to help animals than people on the margin.
I update a bit more because I haven’t read good arguments against and have seen some possible arguments debunked.
Under your model would they be able to opt out of being written about in articles?
Is there a map which doesn’t have a discontinuity at .5?
Seems underrated how much AI Safety policy might reduce European GDP (0.2% over 10 years doesn’t seem crazy to me). I hope we will endorse this, should it come to pass.
SB 1047 looks to me focused on reducing downsides while preserving upsides. I hope EA intervention in the EU has been the same.
Lab grown meat → no-kill meat
This tweet recommends changing the words we use to discuss lab-grown meat. Seems right.
I feel like I want 80k to do more cause prioritisation if they are gonna direct so many people. Seems like 5 years ago they had their whole ranking thing, which was easy to check. Now I am less confident in the quality of the work that is directing lots of people in a certain direction.
I often think about transparency in relation to allies, competitors and others:
Allies are on my team. We have shared norms and can make deals.
Competitors are not on my team and are either working against my interests or too chaotic to work with.
Others are randoms I don’t know enough about.
Generally I feel obliged to be truthful to allies and others and transparent to allies. So I’d say I like EA to anyone, but I wouldn’t feel the need to reveal it to any random person and particularly not someone who is against me.
What I think is more interesting from some of this discourse is that people see eg their government employers as competitors here. I think that would change the frame of how I went to work if I didn’t think I was in a collaborative relationship with my boss.
Comment I wrote on Samuel’s substack:
Medical innovation—Trump didn’t really do a lot here last time he was in office. Seems like Operation Warp Speed was motivated by COVID.
Housing—Trump didn’t really do a lot here last time he was in office. Not sure we know Harris’ policy here yet.
Immigration—I don’t really trust Trump to manage some sort of clever compromise here.
AI—Yeah, Trump being worried could be better, but it’s a random roll, against Biden being pretty sober.
This feels more like a desire to be controversial than really weighing the pros and cons. I think it’s plausible that Trump is better for AI reasons, and given the uncertainty I don’t think it’s that unlikely, but I do think it is unlikely.
I think this is an interesting kind of article, but I don’t buy the AI point, which is the most cruxy one for me.
I made it up[1].
But, as I say in the following sentences, it seems plausible to me that without betting markets keeping the numbers accessible and Silver pushing on them, it would have taken longer for the initial crash to become visible, it could have faded from the news, and it could have been hard to see that other candidates were gaining momentum.
All of these changes seem to increase the chance of Biden staying in, which was pretty knife-edge for a long time.
Can I tweet this? I think it’s a good take.
I am happy to see this. Have you messaged people on the EA and epistemics slack?
Here are some epistemics projects I am excited about:
Polymarket and Nate Silver—It looks to me that forecasting was 1–5% of the reason the Democrats dropped Biden from their ticket. Being able to rapidly see the drop in his % chance of winning during the debate, holding focus on that poor performance over the following weeks, and seeing momentum increase for other candidates all seemed powerful[1].
X Community Notes—that one of the largest social media platforms in the world has a truth-seeking process with good incentives is great. For all Musk’s faults, he has pushed this, and it is to his credit. I think someone should run a think tank to lobby X and other orgs into even better truth-seeking.
The Swift Centre—large conflict of interest, since I forecast for them, but as a forecasting consultancy that manages to stand largely (entirely?) without grant funding, just getting standard business gigs, it’s who I’d recommend if I were gonna suggest epistemics consulting. The Swift Centre is a professional org that has worked with DeepMind and the Open Nuclear Network.
Discourse mapping—Many discussions happen over and over and we don’t move forward. Personally I’m really excited about trying to find consensus positions to allow focus to be freed for more important stuff. Here is the site my team mocked up for Control AI, but I think we could have similar discourse mapping for SB 1047 or for different approaches to AI safety.
The Forum’s AI Welfare Week—I enjoyed a week of focus on a single topic. I reckon if we did about 10 of these we might really start to get somewhere. Perhaps with clustering into different groups based on positions on the initial spectra.
Sage’s Fatebook.io—a tool for quickly making and tracking forecasts. It’s the only tool I’ve found that, when I show it to non-forecasting business people, gets the reaction “oh, what’s that, can I use that?”. I think Sage should charge for this and try to push it as a standard SaaS product.[2]
And a quick note:
An example of a potential project here: A consultancy which provides organisations support in improving their epistemics.
I think the obvious question here should be “how would you know such a consultancy has good epistemics?”
As a personal note, I’ve been building epistemic tools for years, eg estimaker.app, or casting around for forecasting questions to write on. The FTXFF was pretty supportive of this stuff, but since its fall I’ve not felt like big EA finds my work particularly interesting or worthy of support. Many of the people I see doing interesting tinkering work like this end up moving to AI Safety.
Argument: The money can be spent over a long time and likely will be able to be spent.
The footnote on the main question says:
Likewise @Will Howard🔹 argues that this isn’t that significant an additional amount of money anyway: