I was thinking the same: a bet resistant to something like COVID + rebound would be more in the spirit of the argument.
Maybe GDP growth over the previous all-time high?
I think you have a point. However, I strongly disagree with the framing of your post, for several reasons.
One is that you advertise your hedge fund here, which made me doubt the entire post.
Second, the link does not go to a mathematical paper but to the whitepapers section of your startup. That said, the first PDF there appears to be the math behind your post.
Third, calling that PDF a mathematical proof is a stretch (at least from my point of view as a math researcher). Expressions like "it is plausible that" never belong in a mathematical proof.
And most importantly, the substance of the argument:
In your model, you assume that effort by allies depends on the actor's confidence signal (sigma), and that allies' contribution is monotonic (larger if the actor is more confident). I find this assumption questionable, since, from an ally/investor perspective, unwarranted high confidence can undermine trust.
Then, you treat the fact that the optimal signal (when optimizing for outcomes) is higher than the optimal forecast (when optimizing for accuracy) as an indication against calibration. I would take it as an indication for calibration, but with possible actions (such as signaling) included as variables to optimize for success.
In my view, your model is a nice toy model to explain why, in certain situations, signaling more confidence than would be accurate can be instrumental.
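For concreteness, here is the kind of minimal toy version I have in mind. This is my own construction for illustration, not taken from your whitepaper; q, e, beta and lambda and the functional forms are all assumptions:

```latex
% Minimal toy version (my construction, not the whitepaper's model):
% q         : the actor's calibrated probability of success
% \sigma    : the publicly signalled confidence
% e(\sigma) : allies' effort, assumed increasing in \sigma (the contested assumption)
% \lambda   : the cost of miscalibration
\[
  U(\sigma) \;=\; \underbrace{q + \beta\, e(\sigma)}_{\text{success probability}}
  \;-\; \lambda\,(\sigma - q)^2,
  \qquad e'(\sigma) > 0,\ \beta > 0,\ \lambda > 0.
\]
% First-order condition: \beta\, e'(\sigma^*) = 2\lambda(\sigma^* - q), hence \sigma^* > q.
% The outcome-optimal signal exceeds the accuracy-optimal forecast by construction:
% the conclusion is carried almost entirely by the monotonicity assumption on e.
```

Any model of this shape yields sigma* > q mechanically, which is why I read the result as being about instrumental signaling rather than as evidence against calibration.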
Ironically, your post and your whitepaper do what they recommend, using expressions like "demonstrate" and "proof" without properly acknowledging that most of the load of the argument rests on the modelling assumptions.
How much time is this expected/recommended to take?
Depends on what you count as meaningful earning potential.
One of the big ideas I take from the old days of effective altruism is that strategically donating 10% of the median US salary can save more lives over one's career than becoming a doctor in the US.
The same logic applies to animal welfare, catastrophic risk reduction, and other priorities.
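For a rough back-of-the-envelope version of that claim (the salary, cost-effectiveness, and career-length figures below are assumed ballpark numbers, not precise estimates):

```python
# Back-of-the-envelope sketch of the "10% of a median salary" claim.
# All inputs are rough, assumed figures; swap in your own.

median_us_salary = 60_000      # USD/year, ballpark for a full-time US worker
donation_rate = 0.10           # the 10% pledge
cost_per_life_saved = 5_000    # USD, GiveWell-style ballpark for top charities

lives_per_year = median_us_salary * donation_rate / cost_per_life_saved
career_years = 40
print(f"~{lives_per_year:.1f} lives/year, ~{lives_per_year * career_years:.0f} over a career")
# -> ~1.2 lives/year, ~48 over a 40-year career, which is the number the classic
#    earning-to-give argument compares against a doctor's direct clinical impact.
```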
A different question is whether you would be satisfied with having a normal job and donating 10% (or whatever percentage makes sense in your situation).
Over the last decade, we should have invested more in community growth at the expense of research.
My answer is largely based on my view that short-timeline AI risk people are more dominant in the discourse than the credence I give their views would warrant; YMMV.
ClaraBot: Report duplicated in title!
I would like to see more low-quality / unserious content, mainly to lower the barrier to entry for newcomers and make things more welcoming.
Very unsure if this is actually a good idea.
I appreciate the irony and see the value in this, but I'm afraid that you're going to be downvoted into oblivion because of your last paragraph.
"At high levels of uncertainty, common sense produces better outcomes than explicit modelling."
Hey, can you include a link to the blog?
Fantastic post!
I'm trying to put myself in the shoes of someone who is new around here, and I would appreciate some definitions or links for the acronyms (GHD, AIS) and for the meat eater problem. Maybe others as well; I haven't been thorough.
Could you please update the post? It would be even better, in my opinion.
I would be very surprised if [neuron count + nociceptive capacity as moral weight] are standard EA assumptions. I haven't seen this among the people I know or among the major funders, who seem more pluralistic to me.
My main critique of this post is that it makes several different claims and it's not very clear which arguments support which conclusions. I think your message would be clearer after a bit of rewriting, and then it would be easier to have an object-level discussion.
Hey, kudos to you for writing a longform about this. I have talked to some self-identified negative utilitarians, and I think this is a discussion worth having.
I think this post is mixing two different claims.
Critiquing "minimize suffering as the only terminal value → extinction is optimal" makes sense.
But that doesn't automatically imply that some suffering-reduction interventions (like shrimp stunning) are not worth it.
You can reject suffering-minimization-as-everything and still think that large amounts of probable suffering in simple systems matter at the margin.
Also, I appreciated the discussion of depth, but I have nothing to say about it here.
I would appreciate:
- Any negative utilitarian, or anyone knowledgeable about negative utilitarianism, commenting on why NU doesn't necessarily recommend extinction.
- The OP clarifying the post by making the claims more explicit.
I like your post, especially the vibe of it.
At the same time, I have a hard time understanding what "quit EA" even means:
Stop saying you're EA? I guess that's fine.
Stop trying to improve the world using reason and evidence? Very sad. Probably read this post x50 and I hope it convinces you otherwise.
99% of tagged posts (karma-weighted) being about AI seems wrong.
If you check the top 4 posts of all time, the 1st and 3rd are about FTX, the 2nd is about earning to give, and the 4th is about health, totalling > 2k karma.
You might want to check for bugs.
I started, and then realised how complicated it is to choose a set of variables and weights that make sense of "how privileged am I" or "how lucky am I".
I have an MVP (but ran out of free LLM assistance), and right now the biggest downside is that if I include several variables, the results tend to be far from the top. And I don't know what to do about this.
For instance, let's say that in "healthcare access", having good public coverage puts you in the top 10% bracket (number made up). Then, if you pick 95% as the reference point for that bracket, any weighted average including it will fall some distance short of the top.
So a plain weighted average of different questions is not good enough, I guess.
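To illustrate with made-up numbers (the variables, brackets, and weights below are invented for the example, not from my actual MVP):

```python
# Toy illustration of why a plain weighted average of percentiles can't reach the top.
# Every variable, bracket, and weight below is made up for the example.

answers = {
    # variable: (reference percentile for the chosen bracket, weight)
    "income":            (99.0, 0.5),  # top 2% of global income
    "healthcare_access": (95.0, 0.3),  # "good public coverage" bracket, pegged at 95
    "education":         (99.0, 0.2),  # PhD or similar
}

total_weight = sum(w for _, w in answers.values())
score = sum(p * w for p, w in answers.values()) / total_weight
print(f"weighted-average percentile: {score:.1f}")
# -> 97.8: even someone near the global top on every axis lands at "top ~2%",
#    because each bracket's reference point is capped below 100.
```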
We can discuss and workshop it if you want.
I love the sentiment of the post, and tried it myself.
I think a prompt like this makes answers less extreme than they actually are, because it's a vibes-based answer instead of a model-based one. I would be surprised if you are not in the top 1% globally.
I would really enjoy something like this but more model-based, like the GWWC calculator. Does anyone know of something similar? Should I vibe code it and then ask for feedback here?
I tried this myself and got "you're about 10-15% globally", which I think is a big underestimate.
For context, my PPP-adjusted income is in the top 2%, I have a PhD (1% globally? less?), and I live alone in an urban area.
On asking further, a big factor pushing the estimate down is that I rent the place I live in instead of owning it (don't get me started on this from a personal finance perspective, but it shouldn't make that big a gap, I guess?).
How can I cross out text in a comment?
I don't identify as EA. You can check my post history. I try to form my own views and not defer to leadership or celebrities.
I agree with you that there's a problem with safetywashing, conflicts of interest, and bad epistemic practices in mainstream EA AI safety discourse.
My problem with this post is that the arguments are presented in a "wake up, I'm right and you're wrong" way, directed at a group that includes people who have never thought about what you're talking about as well as people who already agree with you.
I also agree that the truth sometimes irritates, but that doesn't mean I should trust something more just because it irritates.
I think allowing this debate to happen would be a fantastic opportunity to put our money where our mouth is regarding not ignoring systemic issues:
https://80000hours.org/2020/08/misconceptions-effective-altruism/#misconception-3-effective-altruism-ignores-systemic-change
On the other hand, deciding that democratic backsliding is off limits, and not even trying to have a conversation about it, could (rightfully, in my view) be treated as evidence of EA being in an ivory tower and disconnected from the real world.