If you agree that EA should:
Give more attention to EA outsiders
Please upvote this comment (see the last paragraph of the post).
Love it. The doggos are goddamn adorable.
Two issues:
Robert Miles’ audio quality seemed not great.
The video felt too long and digressive. By about halfway I had to take a break to stop my brain from overheating. Also by about halfway I had lost track of what the original point was and how it had led to the current point. I think it would’ve worked better broken up into at least 3 shorter videos, each with its own hook and punchy finish.
😁
Hell yeah
Great post. 👍 I vibe with this.
Obvious suggestion, but have you tried looking for a Steve Jobs? Like through a founder dating thing? Or posting on this forum? Or emailing the 80k people?
Full of flaws? Yes. Cringe? Yes. 2-3 times longer than it should be? Yes.
Overrated? Only slightly. There’s some great dramatisations of dry academic ideas (similar to Taleb), and the philosophy is plausibly life changing.
Got some nice feedback, but no clear signal that it was genuinely useful, so I’ve quietly dropped it for now.
And yet, this is a great contribution to EA discourse, and it’s one that a “smart” EA couldn’t have made.
You have identified a place where EA is failing a lot of people by being alienating. Smart people often jump over hurdles and arrive at the “right” answer without even noticing them. These hurdles have valuable information. If you can get good at honestly communicating what you’re struggling with, then there’s a comfy niche in EA for you.
“What obstacles are holding you back from changing roles or cofounding a new project?”
Where’s the option for “Cofounding a project feels big and scary and it’s hard to know where to begin or if I’m remotely qualified to try”?
I’m aggregating and visualising EA datasets on https://www.effectivealtruismdata.com/.
I haven’t yet implemented data download links, but they should be done within a week.
I only included karma from posts you’re the first author of.
So the missing karma is probably from comments or second author posts.
Not a philosopher, but I have overlapping interests.
I’m not sure what you mean here. What’s RDM? Robust decision making? So you’d want to formalise decision making in terms of the Bayesian or frequentist interpretation of probability?
Again, I’m not sure what “maximising ambition” means. Could you expand on this?
How would you approach this? Surveys? Simulations? From a probability perspective I’m not sure that there’s anything to say here. You choose a prior based on symmetry/maximum-entropy/invariance arguments, then if observations give you more information you update, otherwise you don’t.
I suspect a better way to approach topic selection is to find a paper you get excited about and ask, “How can I improve on this research by 10%?” This stops you from straying wildly off the path of “respectable and achievable academic research”.
Oh, great! Your post looks very helpful!
Oh nice. Socratic irony. I like it.
Thanks for the suggestion. I don’t have a super clear idea of what the main issues/chunks actually are at the moment, but I’ll work towards that.
If you agree EA should:
Be more positive
Please upvote this comment (see the last paragraph of the post).