😁
Hmash
Hell yeah
Great post. 👍 I vibe with this.
Obvious suggestion, but have you tried looking for a Steve Jobs? Like through a founder dating thing? Or posting on this forum? Or emailing the 80k people?
Full of flaws? Yes. Cringe? Yes. 2-3 times longer than it should be? Yes.
Overrated? Only slightly. There are some great dramatisations of dry academic ideas (similar to Taleb), and the philosophy is plausibly life-changing.
Got some nice feedback, but no clear signal that it was genuinely useful so quietly dropped for now.
And yet, this is a great contribution to EA discourse, and it’s one that a “smart” EA couldn’t have made.
You have identified a place where EA is failing a lot of people by being alienating. Smart people often jump over hurdles and arrive at the “right” answer without even noticing them. These hurdles have valuable information. If you can get good at honestly communicating what you’re struggling with, then there’s a comfy niche in EA for you.
“What obstacles are holding you back from changing roles or cofounding a new project?”
Where’s the option for “Cofounding a project feels big and scary and it’s hard to know where to begin or if I’m remotely qualified to try”?
I’m aggregating and visualising EA datasets on https://www.effectivealtruismdata.com/.
I haven’t yet implemented data download links, but they should be done within a week.
I only included karma from posts you’re the first author of.
So the missing karma is probably from comments or second author posts.
Not a philosopher, but I have overlapping interests.
I’m not sure what you mean here. What’s RDM? Robust decision making? So you’d want to formalise decision making in terms of the Bayesian or frequentist interpretation of probability?
Again, I’m not sure what “maximising ambition” means. Could you expand on this?
How would you approach this? Surveys? Simulations? From a probability perspective I’m not sure that there’s anything to say here. You choose a prior based on symmetry/maximum-entropy/invariance arguments, then if observations give you more information you update, otherwise you don’t.
I suspect a better way to approach topic selection is to find a paper you get excited about, and ask “how can I improve on this research by 10%?” This stops you from straying wildly off the path of “respectable and achievable academic research”.
Oh, great! Your post looks very helpful!
Oh nice. Socratic irony. I like it.
Thanks for the suggestion. I don’t have a super clear idea of what the main issues/chunks actually are at the moment, but I’ll work towards that.
Very cute. 🙂
I’m curious about your thinking on colour symbolism. On the one hand, ravens are smart and crafty, so “black bird = smart/strategic bird” makes sense. But on the other hand, blue is kinda an EA colour, so at first I thought the blue bird would represent EA. Why did you choose to make the lay-bird a blue bird?
Thanks. Fixed.
Thank you. I have corrected the mistake.
The relationship between Lindy, Doomsday, and Copernicus is as follows:
The “Copernican Principle” is that “we” are not special. This is a generalisation of how the Earth is not special: it’s just another planet in the solar system, not the centre of the universe.
In John Gott’s famous paper on the Doomsday Argument, he appeals to the Copernican Principle to assert “we are also not special in time”, meaning that we should expect ourselves to be at a typical point in the history of humanity.
The “most typical” point in history is exactly in the middle. Thus your best guess of the longevity of humanity is twice its current age: Lindy’s Law.
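To make the step from “typical point” to “twice the current age” explicit (my notation, not Gott’s): let $t$ be humanity’s current age and $T$ its total lifespan. The Copernican assumption is that our position $t/T$ is uniformly distributed on $(0,1)$, so the single most typical position is the midpoint:

$$\frac{t}{T} = \frac{1}{2} \quad\Longrightarrow\quad \hat{T} = 2t.$$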
This is brilliant!
I think we can actually do an explicit expected-utility and value-of-information calculation here:
Let one five-star book = one util
Each book’s quality can be modelled as a rate $r$ of producing stars.
The star rating you give a book is the sum of 5 Bernoulli trials with rate $r$.
The book will produce $5r$ utils of value per read in expectation.
To estimate $r$, sum up the total stars awarded $s$ and the total possible stars $n$.
The probability distribution of $r$ is then $\mathrm{Beta}(s + 1, \, n - s + 1)$ (assuming a uniform prior for simplicity).
For any pair of books, we can compute the probability that book 1 is more valuable than book 2 as $P(r_1 > r_2) = \int_0^1 \int_0^1 \mathbf{1}[r_1 > r_2] \, p(r_1) \, p(r_2) \, \mathrm{d}r_1 \, \mathrm{d}r_2$.
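A minimal sketch of that comparison in Python (the star counts are made up for illustration, and the pairwise probability is estimated by Monte Carlo rather than the integral above):

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_samples(stars_awarded, num_ratings, size=100_000):
    """Sample the star rate r from its Beta posterior under a uniform prior."""
    n = 5 * num_ratings          # total possible stars
    s = stars_awarded            # total stars actually awarded
    return rng.beta(s + 1, n - s + 1, size=size)

# Hypothetical ratings data: (total stars awarded, number of ratings).
book_1 = posterior_samples(stars_awarded=430, num_ratings=100)
book_2 = posterior_samples(stars_awarded=380, num_ratings=90)

# Expected utils per read (one five-star book = one util, so value = 5r).
print("E[value of book 1]:", 5 * book_1.mean())
print("E[value of book 2]:", 5 * book_2.mean())

# Monte Carlo estimate of P(r1 > r2), i.e. book 1 is the more valuable book.
print("P(r1 > r2):", (book_1 > book_2).mean())
```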
Let’s say there’s a prescribed EA reading list.
Let people who encounter the list be probabilistic automata.
These automata start at the top of the list, then iteratively either: 1) read the book they are currently looking at, 2) move down to the next item on the list, 3) quit.
Intuitively, I think this process will result in books being read geometrically less as you move down the list.
For simplicity, let’s say the first book is guaranteed to be read, the next book has a 50% chance of being read, then 25%, …, and the $n$-th book has a $2^{-n}$ chance of being read (with $n$ starting at zero).
The expected value of the list is then $\sum_n 2^{-n} \cdot 5 r_n$, where $r_n$ is the star rate of the $n$-th book on the list.
To calculate the value of information for reading a given book, you enumerate all the possible outcomes (one star, two stars, …, five stars), calculate the probability of each one, look at how the rankings would change, and re-calculate the expected value of the list. Multiply the expected values by the probabilities et voilà.
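Here’s a rough sketch of that procedure, again with made-up star counts. It ranks books by posterior mean rate, uses scipy’s Beta-Binomial for the posterior predictive over the next rating, and includes a zero-star outcome since the Bernoulli model allows one:

```python
from scipy.stats import betabinom

def posterior_mean_rate(s, n):
    """Posterior mean of the star rate r given s stars out of n possible (uniform prior)."""
    return (s + 1) / (n + 2)

def expected_list_value(books):
    """Expected value of a list ranked by posterior mean rate: the n-th book
    (n starting at zero) is read with probability 2^-n and is worth 5*E[r] utils."""
    rates = sorted((posterior_mean_rate(s, n) for s, n in books), reverse=True)
    return sum(0.5 ** k * 5 * r for k, r in enumerate(rates))

def value_of_information(books, i):
    """Expected gain in list value from collecting one more rating of book i."""
    s, n = books[i]
    baseline = expected_list_value(books)
    voi = 0.0
    for stars in range(6):  # possible outcomes of the next rating: 0 to 5 stars
        p = betabinom.pmf(stars, 5, s + 1, n - s + 1)  # posterior predictive probability
        updated = list(books)
        updated[i] = (s + stars, n + 5)                # update that book's counts
        voi += p * expected_list_value(updated)        # re-ranking happens inside
    return voi - baseline

# Hypothetical (stars awarded, possible stars) pairs for a three-book list.
books = [(430, 500), (380, 450), (120, 150)]
print("Expected list value:", expected_list_value(books))
for i in range(len(books)):
    print(f"VOI of one more rating of book {i}:", value_of_information(books, i))
```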
Can I get the data please?
It just occurred to me that you don’t actually need to convert the forecaster’s odds to bits. You can just take the ceiling of the odds themselves, which is more useful for calibrating in the low-confidence range.
Additional note: BitBets is a proper scoring rule, but not strictly proper. If you report odds rounded up to the next power of two, you will achieve the same score in expectation.
Love it. The doggos are goddamn adorable.
Two issues:
Robert Miles’ audio quality wasn’t great.
The video felt too long and digressive. By about halfway I had to take a break to stop my brain from overheating. Also by about halfway I had lost track of what the original point was and how it had led to the current point. I think it would’ve worked better broken up into at least 3 shorter videos, each with its own hook and punchy finish.