This is brilliant!
I think we can actually do an explicit expected-utility and value-of-information calculation here:
Let one five-star book = one util
Each book’s quality can be modelled as a rate $p$ of producing stars.
The star rating you give a book is the sum of 5 Bernoulli trials with rate $p$.
The book will produce $p$ utils of value per read in expectation.
To estimate $p$, sum up the total stars awarded $s$ and total possible stars $n$.
The probability distribution is then $p \sim \mathrm{Beta}(s+1,\, n-s+1)$ (assuming a uniform prior for simplicity).
For any pair of books, we can compute the probability that book 1 is more valuable than book 2 as $P(p_1 > p_2) = \int_0^1 f_1(p)\, F_2(p)\, \mathrm{d}p$, where $f_1$ is book 1’s posterior density and $F_2$ is book 2’s posterior CDF.
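Here’s a minimal sketch of that model in Python (scipy just for concreteness; the star counts are made up):

```python
import numpy as np
from scipy import stats

def posterior(stars_awarded, stars_possible):
    """Beta posterior over a book's star rate p, assuming a uniform prior."""
    return stats.beta(stars_awarded + 1, stars_possible - stars_awarded + 1)

def prob_better(book1, book2, n_samples=100_000):
    """Monte Carlo estimate of P(p1 > p2)."""
    rng = np.random.default_rng(0)
    return np.mean(book1.rvs(n_samples, random_state=rng)
                   > book2.rvs(n_samples, random_state=rng))

# Hypothetical data: book 1 got 40 of a possible 50 stars, book 2 got 70 of 100.
print(prob_better(posterior(40, 50), posterior(70, 100)))
```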
Let’s say there’s a prescribed EA reading list.
Let people who encounter the list be probabilistic automata.
These automata start at the top of the list, then iteratively either 1) read the book they are currently looking at, 2) move down to the next item on the list, or 3) quit.
Intuitively, I think this process will result in books being read geometrically less as you move down the list.
For simplicity, let’s say the first book is guaranteed to be read, the next book has a 50% chance of being read, then 25%, …, and the $n$-th book has a $2^{-n}$ chance of being read (with $n$ starting at zero).
The expected value of the list is then $V = \sum_{n=0}^{N-1} 2^{-n}\, \mathbb{E}[p_n]$, where $p_n$ is the star rate of the book in position $n$.
To calculate the value of information for reading a given book, you enumerate all the possible outcomes (one star, two stars, …, five stars), calculate the probability of each one, look at how the rankings would change, and re-calculate the expected value of the list. Multiply the expected values by the probabilities, et voilà.
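A rough sketch of the list-value and VOI steps (again illustrative Python; I’ve assumed books are re-ranked by posterior mean, and the book data is hypothetical):

```python
from scipy import stats

def expected_list_value(books):
    """books = [(stars_awarded, stars_possible), ...]. List value = sum of
    2^-n * E[p] over books sorted by posterior mean E[p] = (s+1)/(n+2)."""
    means = sorted(((s + 1) / (n + 2) for s, n in books), reverse=True)
    return sum(p * 2 ** -i for i, p in enumerate(means))

def voi_of_reading(books, i):
    """VOI of one more rating of book i: enumerate the possible outcomes
    (0-5 stars under the binomial model), weight each by its posterior
    predictive probability, and re-compute the re-ranked list value."""
    s, n = books[i]
    base = expected_list_value(books)
    voi = 0.0
    for stars in range(6):
        # Posterior predictive of Binomial(5, p) with p ~ Beta(s+1, n-s+1)
        # is Beta-Binomial(5, s+1, n-s+1).
        prob = stats.betabinom.pmf(stars, 5, s + 1, n - s + 1)
        updated = books[:i] + [(s + stars, n + 5)] + books[i + 1:]
        voi += prob * expected_list_value(updated)
    return voi - base

books = [(40, 50), (70, 100), (10, 25)]  # hypothetical (stars, possible) data
print([round(voi_of_reading(books, i), 4) for i in range(len(books))])
```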
Can I get the data please?
Yes, infinite ethics is a serious problem and deserves criticism.
If you agree that EA should:
Give more attention to EA outsiders
Please upvote this comment (see the last paragraph of the post).
“What obstacles are holding you back from changing roles or cofounding a new project?”
Where’s the option for “Cofounding a project feels big and scary and it’s hard to know where to begin or if I’m remotely qualified to try”?
I had a crack at doing the Fermi Paradox calculations using vanilla JS for benchmarking. Took maybe 5 minutes to build reusable probabilistic estimation functions from scratch. On that basis, it doesn’t look to me like it would be worth the effort of learning a new syntax.
However, what took me almost all day was trying to get a nice visualisation of the probability distribution I came up with. I would like to be able to zoom and pan, hover over different x-values to get the PDF or CDF as a function of x, and maybe vary model parameters by dragging sliders. IMO, this is the real advantage of a probabilistic reasoning web-app.
After like 6 hours, I came up with a janky prototype which has zooming and a hover tooltip on a CDF.
Very messy code here: https://github.com/hamishhuggard/interactive-CDF/blob/main/fermi.html
PS: I hear QURI is hiring? Can I use this as a work trial? :P
Full of flaws? Yes. Cringe? Yes. 2-3 times longer than it should be? Yes.
Overrated? Only slightly. There are some great dramatisations of dry academic ideas (similar to Taleb), and the philosophy is plausibly life-changing.
I ended up significantly reworking the section. Any feedback on the new version?
If you agree EA should:
Have more quiet spaces at conferences
Please upvote this comment (see the last paragraph of the post).
I’d love to enter a competition like this.
Thanks for the suggestion. I don’t have a super clear idea of what the main issues/chunks actually are at the moment, but I’ll work towards that.
Very cute. 🙂
I’m curious about your thinking on colour symbolism. On the one hand, ravens are smart and crafty, so “black bird = smart/strategic bird” makes sense. But on the other hand, blue is kinda an EA colour, so at first I thought the blue bird would represent EA. Why did you choose to make the lay-bird a blue bird?
If you agree that EA should:
Be more accommodating of people who want to work on climate change
Please upvote this comment (see the last paragraph of the post).
If you agree EA should:
Be more positive
Please upvote this comment (see the last paragraph of the post).
If you agree that EA should:
Be more human / emotional
Please upvote this comment (see the last paragraph of the post).
Love it. The doggos are goddamn adorable.
Two issues:
Robert Miles’ audio quality didn’t seem great.
The video felt too long and digressive. By about halfway I had to take a break to stop my brain from overheating. Also by about halfway I had lost track of what the original point was and how it had led to the current point. I think it would’ve worked better broken up into at least 3 shorter videos, each with its own hook and punchy finish.
Lol. Not bad for 60% joking.
PS, here’s the code actually deployed: https://hamishhuggard.com/misc/fermi.html
If you agree EA should:
Have better mental health support
Please upvote this comment (see the last paragraph of the post).
There’s also the possibility that a maximum doesn’t exist.
Suppose you had a one-shot utility machine: you punch in a number, and the machine generates that many utils and then self-destructs. The machine has no limit on the number of utils it can generate. How many utils do you select?
“Maximise utility” has no answer to this, because there is no maximum.
In real life, we have a practically infinite number of actions available to us. There might be a sense in which, due to field quantisation and finite negentropy, there are technically only finitely many actions available, but certainly there are more actions than we could ever enumerate, let alone estimate the expected utility for.
In practice, it seems like the best way to actually maximise value is just to do lots of experimental utility-generating projects, and greedily look for low-effort, high-reward strategies.
And yet, this is a great contribution to EA discourse, and it’s one that a “smart” EA couldn’t have made.
You have identified a place where EA is failing a lot of people by being alienating. Smart people often jump over hurdles and arrive at the “right” answer without even noticing them. These hurdles have valuable information. If you can get good at honestly communicating what you’re struggling with, then there’s a comfy niche in EA for you.