Why do you think that the worldviews need strong philosophical justification? It seems like this may leave out the vast majority of worldviews.
I think “thought leader” sometimes means “has thoughts at the leading edge” and sometimes means “leads the thoughts of the herd on a subject”, and that there is sometimes a deliberate ambiguity between the two.
one values humans 10-100x as much
This seems quite low, at least from a perspective of revealed preferences. If one indeed rejects unitarianism, I suspect that the actual willingness to pay to prevent the death of a human vs. an animal is something like 1,000x–10,000x.
The executive summary is entirely hallucinated.
“To what extent is money important to you?” and found that was much more important than money itself: money has a much bigger effect on happiness if you *think* money is important

Or perhaps you think money is important because it has a bigger effect on your happiness (based on e.g. environmental factors and genetic predisposition)? In other words, maybe these people are making correct predictions about how they work, rather than creating self-fulfilling prophecies? It is at least worth considering that the causality goes this way.

AND it found people who equate success with money are less happy.
This of course is slight evidence that the causality goes in the direction you said.
I think it's also easy to make a case that longtermist efforts have increased x-risk from artificial intelligence, with the money and talent that grew some of the biggest hype machines in AI (DeepMind, OpenAI) coming from longtermist places.
It’s possible that EA has shaved a couple counterfactual years off of time to catastrophic AGI, compared to a world where the community wasn’t working on it.
I’d also add Vitalik Buterin to the list.
If you’re going to have a meeting this short, isn’t it better to e.g. send a message or email about this? Having very short conversations like this means you’ve wasted a large slot of time on your EAG calendar that you could have used for different types of conversations that you can only do in person at EAG.
It’s pretty clear that being multiplanetary is more anti-fragile? It provides more optionality, allows for more differentiation and evolution, and provides stronger challenges.
I recently gave a talk on one of my own ambitious projects at my organization, and gave the following outside view outcomes, in order of likelihood:
1. The project fails to gain any traction or have any meaningful impact on the world.
2. The project has an impact on the world, but despite intentions the impact is negative, neutral, or too small to matter.
3. The project has enough of a positive outcome to matter.
In general, I’d say that outside view this is the most likely order of outcomes of any ambitious/world-saving project. And I was saying it specifically to elicit feedback and make sure people were red-teaming me morally.
However, it's not clear to me that putting more money into research/thinking improves things much?
For one thing, again, the most likely outcome is that the project fails to gain any traction or have any impact at all, so you need to be de-risking that through classic lean-startup MVP-style work anyway; you shouldn't wait on that and spend a bunch of money figuring out the positive or negative effects at scale of an intervention that won't actually be able to scale (most things won't scale).

For another, I think a lot of the benefit of potentially world-changing projects comes through hard-to-reason-about flow-through effects. For instance, in your example about Andrew Carnegie and libraries, a lot of the benefits would be hard-to-gesture-at stuff related to having a more educated populace and how that affects various aspects of society and culture. You can certainly create Fermi estimates and systems models, but ultimately people's models will be very different, and missing one variable or relationship in a complex systems model of society can completely reverse the outcome.
Ultimately, it might be better to use the types of reasoning/systems analysis that work under Knightian uncertainty, things like: “Is this making us more anti-fragile? Is this effectual and allowing us to continually build towards more impact? Is this increasing our capabilities in an asymmetric way?”
This is the exact type of reasoning that would cause someone intuitively to think that space settlements are important: it's clearly a thing that increases the anti-fragility of humanity, even if you don't have exact models of the threats that it may help against. By increasing anti-fragility, you're increasing the ability to face unknown threats. Certainly, you can get into specifics, and you can realize it doesn't make you as anti-fragile as you thought, but again, it's very easy to miss some other specifics that are unknown unknowns and totally reverse your conclusion.

I ultimately think what makes sense is a sort of culture of continuous oversight/thinking about your impact, rather than specific up-front research or a budget. Maybe you could have “impact-analysisathons” once a quarter where you discuss these questions. I'm not sure exactly what it would look like, but I notice I'm pretty skeptical of the idea of putting a budget here or creating a team for this purpose. I think they end up doing lots of legible impact analysis which ultimately isn't that useful for the real questions you care about.
Sure, but “already working on an EA project” doesn’t mean you have an employer.
Assuming you have an employer
This is great! Curious what (if anything) you’re doing to measure counterfactual impact. Any sort of randomized trial involving e.g. following up with clients you didn’t have the time to take on and measuring their change in productive hours compared to your clients?
Yeah, I’d expect it to be a global catastrophic risk rather than existential risk.
Is there much EA work into tail risk from GMOs ruining crops or ecosystems?
If not, why not?
Yeah, I mostly focused on the Q1 question so didn’t have time to do a proper growth analysis across 2021
Yeah, I was talking about the Q1 model when I was trying to puzzle out what your growth model was.
There isn’t a way to get the expected value, just the median currently (I had a bin in my snapshot indicating a median of $25,000). I’m curious what makes the expected value more useful than the median for you?
A lot of the value of a business's potential growth vectors comes in the tails. For this particular forecast it doesn't really matter, because it's roughly bell-curve shaped, but if I were using this as, for instance, a decision-making tool to decide which actions to take, I'd really want to look at which ideas had a small chance of being runaway successes, and how valuable that makes them compared to other options which are surefire but don't have that chance of tail success. Choosing those ideas isn't likely to pay off on any single idea, but it is likely to pay off over the course of a business's lifetime.
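To make the difference concrete, here's a minimal sketch with made-up bins (not the actual numbers from my snapshot): the median ignores the right tail entirely, while the expected value gets pulled up by a small chance of a runaway outcome.

```python
# Minimal sketch with made-up numbers (not the platform's actual bins):
# a small probability of a runaway outcome pulls the expected value well
# above the median, which is why the mean matters for comparing growth bets.

# Each bin is (revenue midpoint in $, probability mass).
bins = [
    (5_000, 0.20),
    (15_000, 0.30),
    (25_000, 0.30),   # the median lands here
    (50_000, 0.15),
    (500_000, 0.05),  # small chance of a runaway success
]

def expected_value(bins):
    """Probability-weighted mean of the bin midpoints."""
    return sum(value * prob for value, prob in bins)

def median(bins):
    """Smallest bin midpoint at which cumulative probability reaches 0.5."""
    cumulative = 0.0
    for value, prob in bins:
        cumulative += prob
        if cumulative >= 0.5:
            return value

print(f"median:         ${median(bins):,.0f}")          # $25,000
print(f"expected value: ${expected_value(bins):,.0f}")  # $45,500, pulled up by the tail
```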
Thanks, this was great!
The estimates seem fair. Honestly, they're much better than I would expect given the limited info you had and the assumptions you made (the biggest one that's off is that I don't have any plans to only market to EAs).
Since I know our market is much larger, I use a different forecasting methodology internally which looks at potential marketing channels and growth rates.
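As a rough sketch of that shape (the channel names, starting figures, and growth rates below are placeholders, not my real numbers), I project each channel forward at its own growth rate and then sum into quarters:

```python
# Placeholder channels and rates, purely illustrative of the method:
# project each marketing channel forward at its own monthly growth rate,
# then sum the monthly totals into quarters.

channels = {
    # name: (starting monthly revenue in $, assumed monthly growth rate)
    "webinars":   (2_000, 0.10),
    "podcast":    (500,   0.25),
    "email_list": (1_000, 0.05),
}

def quarterly_revenue(channels, months=12):
    """Sum per-channel compounded revenue into quarterly totals."""
    monthly_totals = [
        sum(start * (1 + growth) ** month for start, growth in channels.values())
        for month in range(months)
    ]
    return [sum(monthly_totals[q * 3:(q + 1) * 3]) for q in range(months // 3)]

print([round(q) for q in quarterly_revenue(channels)])  # Q1..Q4 revenue estimates
```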
I didn't really understand how you were working the growth rate into your calculations in the spreadsheet, maybe just eyeballing what made sense based on the current numbers and the total addressable market?
One other question I have about your platform is that I don’t see any way to get the expected value of the density function, which is honestly the number I care most about. Am I missing something obvious?
Hey, I run a business teaching people how to overcome procrastination (procrastinationplaybook.net is our not yet fully fleshed out web presence).
I ran a pilot program that made roughly $8,000 in revenue by charging 10 people for a premium interactive course. Most of these users came from a couple of webinars that my friends hosted; a couple came from finding my website through the CFAR mailing list and from webinars I hosted for my Twitter friends.
The course is ending soon, and I’ll spend a couple of months working on marketing and updating the course before the next launch, as well as:
1. Launching a podcast breaking down skills and models, and selling short $10 lessons for each of them that teach how to acquire the skill.
2. Creating a sales funnel for my pre-course, which is a do-it-yourself planning course for creating the “perfect procrastination plan”, probably selling for $197.
3. Creating the “post-graduate” continuity program after people have gone through the course, allowing people to have a community interested in growth and development, priced from $17/month for basic access to $197 with coaching.
Given those plans for launch in early 2021:
1. What will be my company’s revenue in Q1 2021?
2. What will be the total revenue for this company in 2021?
I recommend Made to Stick by Chip and Dan Heath.
This just seems like you’re taking on one specific worldview and holding every other worldview up to it to see how it compares.
Of course, this is an inherent problem with worldview diversification: how to define what counts as a worldview and how to choose between them.
But still, intuitively, if your meta-worldview screens out the vast majority of real-life views, that seems undesirable. The meta-worldview that coherency matters is important, but it should be balanced with other meta-worldviews, such as that what matters is how many people hold a worldview, or how much harmony it creates.