You might want to check out some of Phil Trammell’s reports, where he analyzes what he calls time preference (time discount rate) with respect to philanthropy: https://docs.google.com/document/d/1NcfTgZsqT9k30ngeQbappYyn-UO4vltjkm64n4or5r4/edit
Will MacAskill has stepped down as trustee of EV UK
Nick Beckstead is leaving the Effective Ventures boards
Congrats on having invented something exciting!
Usually, the best way to get innovative new technology into the hands of beneficiaries quickly is to get a for-profit company to invest with a promise of making money. This can happen via licensing a patent to an existing manufacturer, or creating a whole startup company and raising venture capital, etc.
One of the things such investors want to see is a ‘moat’: something that this company can do that no other company can easily copy. A patent/exclusive license is a good way to create a moat.
There are some domains like software where simply publishing ‘open source’ ideas causes those ideas to get used, but for most domains including manufacturing, my default expectation is that new tech is not used unless someone can make money off it. Pharma is a great example—there are tons of vaccines and niche treatments that we don’t have manufacturing for, even though we know how, because nobody can make enough money doing it.
I’d be really interested to hear whether you are considering seeing this idea through yourself. It sounds like you’re doing a Ph.D., but if you would consider dropping out to work on this as a startup, then I think doing so would be one of the best ways to maximize this idea’s chances of success. (In large part because your brain probably contains tons of highly relevant info for making this product work at scale!)
You wrote: “most scientists do patent and keep everything secret within companies”—but I wonder if this indicates confusion, since usually patents don’t keep things secret; they are published. Patents just grant their owner a legal monopoly on the technology for a limited time.
Can you get introduced to any food-manufacturing people (ideally folks at bigger companies, in charge of finding + investing in new food products), who you can talk to about your idea, even just to get advice? Or, founders of similar food tech companies who came up with a good idea and had to decide whether to patent it?
I’m a bit confused about this because “getting ambitious slowly” seems like one of those things where you might not be able to successfully fool yourself: once you can conceive that your true goal is to cure cancer, you are already “ambitious”; unless you’re really good at fooling yourself, you will immediately view smaller goals as instrumental to the big one. It doesn’t work to say “I’m going to get ambitious slowly.”
What does work is focusing on achievable goals though! Like, I can say I want to cure cancer but then decide to focus on understanding metabolic pathways of the cell, or whatever. I think if you are saying that you need to focus on smaller stuff, then I am 100% in agreement.
I avoid reading, and don’t usually respond to, comments on my posts, or replies to my own comments.
The reason is that it’s emotionally intense to do so: after posting something on the EA Forum, I avoid checking the forum at all for ~24h or so (for fear of noticing replies in the ‘recents’ area, or changes in my karma), and after that I mainly skim for people flagging major errors or omissions that need my input to be resolved.
Lizka’s “You Don’t Have to Respond to Every Comment” talks about this a bit (and was enormously helpful for me). I am not strongly averse to posting stuff and having people read it in the abstract; I just don’t like the short-term emotional swings that come with individual replies to things.
Can you give some evidence/an example for “unable to mentor many of the qualified applicants”?
I think this is a useful question and I’m glad to be discussing this.
I agree with many of your concerns—and would love to see a more culturally-unified EA on the axis of how conscious we are of our own impact—but I also think you’re failing to acknowledge something crucial: as much as EA is about altruism, it is also about focusing on what’s important, and your post doesn’t acknowledge this as a potential trade-off for the folks you’re discussing.
You’ll find that a lot of EA folks perceive climate change as a real problem but also see marginal carbon costs as not worth focusing on, given all the other problems in the world and the fact that carbon is offsettable. You are reading this as a “careless attitude,” but I don’t think that’s a fair characterization. There are real tradeoffs to be made here about how to use marginal attention: these folks may be offsetting and just not talking about it, or they may have decided that it won’t make enough difference in the short run; regardless, I think you have insufficient evidence to conclude that their attitude is wrong.
(I personally offset all my CO2 with Wren and think for at least 5 minutes about each plane flight I decide to take to decide if it is worth it; but have never written about this till now, and would have no reason to bother writing it down.)
I’m interested in the discussion of whether, in fact, we are at a hinge of history; maybe this is a good comments section for that. I agree that Will’s analysis barely scratches the surface and has some flaws.
Factors under consideration for me:
Existence of technologies that can have direct impacts on future society through making the world much better or much worse: computation and AI, the internet & social media, nanotech, biotech, the printing press, energy production / Dyson spheres
Do population/economic growth rates matter? i.e., if we are growing fast now vs slow, what would that imply?
Institutional attitudes: Do we have institutions that change behavior in controllable ways? What do people believe about the future impact of tech/ideas like money, life extension, social media, systems of government like the UN/democracy/Marxism/fascism, principles like liberalism/economics, strategies for national wealth like expansionism/colonialism/mercantilism, and so on?
Attitudes about change: are we able to convince people of things? Do people change their minds quickly or slowly? What systems exist to get information out, and what feedback mechanisms do they have?
Moral attitudes: How much do people care about others? To what degree do they care about those distant from them? Do people prioritize suffering, pleasure, satisfaction, etc? Do they believe they can change the world? Do they believe that there are moral errors that they or others are regularly making?
Satisfaction & Dissatisfaction attitudes: How much do people believe the world should be better than it is, and how motivated are they to “invest” to make things go better? e.g., Cold War & space exploration, colonialism era, building bridges and tunnels and other infra?
I see arguments for the hingiest era being in the past, present, or future:
- arguments for the past, e.g. 1780 or thereabouts: there were far fewer people, and they could have predicted (based on observing the spread of religion) that the printing press, Industrial Revolution, European colonialism/mercantilism, and/or economic liberalism and democracy would have a huge impact. They also may have been able to predict moral progress, e.g. that slavery is bad. They probably would have been able to see that certain institutions had a ton of influence and were in turn influenceable.
My instinct is that they would have failed to predict as much progress in public health as we actually got, and would therefore have expected future people to live in greater suffering than they do. Maybe this would have reduced their motivation to imagine a future with far more people.
They also could probably not have imagined computing and the internet in any particular detail.
- arguments for this century (2000 to 2100): computing is going fucking crazy; there has never been a technology like this that has enabled such short feedback loops to society. Social media has shown that attitudes can change really quickly when info-consumption is addictive and anyone can publish widely. But these tech changes can’t go on forever; we will certainly reach the limits of physics this century and change will slow down dramatically, so whatever we settle on soon will greatly impact how the future shakes out.
The counter-argument is that we haven’t seen much popular moral progress, and it seems to me that there is far more to go here; our pace of tech development is outpacing our moral development.
Also, while institutions have a ton of power, they mostly seem stuck in the past and hard to change; the institution that will impact the next thousand years probably doesn’t exist yet, and it is not clear what it will look like.
- arguments for the future: essentially, that computing is just the beginning; if we survive this era then we’ll reach even more impactful tech (bio, nano, space, superluminal, etc.), and new impactful institutions will arise that don’t depend too heavily on whatever we are doing today; or maybe we’ll be multiplanetary or in VR or whatever. Secondly, humans need to ‘catch up’ in moral development to our technological development, and that just takes time and could easily stretch beyond 2100.
Overall I lean towards the present: tech is moving faster now than at any point in the past, and I see reasons for it to slow down by the end of the century. The slow pace of moral development pushes the hinginess into the future, but I think the uncertainty about whether we survive until then outweighs the changes in our morality and societal organization that I expect after that point. If I were certain we would survive another 100 years, then I might be convinced that the future will be more hingey than the present.
The GiveDirectly founders (Michael Faye and Paul Niehaus) also founded TapTapSend (https://techcrunch.com/2021/12/20/taptap-send-raises-65m-to-build-cross-border-remittances-focused-on-the-most-underserved-markets/) which competes with Sendwave to keep remittance prices down.
Yeah. I just joined the board, so I don’t know exactly why, but we are definitely aware of missing this deadline, and the Charity Commission is as well; I think it is caused by the ongoing investigation.
It’s a fair critique. I use “legible” in this way, and I don’t really want to give it up; I think it’s not too bad jargon-wise, because even non-EA people seem to understand it without too much definitional preamble.
Your alternatives don’t quite capture the idea right:
If I were to set a “clear” or “understandable” goal, I would expect people to be able to make sense of the goal statement but not necessarily see what KPIs went into it.
“Verifiable” is the opposite: I would expect people to believe they could check whether or not we made the goal, but not simply read it and get it.
The closest is “clear and verifiable” but it’s three words—and “legible” is still better, because it points at a more system-1-ish implementation of clarity and verifiability.
Why does it make sense for Rethink Priorities to host research related to all five of the listed focus areas within one research org? It seems like they have little in common (other than, I guess, all being popular EA topics)?
You said in your “Five years” post that you are planning to do more self-eval and impact assessments, and I strongly encourage this. What are the most realistic bits of evidence you could get from an impact report of Rethink Priorities which would cause you to dramatically update your strategy? (or, another generator: what are you most worried about learning from such assessments?)
How has your experience as co-CEO been? How do you share responsibilities? Would you recommend it to other orgs?
Excellent piece! I agree with this mindset but regularly struggle to explain why it’s motivating / good to think this way, and I think you’ve done a nice job.
I don’t believe this is an unbelievably terrible idea; it makes sense to do this in some circumstances. That said, take resentment buildup seriously! If you feel that you are the sort of person who has even a small chance of feeling resentful about this choice later on, it is probably not worth it. You need to feel unambiguously good about this decision in the short and long term.
Yeah, sorry, I wrote the comment quickly and “resources” was overloaded. My first reference to resources was intended to be money; the second was information like career guides and such.
I think the critical-info-in-private thing is actually super impactful towards centralization: when the info leaks, the “decentralized people” have a high-salience moment where they realize that what’s happening privately isn’t what they thought was happening publicly; they feel slightly lied to or betrayed, and they lose perceived empowerment and engagement.
“The tractability of further centralisation seems low”
I’m not sure yet about my overall take on the piece but I do quibble a bit with this; I think that there are lots of simple steps that CEA/Will/various central actors (possibly including me) could do, if we wished, to push towards centralization. Things like:
Having most of the resources come from one place
Declaring that a certain type of resource is the “official” resource which we “recommend”
Running invite-only conferences where we invite all the people that are looked-up-to as leaders in the community, and specifically try to get those leaders on the same page strategically
Generally demonstrating intensely high levels of cooperativeness with people who are “trusted” along some shared legible axis, and much lower levels of cooperativeness with outsiders
Ceasing to publish critical info publicly, relying instead on whisper networks to get the word out about things
I didn’t start off writing this comment to be snarky, but I realized that we are, kind of, doing most of these things. Do we intend to? Should we maybe not do them if we think we want to push away from centralization?
While I broadly agree with Rocky’s list, I want to push back a little on your points:
Re your (2): I’ve found that small entities are in a constant struggle for survival, and must move fast and focus on the problems where they are uniquely positioned to make a difference in the world. Small-seeming requirements like “new hires have to find their own housing” can easily make the difference between moving quickly and moving slowly on some project that makes or breaks the company. I think for new entities the risks of incurring large costs before you have ‘proven yourself’ are quite high.
My experience also disagrees with your (1): as my company has grown, many forces have naturally pushed in the direction of “more professional”. New hires tend to be much more worried about being blamed for doing things too quick-and-dirty than about incurring costs on the business in order to do things the buttoned-up way, and I’ve stepped in more often to accept a risk than to prevent one (though I certainly do both!).
(Side note: as a potential counterpoint to the above, I do note that Alameda/FTX was clearly well below professional standards at >200 employees—my assumption is that Sam/execs were constantly stepping in to keep the culture the way they wanted it. If I learned that somehow most of the 200 employees were pushing in the direction of less professionalism on their own, I would update to agree with you on (1).)