The EA Forum podcast has recorded an audio version of this post here: https://anchor.fm/ea-forum-podcast/episodes/Is-effective-altruism-growing—An-update-on-the-stock-of-funding-vs—people-e158mta
Thank you! It was really good to see those posts. I’ll get to work drafting something to post in the next week or so. (I don’t spend much time at my laptop these days, by design. :)
We should add the ability to convert posts to questions (or back to regular posts, but that’s tricky because answers would have to be converted to regular comments).
Also, the editor should automatically suggest converting your post to a linkpost or question post if the title or body text matches certain patterns. For example, if you write “Crossposted from X” or “This is a linkpost” at the top, it can infer that your post is most likely a linkpost. I see a lot of posts from inexperienced users that are classified as regular posts even though they’re intended to be linkposts or questions, so I think this would be helpful to them.
Haha, oh yes! I will do that. I didn’t feel comfortable at first as I didn’t want to come across as spam (even though I’m offering this for free). I’m not interested in consulting per se (I’m building these as DIY courses), but maybe that tag would still work. Thanks!
Excellent ideas! I learned to ask “who else should I talk to about this?” as a journalist interviewing experts for articles.
I have not considered a volunteer, but it’s worth thinking about. I just want to be careful not to get into a position where this project turns into a work-like situation for me, meaning it may be preferable to do outreach on my own as I have the time rather than having to manage someone else. But good point that they likely wouldn’t have to do it for long, especially as the word starts to spread on its own. So...any takers? :)
Appreciate the link to the report; it was great to read an in-depth analysis that went into the mechanics of how things went horribly wrong. If you want to skim some highlights, you can Ctrl+F “Notably”. I always appreciate how you can see the face-palming come through in writing. A similar scenario had happened to exactly the same people with the Malachite Fund: there were lessons learned, committee meetings were held, and then the same thing happened again.
“Notably, the CRM analyst for Malachite was also the CRM analyst for Archegos. His senior chain of reporting was also the same for Malachite as for Archegos.” (#95)
If anyone enjoyed or found this interesting, I would recommend When Genius Failed: The Rise and Fall of Long-Term Capital Management (link to NY Times review), about LTCM almost crashing most major US banks by being over-leveraged when the Asian/Russian financial crisis happened in the late 1990s.
Thanks for writing back—and for the unnecessary compliments on my inaugural posts :) -- Charles! I only know the context of mis-messaging around skills at a high level, so it is hard for me to respond without knowing what ‘bad outcomes’ look like. I don’t doubt that something like this could happen, so I now see the point you were trying to make.
I was responding as someone who read your (intentionally not fleshed out) hypothetical and thought the appropriate response might actually be for someone well-suited for ‘biology’ to work on building those broad skills even with a low probability of achieving the original goal.
No laptop! That’s even better :)
And yes, to build on your caveat, I meant to add one of my own recognizing that ‘voluntarily having no connectivity because you have a nearby office, library, or computer lab is much different than not having the option to be easily connected.’
That seems correct to me for the most part, though it might be less inevitable than you suspect, or at least this is my experience in economics. At my university they tried hiring two independent little ‘clusters’ (one being ‘macro-development’, which I was in), so I had a few people with similar enough interests to bounce ideas off of. A big caveat is that it’s a fragile setup: after one person left, it’s now just two of us with only loosely related interests. I have a friend in a similarly ranked department that did this for applied environmental economics, so she has a few colleagues with similar interests. Everything said here is even truer of the top departments, if you’re a strong enough candidate to land one of those.
My sense is that departments are wise enough to recognize the increasing returns to having peers with common interests, rather than sticking faculty in teaching roles outside their research areas. This will obviously vary job-to-job and should be assessed when deciding whether to apply to a specific job; I just don’t think it’s universal enough to steer people away from academia.
regardless of what speed they traveled while moving between stars
Adding to my other reply to your other comment I just made, let me just clarify that the model I’m working with is the “fast colonization” model from 25:20 of this Stuart Armstrong FHI talk, in which von Neumann probes are sent directly from their origin solar system to each other galaxy, rather than hopping from galaxy to galaxy (as in the “slow colonization” model used by Sagan/Newman/Fogg/Hanson, according to Stuart’s slide).
So if >0.99c probes are possible, then I think the hypothesis I described is at least plausible, since civilizations indeed wouldn’t see other expanding civilizations until those civilizations reached them.
To clarify, I am pointing out that if extraterrestrials exist that are mining stars for energy and doing other large-scale things that we’d expect to be visible from other solar systems or galaxies, and if those extraterrestrials are >X light-years away from us and only started doing those large-scale things <X years ago, then we would not expect to see them because the light from their civilization would not yet have had time to reach us.
So the speed of expansion of their civilization isn’t a necessary aspect of why we can’t see them.
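The visibility condition above can be put as a toy check: light travels one light-year per year, so a civilization’s large-scale activity is only visible to us if it has been going on at least as long as the light-travel time. (A hypothetical sketch, with arbitrary example numbers.)

```python
# Toy check of the light-cone argument: a civilization at distance d
# light-years whose large-scale activity began t years ago is visible
# to us only if t >= d, since its light travels one light-year per year.
def visible(distance_ly: float, years_active: float) -> bool:
    return years_active >= distance_ly

# 100,000 ly away, started 50,000 years ago: the light hasn't reached us yet
print(visible(100_000, 50_000))   # False
print(visible(100_000, 200_000))  # True
```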
However, if the nature of our universe is such that extraterrestrials are likely to have arisen elsewhere in our galaxy (meaning <100,000 ly from us), then what’s the explanation for why they arose in the last <100,000 years and not in the billions of years before that? That would seem improbable a priori.
One (partial) explanation for that coincidence is if we hypothesize that the nature of our universe is such that any civilization that arises and reaches a point of doing large-scale things that would be visible from many light-years away also expands at near the speed of light beginning as soon as it starts having those large-scale effects. If we further assume that such expansion reaching our solar system before now would have prevented us from existing today (e.g. by extinguishing life on Earth and replacing it with something else), then this serves as a (partial) explanation for the above coincidence by introducing an observation selection effect where we only exist in the first place because no other extraterrestrials have arisen within X ly of us in the last X years.
Note that I called this (“intelligence expands at (near) light speed once it starts having effects that would be visible from light years away”) hypothesis a “partial” explanation above (for lack of a better word) to note that while it could explain why it’s not surprising that we don’t see signs of extraterrestrials mining stars (even conditional on them existing), it is also a hypothesis that we find ourselves in a very rare world (simulation possibilities aside): one in which intelligence arose more than once in our vicinity, but at almost exactly the same time (e.g. 13.79995 and 13.8 billion years after the big bang, if some other civilization in our galaxy started expanding 50,000 years ago), which a priori is unlikely.
As an extension to this model, I wrote a solver that finds the optimal allocation between the AI portfolio and the global market portfolio. I don’t think Google Sheets has a solver, so I wrote it in LibreOffice. Link to download
I don’t know if the spreadsheet will work in Excel, but if you don’t have LibreOffice, it’s free to download. I don’t see any way to save the solver parameters that I set, so you have to re-create the solver manually. Here’s how to do it in LibreOffice:
Go to “Tools” → “Solver...”
Click “Options” and change Solver Engine to “LibreOffice Swarm Non-Linear Solver”
Set “Target cell” to D32 (the green-colored cell)
Set “By changing cells” to E7 (the blue-colored cell)
Set two limiting conditions: E7 ≥ 0 and E7 ≤ 1
Given the parameters I set, the optimal allocation is 91.8% to the global market portfolio and 8.2% to the AI portfolio. The parameters were fairly arbitrary, and it’s easy to get allocations higher or lower than this.
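For anyone who would rather not set up LibreOffice, the same kind of optimization can be sketched in a few lines of Python. This is only a hypothetical stand-in for the spreadsheet: the return/volatility numbers and the log-utility objective below are placeholder assumptions, not the model’s actual parameters.

```python
# Hypothetical sketch: choose the fraction x in [0, 1] allocated to an
# "AI portfolio" that maximizes expected log utility of the mix with the
# global market portfolio. All numbers are made-up placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 100_000
# Lognormal gross returns guarantee 1 + r > 0, so log utility is defined.
market = np.exp(rng.normal(0.04, 0.15, n)) - 1  # placeholder market returns
ai = np.exp(rng.normal(0.06, 0.40, n)) - 1      # placeholder AI returns

def neg_expected_log_utility(x):
    mix = 1 + x * ai + (1 - x) * market
    return -np.mean(np.log(mix))

res = minimize_scalar(neg_expected_log_utility, bounds=(0, 1), method="bounded")
print(f"optimal AI allocation: {res.x:.3f}")
```

Swapping in the spreadsheet’s actual return assumptions should reproduce something close to the solver’s answer.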
But I’d guess the ability to do this sort of tweak would follow pretty quickly.
After reading your latest post on temporary copies, I’m thinking that this would quickly become the #1 priority for brain simulation research. As a real-life analogy, humans very quickly abandoned horses in favor of cars, since having a tool that works 24/7 without complaint is much better than a temperamental living being. So the phase of copies being treated with dignity would be relatively short-lived, lasting only until the underlying circuitry could be tweaked to make it morally okay to force simulations to work 24/7 without them “suffering” in any way, as they would be incapable of negative emotion.
Now, allowing for unlimited tweaking of brain circuitry does make for bad science fiction (i.e. the MMAcevedo short story breaks down in a world where it’s possible), but I suspect it would be the ultimate endpoint for virtual workers.
I saw your excellent posts on being an economics professor and on cutting WiFi.
Both were great. It’s wonderful to hear your perspective as an economics professor and to hear about your work!
Also, thanks for your comment. I think I get what you’re saying:
(It’s not clear why anyone should listen to my opinions about their life choices.) But yes, it seems perfectly valid to go into any discipline, and you can add huge value and generate impact in many paths of life.
Also, there’s a subthread here about elitism that is difficult to unpack, but it seems healthy to discuss “production functions”, skill and related worldviews explicitly at some point.
To be frank, by giving my narrative example, I was trying to touch on past messaging issues that actually happened.
These messaging issues are alluded to in this article, also by Benjamin Todd:
Basically, the problem is as suggested in my example—in the past, the need for very specific skills or profiles was misinterpreted as a need for general talent. This did result in bad outcomes.
I chose to give my narrative instead of directly pointing to a past instance of the issue.
By doing this, I hoped to be more approachable to those less familiar with the history. It is also less confrontational while making the same point.
For an analogy, imagine making a statement that the EA movement needs more “skill in biology”. In response, conscientious, strong EAs update on this and change careers. However, what was actually needed was world-class leaders in biology whose stellar careers involved special initial conditions. Unfortunately, this means that the efforts made by even very strong EAs were wasted.
This doesn’t immediately strike me as a bad outcome, ex ante. It’s very hard to know (1) who will become a world-class researcher or (2) whether non-world-class people move the needle by influencing the direction of their field ever so slightly (maybe by increasing the incentives to work on an EA problem via citations, peer-reviewing these papers, etc.). I am by no means world-class, but I’ve written papers that (I hope) pave the way for better people to work on animal welfare in economics; participate in and attend conferences on welfare economics; signed a consensus statement on research methodology in population ethics; try to be a supportive, encouraging colleague to welfare economists working on GPR topics; etc. I also worked under a world-class researcher in grad school and now sometimes serve as a glorified assistant (i.e., coauthor) who helps him flesh out and get more of his ideas onto paper. In your example, if the community ‘needs more people in biology’, I think the sort of scaffolding I try to provide is probably(?) still impactful. (Caveat: I’m almost certainly over-justifying my own impact, so take this with a grain of salt.)
If 80K were pushing people into undesirable careers with little earnings potential, this might be a legitimate problem. But I think most of the skills built in these hits-based careers are transferable and won’t leave you in a bad spot.
Co-authors on posts should also share the karma of the post.
I don’t know how the split should work: whether it’s an equal split, or some percentage of the whole to each (e.g. if the post has 100 karma, each person gets 75 or something).
(I noticed this on 1 account for a post the person had co-written ~6 months ago)
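For concreteness, here’s a quick sketch of the two sharing rules mentioned above. This is purely hypothetical, not how the Forum actually allocates karma:

```python
# Two hypothetical karma-sharing rules for co-authored posts.
def equal_split(total_karma: float, n_authors: int) -> float:
    """Each co-author gets an equal share of the post's karma."""
    return total_karma / n_authors

def fixed_fraction(total_karma: float, fraction: float = 0.75) -> float:
    """Each co-author gets a fixed fraction of the post's total karma."""
    return total_karma * fraction

print(equal_split(100, 2))    # 50.0
print(fixed_fraction(100))    # 75.0
```

Note that the fixed-fraction rule mints extra karma (two authors of a 100-karma post get 75 each), which may or may not be desirable.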
I know this is a super old comment, but I read it as just “imagine it were $100”, where $100 is just a random number. The precise number doesn’t really impact the article’s point, but I’d still be interested to hear some calculations for that value...
Thank you for sharing these — I may pick up the Clarke book as summer reading!
In a similar vein I enjoyed these two books with case studies of disasters:
Flirting with Disaster: Why Accidents are Rarely Accidental
Warnings: Finding Cassandras to Stop Catastrophes
Cheers- thanks for the comment!
I’m using the term zero marginal cost colloquially as is common parlance in the tech sector.
Your app might spread through word of mouth; the server costs are trivial, and then you can scale at ~zero marginal cost.
As you say, in practice tech firms often spend a few dollars on acquiring new users/customers.