You can give me anonymous feedback here: https://forms.gle/q65w9Nauaw5e7t2Z9
Charles He
Uh,
There’s a large, high effort comment chain surrounding the wording of “autism” in the title, but the differences in opinion seemed modest, and both original positions seemed reasonable and humble.
On the other hand, there’s a much more substantive issue: two of the leaders mentioned in the post have terrible traits that I don’t think any effective altruists like.
These traits make them bad people and also bad “leaders” in a way that undermines the post.
In fact, the issues are so severe that I have to temper my views and omit details in this comment because one of the people mentioned is manipulative and personally vindictive on social media.
These two people are indeed far more influential and successful than almost any other human being. They’re much smarter than me too.
However, I doubt they are more intelligent or technically able than many effective altruists, and their success does not make them good leaders or role models. They are successful and well known because of the outcome of the tech boom: more specifically, financial engineering and the ability to use narratives to extract talented labor in that environment.
Even with these tailwinds, these people are so self destructive that their excesses could have crushed them in a slightly different realization of events, such as missing a single key deal.
I don’t know them personally, but I believe I have inside knowledge of their behavior that strongly supports the idea that they are systemically predatory. I also have other information, such as direct accounts from financiers who view the leadership of one as an absolute negative.
By the way, the speaking style of one of these people is so pronounced that it is suspected of being a ploy specifically to extract goodwill for “non-NT” people. This seems plausible to me and makes their use as an example of someone with autism succeeding particularly misleading.
There are dozens of people who would be better examples for this post; popular figures such as Bill Gates or Jeff Bezos come to mind (setting aside the apparently large effort needed to drag around social constructs).
I know my comment seems unbecoming, but it is relevant to the author’s point in more than one way.
I would be interested in a counterpoint, but it seems difficult to get an informed opinion except from someone senior in these industries who has experience with senior leadership.
Hi Ula,
Can you describe the problems more specifically and consider the thoughts below?
I think abuse or exploiting someone’s gender, race, or outright sexual misconduct is an abomination.
But what about the perspective that “bad work environments” are a separate, distinct issue from this kind of abuse?
Work environments in any organization can become terrible and this comes from mismanagement or predatory management.
Unfortunately, these issues might be systemic at all nonprofit organizations: management ability and resources are low, and the “business model” is very performative, which can reduce intellectual honesty. Also, a key resource is a stream of passionate and pliable volunteers, who are both difficult to manage and less able to resist abuse.
If this perspective is correct, it could be difficult to solve because these are root causes. For example, even if you could richly fund and staff a few organizations with great difficulty, you cannot police all organizations that would pop up.
I think my thought in my comment is basic and I may lack knowledge of the specific events.
What do you think about what I said?
Since my comment yesterday at 10:14 AM PST, there have been changes to your top level comment.
For example, your question, asking about a safe, global space with three specific goals, did not exist, and you have added a caveat saying that you do not know if these issues are widespread or common.
Other comments have appeared, such as from Daniela Waldhorn, who has described appalling abuse and has accused current management of illegal and discriminatory practices.
I think my comment was reasonable because it was hard to understand what if any changes could be effected in response to your comment, or frankly, what the underlying situation was/is.
Despite your caveat, based on the comments, this abuse seems appalling and widespread. This appears to be a public issue that affects everyone in this space.
I think, based on some of the things you said about a lack of discussion or engagement around respected or powerful people, engaging with certain comments here might support the objectives you are aiming for.
Also, unfamiliarity with your or Daniela’s experiences does not mean personal unfamiliarity with similar experiences outside the space of animal welfare.
Consider the premise that the current instantiation of Effective Altruism is defective, and one of the only solutions is some action by Open Philanthropy.
By “defective”, I mean:
A. EA struggles to engage even a base of younger “HYPS” and “FAANG”, much less millions of altruistic people with free time and resources. Also, EA seems like it should have more acceptance in the “wider non-profit world” than it has.
B. The projects funded by or associated with Open Philanthropy and EA often seem to merely “work alongside EA”. Some constructs or side effects of EA, such as the current instantiations of Longtermism and “AI Safety”, have negative effects on community development.
Elaborations on A:
Posts like “Bycatch”, “Mistakes on road” and “Really, really, hard” seem to suggest serious structural issues and underuse of large numbers of valuable and highly engaged people.
Interactions in meetings with senior people in philanthropy indicate low buy-in. For example, in a private, high-trust meeting, a leader mentions skepticism of EA, and when I ask for elaboration, the leader pauses, visibly shifts uncomfortably in the Zoom screen, and begins slowly, “Well, they spend time in rabbit holes…”. While anecdotal, this also hints that widespread “data” may not be available due to reluctance (to be clear, fear of offending institutions associated with large amounts of funding).
Elaboration on B:
Consider Longtermism and AI as either manifestations or intermediate reasons for these issues:
The value of present instantiations of “Longtermism” and “AI” is far more modest than it appears.
This is because they amount to rephrasings of existing ideas, and their work usually treads inside a specific circle of competence. This means that no matter how stellar, their activities contribute little to execution on the actual issues. This is not benign, because these activities (unintentionally) allow the backing-in of worldviews that encroach upon the culture and execution of EA in other areas and as a whole. They produce “shibboleths” that run into the teeth of EA’s presentation issues. They also take attention and interest from under-provisioned cause areas that are esoteric and unpopularized.
Aside: This question would benefit from sketches of solutions and sketches of the counterfactual state of EA. But this isn’t workable as this question is already lengthy, may be contentious, and contains flaws. Another aside: causes are not zero-sum and it is not clear the question contains a criticism of Longtermism or AI as a concern, even stronger criticism can be consistent with say, ten times current funding.
In your role in setting strategy for Open Philanthropy, will you consider the above premise and the three questions below?
To what degree would you agree with the characterizations above or (maybe unfair to ask) similar criticisms?
What evidence would cause you to change your answer to question #1? (E.g. if you believed EA was defective, what would disprove this in your mind? Or, if you disagreed with the premise, what evidence would be required for you to agree?)
If there is a structural issue in EA, and in theory Open Philanthropy could intervene to remedy it, is there any reason that would prevent intervention? For example, from an entity/governance perspective or from a practical perspective?
Question:
What would you personally want to do with 10x the current level of funds available? What would you personally like to do with 200x the current level of funds available?
Some context:
There is sentiment that funding is high and that Open Philanthropy is “not funding constrained”.
I think it is reasonable to question this perspective.
This is because while Open Philanthropy may have access to $15 billion of funds from the main donors, annual potato chip spending in the US may be $7 billion, and bank overdraft fees may be about $11 billion.
This puts into perspective the scale of Open Philanthropy next to the economic activity of just one country.
Many systemic issues may only be addressable at this scale of funding. Also, billionaire wealth seems large and continues to grow.
Is there an “API” for this forum, in order to access comments, posts and meta data?
If not, what is your perspective on scraping (for purposes of performing analysis, extracting content, or other uses that might be “near-EA”)?
Answers can be operational, personal opinion, legal, etc.
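As a hedged aside on the scraping half of the question: absent an official API, a minimal “polite” scraper would at least respect robots.txt and rate limits. The sketch below is purely illustrative; the forum URL and the robots.txt rules are hypothetical assumptions, not a description of this forum’s actual setup.

```python
# Hypothetical sketch of polite scraping: check robots.txt rules and honor
# any declared crawl delay. The forum base URL below is an assumption.
import urllib.robotparser

FORUM = "https://forum.example.org"  # hypothetical forum base URL

# Normally you would fetch robots.txt from the site; here we parse a
# hard-coded example so the sketch is self-contained.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Crawl-delay: 5",
    "Disallow: /admin/",
])

def can_fetch(path: str) -> bool:
    """Check a path against the parsed robots.txt rules."""
    return rp.can_fetch("*", FORUM + path)

def polite_delay() -> float:
    """Honor Crawl-delay if declared, else fall back to 1 second."""
    return rp.crawl_delay("*") or 1.0

# A fetch loop would then sleep polite_delay() seconds between requests
# and skip any path where can_fetch(path) is False.
```

Whatever the operational or legal answer turns out to be, something like the above is a reasonable floor for any “near-EA” analysis project.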
Hi Brian,
As a good faith question, can you elaborate on your UX interests or concerns about the website or forum, if any?
Your background seems to give you strong knowledge here, and you have raised a similar question before.
Basically, what can the UX do better? What does your ideal UX, or UX improvements look like? On another level, what are the goals it would achieve over the current design?
I use a variety of websites across a number of industries. I think there are drawbacks to slick and trendy websites. To make a point of it, sometimes websites that look old but “just work”, can be great. They signal confidence and longevity. EA is far from a brutalist style or something, but the design seems like good “packaging” for the resulting EA experience (which frankly is a lot of reading).
It seems to work, be approachable and it’s not clear to me why it’s not “basically optimal”.
Hi Brian,
Thanks for the thoughtful reply:
There are some comments below. They verge on debate, but I am not trying to be contentious.
Comments on #1-3:
I think your points #1-#3 are more along the lines of a specific “business choice”. Importantly, choices have drawbacks: promoting one aspect or feature in a limited space is a choice to use a limited resource.
Based on what you said, it seems like #1 and #2 are important and valuable. If one of EA’s core activities is its communities, that should be emphasized, and adding it would be a huge improvement. If EA’s contributors are substantially non-white, this can’t be neglected in photos.
Now, I personally like the idea of promoting communities and genuinely reflecting on the population. However, it also verges into what I might call “politics” or at least non-UX improvements.
Comments on #4: Overall, I think the visual design of CEA’s and GWWC’s current websites is better than the EA website’s. I think CEA’s is really good currently, mainly because of their use of nice photos, especially of people.
Below are the top of these pages and maybe what you are referring to:
The pages are excellent, but also are not what I would call “UX design” as I imagined.
They use visual principles that I see commonly on many websites made in the last 5 years.
To try to emphasize this, for a side project, someone I know created a similar page (similar, I think, in every sense, performance, design, and high quality photos) in a few hours and it took off. I might be brutalizing/offending UX designers here.
Also, the main difference in design is the simplicity of the elements; in particular, CEA’s design is an extremely simple and effective “landing page”. GWWC’s, also simple, presents a strong narrative in a top-down scroll. (I might be messing up terms of art.)
The current EA website is busier, having a few more elements, does not really use scrolling, and has more words. Again, as in my previous comment, it’s not clear this is a bad thing and I might prefer it.
Theme
The theme of this comment is that your reply is different from what I expected. I might have expected to learn of a “UX improvement” as some strictly better design choice (“stop using garish colors”), or a better mode of use in some sense (“swiping right on Tinder”).
I agree that design (e.g. “minimalism” or something) might help EA and I wanted to learn what this is.
But my bias is to avoid technological solutions unless it’s clearly needed.
Also, if you have a distinct goal, “we need more non-white people in photos as it better reflects and welcomes the actual community”, I prefer to just state it instead of risking conflating a distinct objective.
Also, really going off topic here, I would like to know more about your experiences with your ethnicity if you have them (note that technically I might have the same ethnicity as you).
You seem to have a lot of thoughtful content and this would be an interesting perspective.
I don’t have the energy to fully engage with these, but maybe we just misunderstand each other in terms of what we define as UI/UX design. To me, and many other UI/UX designers, the UI/UX design is the end-to-end experience of using a website, product, or service, so I think everything I pointed out still falls into the realm of UI/UX design. It’s not just about better interactions. And I think content choices / tradeoffs still can be considered part of the UI/UX design.
It seems improbable that the control or selection of specific content, especially the choices you illustrated, falls under the purview of UX.
It unworkably expands into decisions that are basically always controlled by other parts of the organization (e.g. exec).
To see this another way with examples: we would not accept exec blaming their UX designers for racist or inappropriate content. Similarly, a board would find it ridiculous if a CEO said their “community groups” initiative failed because their UX designer decided it did not belong on the front page.
I know someone who worked adjacent to this space (e.g. hiring and working with the people who hire UX designers).
Someone presenting a UX design comprising the choices in your top-level comment would risk being perceived as advancing an agenda.
Hi Brian,
(Uh, I just interacted with you but this is not related in any sense.)
I think you are interpreting Open Phil’s giving to “Scientific research” to mean it is a distinct cause priority, separate from the others.
For example, you say:
… EA groups and CEA don’t feature scientific research as one of EA’s main causes—the focus still tends to be on global health and development, animal welfare, longtermism, and movement building / meta
To be clear, in this interpretation, someone looking for an altruistic career could go into “scientific research” and make an impact distinct from “Global Health and Development” and other “regular” cause areas.
However, instead, is it possible that “scientific research” mainly just supports Open Philanthropy’s various “regular” causes?
For example, a malaria research grant is categorized under “Scientific Research”, but for all intents and purposes is in the area of “Global Health and Development”.
So under this interpretation, funding sits in “Scientific Research” sort of as an accounting matter, not because it is a distinct cause area.
In support of this interpretation, taking a quick look at the recent grants for “Scientific Research” (on March 18, 2021) shows that most are plausibly in support of “regular” cause areas:
Similarly, sorted by largest amount of grant, the top grants seem to be in the areas of “Global Health”, and “Biosecurity”.
Your question does highlight the importance of scientific research in Open Philanthropy.
Somewhat of a digression (but interesting) are secondary questions:
Theories of change related, e.g. questions about institutions, credibility, knowledge, power and politics in R1 academia, and how could these be edited or improved by sustained EA-like funding.
There is also the presence of COVID-19 related projects. If we wanted to press, maybe unduly, we could express skepticism of these grants. This is an area immensely less neglected and smaller in scale (?)—many more people will die of hunger or sanitation in Africa, even just indirectly from the effects of COVID-19, than the virus itself. The reason why this is undue is that I could see why people sitting on a board donating a large amount of money would not act during a global crisis in a time with great uncertainty.
These seem like great points, and of course, your question stands.
I wanted to say that most R1 research is problematic for new grads: this is because of the difficulty of success, low career capital, and, frankly, “impact” can also be dubious. It is also hard to get started; it typically requires a PhD and post-doc(s), all poorly paid (contrast with, say, software engineering).
My motivation for writing the above is for others, akin to the “bycatch” article—I don’t think you are here to read my opinions.
Thanks for responding thoughtfully and I’m sure you will get an interesting answer from Holden.
Hi Rasmus,
This seems fantastic, both for doing the work itself and sharing it!
I know someone who built multiple orgs. Based on this, startups seem to be a dizzying mess: basically, you need to do everything at 1000% quality (e.g. early hires and strategic decisions) and you have 1000% more tasks than time/energy to do them.
This makes writing about the process difficult (it’s a surreal situation where it’s hard to know what reality is, writing itself can create ideas and structures that may not be Truth).
It’s impressive that you are writing about it!
Now, I’m writing this comment because of what you said here:
Most of the writings are going to be very basic for most of you guys here, as it really is intended for a non-EA audience, but it might still be interesting to take a look behind the scenes.
I don’t know what this means, or more honestly, I disagree with it.
Even if someone has seen 100 startups, they would still benefit from learning about your instance of startup and your experiences, especially shared honestly as you seem to.
Also, in your post you say:
How looking for inspiration in other countries helps shape the idea, building an MVP, getting together the initial team (and how we find unexpected help). There will for sure also be an article on how our egos almost got in the way of the easiest solution to the problem (and how it still might) and what we do to get around it.
How do you start a nonprofit? Why should you start a nonprofit? How do you have the biggest impact you can? How do you raise donations? How do you get a bank account? How do you become a registered charity?
It’s not clear why the EA community would have advanced skills for building startups, non-profits, MVPs or teams.
Really, what I’m saying is that it could be the opposite: this knowledge is new and valuable (and if so, an easy read would be particularly valuable).
(But maybe I’m wrong, maybe I’m misrepresenting your views, and maybe someone else reading this will correct me.)
Thanks for your great post!
Heyo Heyo!
C-dawg in the house!
I have concerns about how this post and research is framed and motivated.
This is because its methods imply a certain worldview, it is trying to inform hiring or recruiting decisions in EA orgs, and we should be cautious.
Star systems
Like, loosely speaking, I think “star systems” is a useful concept / counterexample to this post.
In this view of the world, someone’s in a “star system” if a small number of people get all the rewards, but not from what we would comfortably call productivity or performance.
So, like, for intuition, most Olympic athletes train near poverty but a small number manage to “get on a cereal box” and become a millionaire. They have higher ability, but we wouldn’t say that Gold medal winners are 1000x more productive than someone they beat by 0.05 seconds.
You might view “star systems” negatively because they are unfair. Yes, and in addition to inequality, they may have very negative effects: they promote echo chambers in R1 research, and they also support abuse like that committed by Harvey Weinstein.
However, “star systems” might be natural and optimal given how organizations and projects need to be executed. For intuition, there can be only one architect of a building or one CEO of an org.
It’s probably not difficult to build a model where people of very similar ability work together and end up with a CEO model with very unequal incomes. It’s not clear this isn’t optimal or even “unfair”.
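The “not difficult to build a model” claim can be sketched directly. Below is a toy Python simulation, with all parameters being my own illustrative assumptions: 100 people whose abilities differ by only a few percent compete repeatedly, and each round’s reward goes entirely to the highest noisy score. Rewards concentrate far beyond the underlying ability gap.

```python
# Toy winner-take-all model: nearly identical abilities, very unequal rewards.
# All numbers here are illustrative assumptions, not from the post.
import random

random.seed(0)

N, ROUNDS = 100, 1000
ability = [random.gauss(100.0, 1.0) for _ in range(N)]  # ~1% ability spread
rewards = [0.0] * N

for _ in range(ROUNDS):
    # Each round, everyone gets a noisy performance score...
    scores = [a + random.gauss(0.0, 1.0) for a in ability]
    # ...and the single highest scorer takes the entire reward.
    rewards[scores.index(max(scores))] += 1.0

top_share = max(rewards) / sum(rewards)          # share won by the biggest winner
spread = (max(ability) - min(ability)) / min(ability)  # relative ability gap
```

Under these assumptions, `spread` stays small (a few percent) while `top_share` is many times the equal share of 1/N, which is the “star system” pattern in miniature.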
So what?
Your paper is a study or measure of performance.
But as suggested almost immediately above, it seems hard (frankly, maybe even harmful) to measure performance if we don’t take into account structures like “star systems”, and probably many other complex factors.
Your intro, well written, is very clear and suggests we care about productivity because 1) it seems like a small number of people are very valuable and 2) suggests this in the most direct and useful sense of how EA orgs should hire.
Honestly, I took a quick scan (it’s 51 pages long! I’m willing to do more if there’s a specific need in the reply). But I know someone who is experienced in empirical economic research, including econometrics, history of thought, causality, and how various studies, methodologies and worldviews end up being adopted by organizations.
It’s hard not to pattern match this to something reductive like “cross-country regressions”, which are basically inadequate (one might say an also-ran or a reductive dead end).
Overall, you are measuring things like finance, number of papers, and equity, and I don’t see you making a comment or nod to the “Star systems” issue, which may be one of several structural concepts that are relevant.
To me, getting into performance/productivity/production functions seems to be a deceptively strong statement.
It would influence cultures and worldviews, and greatly worsen things, if for example, this was an echo-chamber.
Alternative / being constructive?
It’s nice to try to end with something constructive.
I think this is an incredibly important area.
I know someone who built multiple startups and teams. Choosing the right people, from a cofounder to the first 50 hires is absolutely key. Honestly, it’s something akin to dating, for many of the same reasons.
So, well, like my 15 second response is that I would consider approaching this in a different way:
I think if the goal is to help EA orgs, you should study successful and unsuccessful EA orgs and figure out what works. Their individual experience is powerful: start from interviews with successful CEOs and work upward to which lessons are important and effective in 2021 and beyond in the specific area.
If you want to study exotic, super-star beyond-elite people and figure out how to find/foster/create them, you should study exotic, super-star beyond-elite people. Again, this probably involves huge amounts of domain knowledge, getting into the weeds and understanding multiple world-views and theories of change.
Well, I would write more, but it’s not clear there are more than 5 people who will read to this point, so I’ll end now.
Also, here’s a picture of a cat:
This is so well written, so thoughtful and so well structured.
BE VERY CAREFUL NOT TO GET SUCKED INTO HORRIBLE PUBLISHING INCENTIVES.
This theme or motif has come up a few times. It seems important but maybe this particular point is not 100% clear to the new PhD audience you are aiming for.
For clarity, do you mean:
On an operational or “gears-level”, avoid activity due to (maybe distorted) publication incentives? E.g. do not pursue trends, fads or undue authority, or perform busy work that produces publications. Maybe because these produce bad habits, infantilization, distractions.
or
Do not pursue publications because this tends to put you down a R1 research track in some undue way, perhaps because it’s following the path of least resistance.
Also, note that “publications” can be so different between disciplines.
A top publication in economics during a PhD is rare, but would basically be worth $1M in net present value over their career. It’s probably totally optimal to tag such a publication, even in business, because of the signaling value.
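The ~$1M figure can be checked with a hedged back-of-envelope annuity calculation. The salary premium, horizon, and discount rate below are my own illustrative assumptions, not numbers from the comment.

```python
# Back-of-envelope NPV of a top publication's career earnings premium.
# Assumed: +$60k/year for 30 years, discounted at 4%/year.
premium, years, rate = 60_000.0, 30, 0.04

# Present value of a level annuity: premium * (1 - (1+r)^-n) / r
npv = premium * (1 - (1 + rate) ** -years) / rate
```

Under these assumptions the present value comes out to roughly $1.04M, so the order of magnitude of the claim is at least plausible.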
Note that my academic school is way below you in academic prestige/rank/productivity. It would be interesting to know more about your experiences at MIT and what it offers.
Hey Ramiro,
I’m sorry but I just saw this comment now. My use of the forum can be infrequent.
I think your point is fascinating and your shift in perspective and using history is powerful.
I take your point about this figure and how disruptive (in the normal, typical sense of the word and not SV sense) he was.
I don’t have many deep thoughts. I guess it is true that institutions are more important now, if only because there are 8B people, so single people should have less agency.
I am usually suspicious about stories like this since it’s unclear how institutions and cultures are involved. But I don’t understand the context well (classical period Greece). I guess they had https://en.wikipedia.org/wiki/Ostracism#Purpose for a reason.
Hi,
The identification of EA with a small set of cause areas has many manifestations, but the one I’m mostly worried about is the feeling shared by many in the community that if they work on a cause that is not particularly prioritized by the movement (like feminism) then what they do “is not really EA”, even if they use evidence and reason to find the most impactful avenues to tackle the problem they try to solve….However, this calculus can be somewhat incomplete, as it doesn’t take into account the personal circumstances of the particular biologist debating her career.
I think I strongly agree with this and I expect most EA do too.
My interpretation is that EA as a normative, prescriptive guide for life doesn’t seem right. Indeed, if anything, there’s evidence that EA doesn’t really do a good job, or maybe even substantively neglects this while appearing to do so, in a pernicious way. From a “do no harm” perspective, addressing this is important. This seems like a “communication problem” (which seems historically undervalued in EA and other communities).
From the perspective of the entire EA movement, it might be a better strategy to allocate the few individuals who possess the rare “EA mindset” across a diverse set of causes, rather than stick everyone in the same 3-4 cause areas. Work done by EAs (who explicitly think in terms of impact) could have a multiplying effect on the work and resources that are already allocated to causes. Pioneer EAs who choose such “EA-neglected” causes can make a significant difference, just because an EA-like perspective is rare and needed in those areas, even in causes that are well-established outside of EA (like human rights or nature conservation). For example, they could carry out valuable intra-cause prioritization (as opposed to inter-cause prioritization).
This is a really different thought than your other above and I want to comment more to make sure I understand.
While agreeing with the essence, I think I differ and I want to get at the crux of the difference:
Overall, I think “using data”, cost-effectiveness analysis, measurement and valuation aren’t far from mainstream in major charities. To get a sense of this, I have spoken with (worked with?) leaders in, say, environmental movements, and they specifically “talk the talk”: for example, there are specific grants for “data science”-like infrastructure. However, while nominally trying, many of these charities don’t succeed; the reason is an immense topic beyond the scope of this comment or post.
But the point is that it seems hard to make the methodological or leadership changes that motivate the dissemination you propose.
Note that it seems very likely we would agree and trust any EA who reported that any particular movement or cause area would benefit from better methods.
However, actually effecting change is really difficult.
To be tangible, imagine trying to get the Extinction Rebellion to use measurement and surveys to regularly interrogate their theory of change.
For another example, the leadership and cohesion of many movements can be far lower than they appear. Together with the fact that applying reasoning might foreclose large sections of activity or initiatives, this would make implementation impractical.
While rational, data driven and reasoned approaches are valuable, it’s unclear if EA is the path to improving this, and this is a headwind to your point that EAs should disseminate widely. I guess the counterpoint would be that focus is valuable and this supports focus on cause areas closer to the normal sense that you argue against.
Thanks for the thoughtful reply.
I think we are probably agreed that we should be cautious against prescribing EAs to go to charities or cause areas where the culture doesn’t seem welcoming. Especially given the younger age of many EAs, and lower income and career capital produced by some charities, this could be a very difficult experience or even a trap for some people.
I think I have updated based on your comment. It seems that having not just acceptance but also active discussion or awareness of “non-canonical” cause areas seems useful.
I wonder, to what degree is your post or concerns addressed if new cause areas were substantively explored by EAs to add to the “EA roster”? (even if few cause areas were ultimately “added” as a result, e.g. because they aren’t feasible).
I also expect the average non-American household to be larger than the average American household, not smaller (so there will be <6 B households worldwide).
Indeed, my own qualitative research suggests US households are smaller than average—and maybe even consist mainly of single individuals.
My research involves parsing the details in this video and this video.
This is incredibly good.
I think this “operational” and “implementation” content, filled with on the ground experiences, is critically valuable.
I have a question/suggestion:
Have you considered publishing the “sequence” in successive posts, maybe a few days or a week apart? Due to the algorithm, if people don’t make it to the other parts of the sequence, it gets buried off the front page as it ages. By posting a new section each week, you can get more total eyeballs and attention. This is a little rhetorical/gamey but is commonly used here.
You can iterate and change on content each week, especially in response to feedback—so with the same amount of effort, produce content and connect to people more.
Hey, yo, Mark, It’s me, Charles.
What’s up?
So I’ve read this post and there’s a lot of important thoughts you make here.
Focusing on your takeaways and conclusion, you seem to say that earning to give is bad because buying talent is impractical.
The reasoning is plausible, but I don’t see any evidence for the conclusion you make, and there seems to be direct counterpoints you haven’t addressed.
Here’s what I have to say:
It seems we can immediately evaluate “earning to give” and the purchasing of labor for EA
There’s a very direct way to get a sense if earning to give is effective, and that’s by looking at the projects and funds where earning to give goes, such as in the Open Phil and EA Funds grants database.
Looking at these databases, I think it’s implausible for me, or most other people, to say a large fraction of projects or areas are poorly chosen. This, plus the fact that many of these groups probably can accept more money seems to be an immediate response to your argument.
These funds are particularly apt because they are where a lot of earning to give funds go to.
It seems that following your post directly implies that people who earn to give and have donated haven’t been very effective. This seems implausible, as these people are often highly skilled and almost certainly think of their donations carefully. Also, meta comment or criticism is common.
It seems easy to construct EA projects that benefit from monies and purchasable talent
We know with certainty that millions of Africans will die of malnutrition and lack of basic running water. These causes of death are far greater than, say, COVID deaths. In fact, the secondary effects of COVID are probably more harmful to these people than the virus itself.
The suffering is so stark that projects like simply putting up buckets of water to wash hands would probably alleviate suffering. In addition to saving lives, these projects probably help with demographic transition and other systemic, longer run effects that EAs should like.
Executing these projects would cost pennies per person.
This doesn’t seem like it needs unusual skills that are hard to purchase.
Similarly, I think we could construct many other projects in the EA space that require skills like administrative, logistic, standard computer programming skills, outreach and organizational skills. All of these are available, probably by most people reading this post.
It seems implausible that market forces are ineffective
I am not of the “Chicago school of economics”, but this video vividly explains how money coordinates activity.
While blind interpretations of this idea are stupid, it seems plausible that money can cause effective altruistic activity in the same way that buying a pencil does.
Why wouldn’t we say that everyone in an organization and even the supply chain that provides clean water or malaria nets is doing effective altruism?
I also don’t get this section “Talent is very desirable”:
But in my mind, the idea of earning to give is that we have a pool of money and a pool of ex-Ante valuable EA projects. We take this money and buy labor (EA people or non-EA people) to do these projects.
The fact that this same labor can also earn money in other ways doesn’t create some sort of gridlock, or undermine the concept of buying labor.
So, when I read most of your posts, I feel dumb.
I read your post “The Solomonoff Prior is Malign”. I wish I could also write awesome sentences like “use the Solomonoff prior to construct a utility function that will control the entire future”, but instead, I spent most of my time trying to find a wikipedia page simple enough to explain what those words mean.
Am I missing something here?
What is Mark’s model for talent?
I think one thing that would help clarify things is a clearly articulated model where talent is used in a cause area, and why money fails to purchase this.
You’re interested in AI safety, of like, the 2001 kind. While I am not right now, and not an expert, I can imagine models of this work where the best contributions would supersede slightly worse work, making even skilled people useless.
For these highest tier contributors, making sure that HAL doesn’t close the pod bay doors, perhaps all of your arguments apply. Their talent might be very expensive or require intrinsic motivation that doesn’t respond to money.
Also, maybe what you mean is another class, of an exotic “pathfinder” or leader model. These people are like Peter Singer, Martin Luther King or Stacey Abrams. It’s debatable, but perhaps it may be true these people cannot be predicted and cannot be directly funded.
However, in either of these cases, it seems that special organizations can find ways to motivate, mentor or cultivate these people, or the environments they grow up in. These organizations can be funded with money.