Reflecting on the piece as a whole, I think there are some very legitimate concerns being brought up, and I think that Cremer mostly comes across well (as she has consistently imo, regardless of whether you agree or disagree with her specific ideas for reform) - with the exception of a few potshots laced into the piece[1]. I think it would be a shame if she were to completely disavow engaging with the community, so I hope that where there is disagreement we can be constructive, and where there is agreement we can actually act rather than just talk about it.
Some specific points from the article:
She does not think that longtermism or utilitarianism was the prime driver behind SBF's actions, so please update towards her not hating longtermism. Where she is critical of it, it's because it's difficult to have good epistemic feedback loops for deciding whether our ideas and actions are actually doing the most good (or even just doing good better):
"Futurism gives rationalization air to breathe because it decouples arguments from verification."[2]
Another underlying approach is to be wary of the risks of optimisation, which shouldn't be too controversial? It reminds me of Galef's "Straw Vulcan" - relentlessly optimising towards your current idea of The Good doesn't seem like a plausibly optimal strategy to me (there's a toy sketch of this below, after these points). It also seems very consistent with the "Moral Uncertainty" approach.
"a small error between a measure of that which is good to do and that which is actually good to do suddenly makes a big difference fast if you're encouraged to optimize for the proxy. It's the difference between recklessly sprinting or cautiously stepping in the wrong direction. Going slow is a feature, not a bug."
One main thrust of the piece is her concern with the institutional design of the EA space:
"Institutional designs must shepherd safe collective risk-taking and help navigate decision-making under uncertainty..."
In what direction would she like EA to move? In her own words:
"EA should offer itself as the testing ground for real innovation in institutional decision-making."
We have a whole cause area about that! My prior is that it hasn't had as much sunlight as other EA cause areas though.
There are some fairly upsetting quotes about people who have contacted her because they don't feel like they can voice their doubts openly. I wish we could find a way to remove that fear asap.
"It increasingly looks like a weird ideological cartel where, if you don't agree with the power holders, you're wasting your time trying to get anything done."
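Coming back to the optimisation point above: here's a quick toy simulation of the "optimising a noisy proxy" worry, in the spirit of regressional Goodhart. The setup, function name, and numbers are entirely my own invention (nothing from the article) - it just illustrates that the harder you select on a proxy that measures the true thing with a bit of error, the bigger the gap between what you get and what a perfect measure would have given you.

```python
import numpy as np

rng = np.random.default_rng(0)

def selected_true_value(n_options: int, proxy_noise: float, n_trials: int = 2000) -> float:
    """Average *true* value of the option that scores best on a noisy proxy."""
    true = rng.normal(size=(n_trials, n_options))                   # how good each option really is
    proxy = true + rng.normal(scale=proxy_noise, size=true.shape)   # what we can actually measure
    best_by_proxy = proxy.argmax(axis=1)                            # optimise the proxy as hard as we can
    return true[np.arange(n_trials), best_by_proxy].mean()

for n in (2, 10, 100, 1000):
    achieved = selected_true_value(n, proxy_noise=1.0)
    ideal = selected_true_value(n, proxy_noise=0.0)   # what a perfect measure would have delivered
    print(f"screening {n:4d} options: true value {achieved:.2f} vs {ideal:.2f} with a perfect measure")
```

The value you get still rises as you screen more options, but the shortfall relative to a perfect measure grows too, which is roughly the "small error... makes a big difference fast" dynamic she's pointing at.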
Summary:
On a second reading, there were a few more potshots than I initially remembered, but I suppose this is a Vox article and not an actual set of reform proposals - something more like that can probably be found in the Democratising Risk article itself.
But I genuinely think that there's a lot of value here for us to learn from. And I hope that we can operationalise some ways to improve our own community's institutions so that the EA community at the end of 2023 looks much healthier than the one right now.
In particular, the shot at Cold Takes being "incomprehensible" didn't sit right with me – Holden's blog is a really clear presentation of the idea that misaligned AI can have significant effects on the long-run future, regardless of whether you agree with it or not.
I think this is similar to criticism that Vaden Masrani made of the philosophy underlying longtermism.
Agree that her description of Holden's thing is uncharitable, though she might be describing the fact that he self-describes his vision of the future as "radically unfamiliar… a future galaxy-wide civilization… seem[ing] too 'wild' to take seriously… we live in a wild time, and should be ready for anything… This thesis has a wacky, sci-fi feel."
(Cremer points to this as an example of an "often-incomprehensible fantasy about the future".)
The quality of reasoning in the text seems somewhat troublesome. Using two paragraphs as an example:
On Halloween this past year, I was hanging out with a few EAs. Half in jest, someone declared that the best EA Halloween costume would clearly be a crypto-crash – and everyone laughed wholeheartedly. Most of them didn't know what they were dealing with or what was coming. I often call this epistemic risk: the risk that stems from ignorance and obliviousness, the catastrophe that could have been avoided, the damage that could have been abated, by simply knowing more. Epistemic risks contribute ubiquitously to our lives: We risk missing the bus if we don't know the time, we risk infecting granny if we don't know we carry a virus. Epistemic risk is why we fight coordinated disinformation campaigns and is the reason countries spy on each other.
Still, it is a bit ironic for EAs to have chosen ignorance over due diligence. Here are people who (smugly at times) advocated for precaution and preparedness, who made it their obsession to think about tail risks, and who doggedly try to predict the future with mathematical precision. And yet, here they were, sharing a bed with a gambler against whom it was apparently easy to find allegations of shady conduct. The affiliation was a gamble that ended up putting their beloved brand and philosophy at risk of extinction.
It appears that a chunk of Zoe's epistemic risk bears a striking resemblance to financial risk. For instance, if one simply knew more about tomorrow's stock prices, they could sidestep all stock market losses and potentially become stupendously rich.
This highlights the fact that gaining knowledge in certain domains can be a difficult task, with big hedge funds splashing billions and hiring some of the brightest minds just to gain a slight edge in knowing a bit more about asset prices. The same extends to having more info about which companies may go belly up or engage in fraud.
Acquiring more knowledge comes at a cost. Processing knowledge comes at a cost. Choosing ignorance is mostly not a result of recklessness or EA institutional design but a practical choice, given the resources required to process information. It's actually rational for everyone to ignore most information most of the time (this is standard econ; see rational inattention and the extensive literature on the topic).
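To make that concrete, here's a deliberately crude value-of-information sketch. The function name and every number are invented for illustration (they are not claims about FTX or any real grantee): you investigate a counterparty only if the expected avoided loss exceeds what the investigation costs, and that calculus looks very different for a small grant recipient than for an investor with billions at stake.

```python
def diligence_worth_it(p_fraud: float, loss_if_fraud: float,
                       detection_rate: float, diligence_cost: float) -> bool:
    """True if the expected loss avoided by investigating exceeds the cost of investigating."""
    expected_avoided_loss = p_fraud * detection_rate * loss_if_fraud
    return expected_avoided_loss > diligence_cost

# Hypothetical small grant recipient: modest exposure, little ability to detect sophisticated fraud.
print(diligence_worth_it(p_fraud=0.05, loss_if_fraud=200_000,
                         detection_rate=0.10, diligence_cost=20_000))      # False

# Hypothetical investor with billions at stake and a professional diligence team.
print(diligence_worth_it(p_fraud=0.05, loss_if_fraud=2_000_000_000,
                         detection_rate=0.30, diligence_cost=5_000_000))   # True
```

On these made-up numbers, rational inattention is exactly what you'd expect from the grantee.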
One real question in this space is whether EAs have allocated their attention wisely. The answer seems to be "mostly yes." In the case of FTX, heavyweights like Temasek, Sequoia Capital, and SoftBank, with billions on the line, did their due diligence but still missed what was happening. Expecting EAs to be better evaluators of FTX's health than established investment funds is somewhat odd. EAs, like everyone else, face the challenge of allocating attention, and their expertise lies in "using money for good" rather than "evaluating the health of big financial institutions". For the typical FTX grant recipient to assume they need to be smarter than Sequoia or SoftBank about FTX would likely not be a sound decision.
Two things:
Sequoia et al. isn't a good benchmark:
(i) those funds were doing diligence in a very hot investing environment where there was a substantial tradeoff between depth of diligence and likelihood of closing the deal. Because EAs largely engaged FTX on the philanthropic side, they didn't face this pressure.
(ii) SBF was inspired and mentored by prominent EAs, and FTX was incubated by EA over the course of many years. So EAs had built relationships with FTX staff much deeper than what funds would have been able to establish over the course of a months-long diligence process.
The entire EA project is premised on the idea that it can do better at figuring things out than legacy institutions.
a. Sequoia led FTX's Series B round in July 2021 and had notably more time to notice any irregularities than grant recipients did.
b. I would expect the funds to have much better expertise in something like "evaluating the financial health of a company".
Also, it seems you are somewhat shifting the goalposts: Zoe's paragraph starts with "On Halloween this past year, I was hanging out with a few EAs." It is reasonable to assume the reader will interpret this as hanging out with basically random/typical EAs, and the argument should hold for these people. Your argument would work better if she had been hanging out with "EAs working at FTX" or "EAs advising SBF", who could probably have done better than the funds at evaluating things like how the specific people involved operate.
The EA project is clearly not premised on the idea that it should, for example, "figure out stuff like stock prices better than legacy institutions". Quite the contrary – the claim is that while humanity invests a decent amount of competent effort in stocks, it comparatively neglects problems like poverty or x-risk.
It seems like we're talking past each other here, in part because as you note we're referring to different EA subpopulations:
Elite EAs who mentored SBF & incubated FTX
Random/typical EAs who Cremer would hang out with at parties
EA grant recipients
I don't really know who knew what when; most of my critical feeling is directed at folks in category (1). Out of everyone we've mentioned here (EA or not), they had the most exposure to and knowledge about (or at least opportunity to learn about) SBF & FTX's operations.
I think we should expect elite EAs to have done better than Sequoia et al. at noticing red flags (e.g. the reports of SBF being shitty at Alameda in 2017; e.g. no ring-fence around money earmarked for the Future Fund) and acting on what they noticed.
I think your comment would've been a lot stronger if you had left it at 1. Your second point seems a bit snarky.
I don't think snark cuts against quality, and we come from a long lineage of it.
Which quality? I really liked the first part of your comment and even weakly upvoted it on both votes for that reason, but I feel like the second point has no substance. (Longtermist EA is about doing things that existing institutions are neglecting, not doing the work of existing institutions better.)
I read Cremer as gesturing in these passages to the point Tyler Cowen made here (a):
Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be.
I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated). When it comes to existential risk, I generally prefer to invest in talent and good institutions, rather than trying to fine-tune predictions about existential risk itself.
If EA is going to do some lesson-taking, I would not want this point to be neglected.
I previously addressed this here.
Thanks. I think Cowen's point is a mix of your (a) & (b).
I think this mixture is concerning and should prompt reflection about some foundational issues.
But this changed fast. In 2019, I was leaked a document circulating at the Centre for Effective Altruism, the central coordinating body of the EA movement. Some people in leadership positions were testing a new measure of value to apply to people: a metric called PELTIV, which stood for "Potential Expected Long-Term Instrumental Value." It was to be used by CEA staff to score attendees of EA conferences, to generate a "database for tracking leads" and identify individuals who were likely to develop high "dedication" to EA – a list that was to be shared across CEA and the career consultancy 80,000 Hours. There were two separate tables, one to assess people who might donate money and one for people who might directly work for EA.
Individuals were to be assessed along dimensions such as "integrity" or "strategic judgment" and "acting on own direction," but also on "being value-aligned," "IQ," and "conscientiousness." Real names, people I knew, were listed as test cases, and attached to them was a dollar sign (with an exchange rate of 13 PELTIV points = 1,000 "pledge equivalents" = 3 million "aligned dollars").
What I saw was clearly a draft. Under a table titled "crappy uncalibrated talent table," someone had tried to assign relative scores to these dimensions. For example, a candidate with a normal IQ of 100 would be subtracted PELTIV points, because points could only be earned above an IQ of 120. Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.
The list showed just how much what it means to be "a good EA" has changed over the years. Early EAs were competing for status by counting the number of mosquito nets they had funded out of their own pocket; later EAs competed on the number of machine learning papers they co-authored at big AI labs.
When I confronted the instigator of PELTIV, I was told the measure was ultimately discarded. Upon my request for transparency and a public apology, he agreed the EA community should be informed about the experiment. They never were. Other metrics such as "highly engaged EA" appear to have taken its place.
Some questions for CEA:
On CEA gathering information from EA conference attendees:
Can someone from CEA clarify what information, if any, is currently being gathered on EA members,
which of these, if any, is being used for assessing individuals,
for what purpose (e.g. for EAGs, other CEA opportunities, "identifying individuals who were likely to develop high dedication to EA"), and
which organizations these are shared with, if relevant?
Given that CEA's most recent leadership change was in 2019, the same year this document was reportedly circulating, can someone from CEA clarify the timing of this measure of value (i.e. was this under Larissa Hesketh-Rowe or Max Dalton as CEO)?
Can someone from CEA also justify the reasoning behind these two claims in particular, and the extent to which they represent the views of CEA leadership at present?
For example, a candidate with a normal IQ of 100 would be subtracted PELTIV points, because points could only be earned above an IQ of 120.
Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.
I'll also note the apparent consistency with the previous case of CEA over-emphasizing longtermism and AI over global health and animal welfare in earlier versions of the handbook, despite claiming not to take an organizational stance on any specific cause areas.
Relevant info: this is essentially a CRM (Customer Relationship Management) database, which is very commonly used by companies and non-profits. Your name is likely in hundreds of different CRM databases.
Let's imagine, for example, my interaction with Greenpeace. I signed a petition for Greenpeace when I was a teenager, which put my phone number, email, and name into a Greenpeace CRM. Greenpeace then might have some partners who match names and email addresses with age and earning potential. They categorise me as a student, with low earning potential but with potential to give later, so they flag me for a yearly call to try to get me to sign up to be a member. If I was flagged as a particularly large earner, I imagine more research would have been done on me, and I would receive more intensive contact from Greenpeace.
CRMs are by design pretty "creepy": for example, if you use HubSpot for newsletters, it shows de-anonymised data on who viewed what, and for how long. I imagine CRMs that have access to browser cookies are 100x more "creepy" than this.
I'm not well-versed on how CRMs work, so this is useful information, thanks. Though my guess is that CRMs probably don't typically include assessments of IQ?
I am still interested in the answers to the above questions though, and potentially other follow-up Qs, like how CEA staff were planning on actually measuring EAG participants or members on these axes, the justifications behind the inputs in the draft, and what the proposed ideas may reflect in terms of the values and views held by CEA leadership.
Why is including an assessment of IQ morally bad to track potential future hires? Or do you think it's just a useless thing to estimate?
I'm not claiming measuring IQ is morally bad (I don't think I've made any moral claims in this comment thread?), but based just on "It was to be used by CEA staff to score attendees of EA conferences", I think there is a range of possible executions that could make me think anything from "this is a ridiculous thing to even consider trying, how on earth is this going to be reliable" to "this might plausibly be net positive", and it's hard to know what was actually going on just by reading the Vox article.
Would you be happy if a CEA staff member had a quick chat with you at EAG, wrote down "IQ 100" based on that conversation in an Excel sheet, and this cost you opportunities in the EA space as a result?
Yes. I'm in EA to give money/opportunities, not to get money/opportunities.
Edit: I do think some people (in and outside of EA) overvalue quick chats when hiring, and I'm happy that in EA everyone uses extensive work trials instead of those.
I'm glad that this will not affect you in this case, but folks interested in the EA space because it provides an avenue for a more impactful career may disagree, and for a movement that is at least partly about using evidence and reason to create more positive impact, I'd be surprised if people genuinely believed that the operationalization described above is a good reflection of those ideals.
Yeah, I think measuring IQ is a stupid idea, but suppose you were to do it anyway – surely you'd want to measure IQ through an actual test and not just through guessing, right?
The fact that the PELTIV score involves measurement of IQ without using a proper test, combined with the separate Bostrom controversy revealing that at least some EAs take questions of "race and IQ" very seriously, makes me very deeply concerned that there may have been racial discrimination in some EA decision-making at some point.
I second the call for more information about this particular issue.
Just to add to this, I've had conversations about expanding EA around the world where it felt like the (negative) response had elitist/racist undertones.