Objections to Value-Alignment between Effective Altruists

With this post I want to encourage an examination of value-alignment between members of the EA community. I lay out reasons to believe that strong value-alignment between EAs can be harmful in the long run.

The EA mission is to bring more value into the world. This is a rather uncertain endeavour and many questions about the nature of value remain unanswered. Errors are thus unavoidable, which means the success of EA depends on having good feedback mechanisms in place to ensure mistakes can be noticed and learned from. Strong value-alignment can weaken feedback mechanisms.

EAs prefer to work with people who are value-aligned because they set out to maximise impact per resource expended. It is efficient to work with people who agree. But a value-aligned group is likely to be intellectually homogenous and prone to breed implicit assumptions or blind spots.

I have also noticed particular tendencies in the EA community (elaborated in the section on homogeneity, hierarchy and intelligence) which generate additional cultural pressures towards value-alignment, make the problem worse over time and lead to a gradual deterioration of the corrigibility mechanisms around EA.

Intellectual homogeneity is efficient in the short term but counter-productive in the long run. Value-alignment allows for short-term efficiency, but the true goal of EA, to be effective in producing value in the long term, might not be met.

Disclaimer

All of this is based on my experience of EA over the timeframe 2015-2020. Experiences differ and I share this to test how generalisable my experiences are. I used to hold my views lightly and I still give credence to other views on developments in EA. But I am getting more, not less, worried over time, particularly because other members have expressed similar views and worries to me but have not spoken out about them for fear of losing respect or funding. This is precisely the erosion of critical feedback mechanisms that I point out here. I have a solid but not unshakable belief that the theoretical mechanism I outline is correct, but I do not know to what extent it takes effect in EA. Nor am I sure whether those who disagree with me will know to what extent this mechanism is at work in their own community. What I am sure of, however (on the basis of feedback from people who read this post before publication), is that my impressions of EA are shared by others within the community, and that they are the reason why some have left EA or never quite dared to enter. This alone is reason for me to share this, in the hope that a healthy approach to critique and a willingness to change in response to feedback from the external world are still intact.

I recommend that the impatient reader skip forward to the section on Feedback Loops and Consequences.

Outline

I will outline the reasons that lead EAs to prefer value-alignment and search for a definition of value-alignment. I then describe cultural traits of the community which play a role in amplifying this preference, and finally evaluate what effect value-alignment might have on EA's feedback loops and goals.


Axiomaticity

Movements make explicit and obscure assumptions. The explicit assumptions are what they stand for and the purpose for which they exist. An explicit assumption is, by my definition, one that has been examined and consciously agreed upon.

EA explicitly assumes that one should maximise the expected value of one's actions with respect to a goal. Goals differ between members but mostly do not diverge greatly: they may be the reduction of suffering, the maximisation of hedons in the universe, the fulfilment of personal preferences, or others. But irrespective of individual goals, EAs mostly agree that resources should be spent effectively and thus efficiently. Having more resources available is better because more can be done. Ideally, every dollar spent should maximise its impact; one should help as many moral patients as possible and thus do the most good that one can.

Obscure assumptions, in contrast, are less obvious and more detrimental. I divide obscure assumptions into two types: mute and implicit. A mute assumption is one which its believer does not recognise as an assumption. They are not aware they hold it and thus do not see alternatives. Mute assumptions are not questioned, discussed or interrogated. An implicit assumption is, by my definition here, one which the believer knows to be one of several alternatives, but which they believe without proper examination anyhow. Communities host mute and implicit assumptions in addition to explicit, agreed-upon assumptions. I sometimes think of these as parasitic assumptions: they are carried along without being chosen and can harm the host. Communities can grow on explicit assumptions, but obscure assumptions deteriorate a group's immunity to blind spots and biases. Parasitic assumptions feed off and nurture biases which can eventually lead to wrong decisions.

The specific implicit assumptions of EA are debatable and should be examined in another post. To point out examples, I think there are good reasons to believe that many members share assumptions around, for example, transhumanism, neoliberalism, the value of technological progress, techno-fixes, the supreme usefulness of intelligence, IR realism or an elite-focused theory of change.

From Axioms to Value-Alignment

The explicit axiom of maximising value per resource spent turns internal value-alignment into an instrumental means to an end for EAs. A value-aligned group works frictionlessly and smoothly. They keep discussion about methodology, skills, approach, resources, norms, etc. to a minimum. A group that works well together is efficient and effective in reaching its next near-term goal. Getting along in a team originates in liking each other, and we humans tend to like people who are like us.

The Meaning of Value-Alignment

A definition of internal value-alignment is hard to come by, despite the frequent use of the term to describe people, organisations or future generations. There appears to be some generally accepted notion of what it means for someone to be value-aligned, but I have not found a concise public description.

Value-alignment is mentioned in discussions and publications by central EA organisations. CEA published this article where they state that ‘it becomes more important to build a community of talented and value-aligned people, who are willing to flexibly shift priorities to the most high value causes. In other words, growing and shaping the effective altruism community into its best possible version is especially useful’. CEA speaks of a risk of ‘dilution’, which they define as: ‘An overly simplistic version of EA, or a less aligned group of individuals, could come to dominate the community, limiting our ability to focus on the most important problems’. Nick Beckstead’s post from 2017 refers to value-aligned persons and organisations repeatedly. He says CEA leadership is value-aligned with OpenPhil in terms of helping EA grow, and other funders of CEA appear ‘fairly value-aligned’ with them. This document by MacAskill at GPI loosely refers to value-alignment as: ‘where the donors in question do not just support your preferred cause, but also share your values and general worldview’. This article again calls for getting value-aligned people into EA, but it too lacks a definition (“…promoting EA to those who are value aligned. We should be weary of promoting the EA movement to those who are not value aligned, due to the downside of flooding the EA movement with non value-aligned people”). Being value-aligned also appears not to be fixed: “it is likely that if people are persuaded to act more like EA members, they will shift their values to grow more value aligned”.

According to the public writing I found, value-alignment could mean any of the following: supporting and spreading EA, having shared worldviews, focussing on the most important problems or doing the most high-value thing. Importantly, not being value-aligned is seen as having downsides: it can dilute, simplify or lead to wrong prioritisation.

It is probably in the interest of EA to have a more concise definition of value-alignment. It must be hard to evaluate how well EAs are aligned if a measure is lacking. Open questions remain: on what topics and to what extent should members agree in order to be considered ‘aligned’? What fundamental values should one be aligned on? Is there a difference between being aligned and agreeing to particular values? To what degree must members agree? Does one agree to axioms, a way of living, a particular style of thinking? What must one be like to be one of the people ‘who gets it’?

I get the impression that value-alignment means agreeing on a fundamental level. It means agreeing with the most broadly accepted values, methodologies, axioms, diet, donation schemes, memes and prioritisations of EA. The specific combination of adopted norms may differ from member to member, but if the number of adopted norms is sufficiently above an arbitrary threshold, then one is considered value-aligned. These basic values include an appreciation of critical thinking, which is why those who question and critique EA can still be considered value-aligned. Marginal disagreement is welcome. Central disagreements, however, can signal misalignment and are more easily considered inefficient: they dilute the efficacy and potential of the movement. There is sense in this view. Imagine having to repeatedly debate the efficacy of the scientific method with community members; it would be hard to get much done. Imagine in turn working with your housemates on a topic that everyone is interested in, cares about and has relevant skills for. Further efficiency gains can be made if the team shares norms around eating habits and the use of jargon and software, attends the same summer camps and reads the same books. Value-alignment is rightly appreciated for its effect on output per unit of work. But for these reasons I also expect value-alignment to be highly correlated with intellectual and cognitive homogeneity.

A Model of Amplifiers – Homogeneity, Hierarchy and Intelligence

The axioms of EA generate the initial preference for value-alignment. But additional, somewhat contingent cultural traits of EA amplify this pressure towards alignment over time. These traits are homogeneity, hierarchy and intelligence. I will explain each trait and try to show how it fosters the preference for value-alignment.

Homogeneity

EA is notably homogenous by traditional measures of diversity. Traditional homogeneity is not the same as cognitive homogeneity (which is why I treat them separately), but the first is probably indicative of the second. For the purpose of this article I am only interested in cognitive diversity in EA, but there is little data on it. Advocates of traditional diversity metrics such as race, gender and class argue for them precisely because they track different ways of thinking. The data on diversity in EA suggests that decision-makers in EA do not see much value in prioritising diversification, since diversity remains consistently low.

Founding members of EA have similar philosophical viewpoints and educational backgrounds, and the same gender and ethnicity. That is neither surprising nor negative, but it is informative, since recent surveys (2014, 2015, 2017, 2018) show homogeneity in most traditional measures of diversity, including gender (male), ethnicity (white), age (young), education (high and similar degrees), location (Bay Area and London) and religion (non-religious). Survey results remain stable over time and it seems that current members are fairly similar to EA’s founders with respect to these traits.

EAs could however have different worldviews and thus retain cognitive diversity, despite looking similar. In my own experience this is mostly not the case, but I cannot speak for others and without more data, I cannot know how common my experience is. This section is thus by no means attempting to provide conclusive evidence for homogeneity—but I do hope to encourage future investigations and specifically, more data collection.

Surveys show that EAs have similar views on normative ethics. This is interesting, because EA’s axioms can be arrived at from many ethical viewpoints and because philosophers hold comparatively distributed opinions (see meta-ethics and normative ethics in the PhilPapers survey). 25% of philosophers lean towards deontology, 18% towards virtue ethics and 23% towards consequentialism. EAs, in contrast, give only 2-4% to deontology (2% in 2014, 3% in 2015 and 2019, 4% in 2017), 5-7% to virtue ethics (5% in 2014, 2015 and 2017, 7% in 2019) and 64-81% to consequentialism (69% in 2014 and 2015, 64% in 2017, 81% in 2019). This is a heavy leaning towards consequentialism in comparison to another subgroup of humans who arguably spend more time thinking about the issue. One explanation is that consequentialism is correct and EAs are more accurate than surveyed philosophers. The other explanation is that something else (such as pre-selection, selected readings or groupthink) leads EAs to converge comparatively strongly.

In my own experience, EAs have strikingly similar political and philosophical views, similar media consumption and similar leisure interests. My mind conjures the image of a stereotypical EA shockingly easily. The EA range of behaviours and views is narrower than the range found in a group of students or in one nation. The stereotypical EA will use specific words and phrases, wear particular clothes, read particular sources and blogs, know of particular academics and even use particular mannerisms. I found that a stereotyped description of the average EA does a better job of describing the individuals I meet than I would expect it to if the group were less homogenous.

The EA community is of course still small in comparison to a nation, so naturally its range of behaviours will be smaller. This narrow range is only significant because EA hopes to act on behalf of and in the interest of humanity as a whole. Humanity happens to be a lot more diverse.

That being said, it is simply insufficient to evaluate the level of cognitive homogeneity in EA on the basis of sparse data and my own experience. It would be beneficial to have more data on degrees of intellectual homogeneity across different domains.

Hierarchy

EA is hierarchically organised via central institutions. These institutions distribute funds, coordinate local groups, outline research agendas, prioritise cause areas and give donation advice. They include the Centre for Effective Altruism, the Open Philanthropy Project, the Future of Humanity Institute, the Future of Life Institute, Giving What We Can, 80,000 Hours, the Effective Altruism Foundation and others. Landing a job at these institutions comes with a gain in reputation.

EA members are often advised to donate to central EA organisations or to a meta-fund, which then redistributes money to projects that adhere to and foster EA principles. Every year, representative members from central organisations gather in what is called a ‘leaders forum’ to cultivate collaboration and coordination. The forums are selective and not open to everyone, and reports about the forums or the decisions taken there are sparse.

Individuals who work at these institutions go through a selection process which selects for EA values. Individuals sometimes move between jobs at EA institutions, first being a recipient of funding, then distributing funds to EA organisations and EA members. I am not aware of data on job traffic within EA, but it would be useful both for understanding the situation and for spotting conflicts of interest. Naturally, EA organisations will tend towards intellectual homogeneity if the same people move between institutions.

Intelligence

Below I outline three significant cultural norms in EA that relate to intelligence. The first is a glorification of intelligence. The second is a susceptibility to being impressed and intimidated by individuals with perceived high intelligence, and thus to forming a fan base around them. The third is a sense of intellectual superiority over others, which can lead to epistemic insularity.

I do not expect all readers to share all my impressions, and evidence for cultural traits will always be sparse and inconclusive. But if some of the below is true, then other EAs will have noticed these cultural trends as well, and they can let this article be a nudge to give voice to their own observations.

The Conspicuous Roles of Intelligence in EA

Intelligence, as a concept and an asset, plays a dominant role in EA. Problems of any kind are thought solvable given enough intelligence: solve intelligence, and you solve everything else. Many expect that superintelligence can end all suffering, because EAs assume all suffering stems from unsolved problems. Working on artificial general intelligence is thus a top priority.

Intelligence is also a highly valued trait in the community. Surveys sometimes ask members for their IQ. I seem to have noticed that the reputation of an EA correlates strongly with their perceived intelligence. Jobs which are considered highly impactful tend to be associated with a high reputation and the prerequisite of possessing high intelligence. It is preferred that such jobs, for example in technical AI safety or at Open Philanthropy, be given to highly intelligent members. When members discuss talent acquisition or whom EA should appeal to, they refer to talented, quick or switched-on thinkers. EAs also compliment and kindly introduce others using descriptors like intelligent or smart more than people outside EA do.

The Level Above [i]

Some members, most of whom work at coordinating institutions, are widely known and revered for their intellect. They are said to be intimidatingly intelligent and therefore epistemically superior. Their time is seen as particularly precious. EAs sometimes showcase their humility by announcing how far below the revered leaders they would rank their own intelligence. There is, however, no record of the actual IQ of these people that I know of.

Most of my impressions come from conversations with EA members, but there is some explicit evidence for EA fandom culture (see the footnotes for some pointers).[ii] A non-exhaustive subset of admired individuals, I believe, includes: E. Yudkowsky, P. Christiano, S. Alexander, N. Bostrom, W. MacAskill, B. Todd, H. Karnofsky, N. Beckstead, R. Hanson, O. Cotton-Barratt, E. Drexler, A. Critch, … As far as I perceive it, all revered individuals are male.

The allocation of reputation and resources is influenced by leaders, even beyond their direct power over funding and talent at central institutions. For example, “direct action”, colloquially equated with working at EA organisations (which is what the leaders do), has a better reputation than “earning to give”. Leaders also work on or prioritise AI safety, the cause area which I believe has been allocated the highest reputation: it is considered the hardest problem to work on and thus to require the highest-IQ individuals. The power over reputation allocation is soft power, but power nonetheless.

EAs trust these leaders and explicitly defer to them because the leaders are perceived as having spent more time thinking about prioritisation and as being smarter. It is considered epistemically humble to adjust one’s views towards the views of someone wiser. This trust also allows leaders to keep important information secret, with the justification that it is an information hazard.

Epistemic Insularity

EAs commonly place more trust in other EAs than in non-EAs. Members are seen as epistemic peers and thereby as the default reference class. Trust is granted to EAs by virtue of being EA, and because they likely share principles of inquiry and above-average intelligence. The trust in revered EA leaders is higher than the trust in average EAs, but even the trust in average EAs is higher than the trust in average academic experts.

These trust distributions allow EAs to sometimes dismiss external critics without a thorough investigation. Deep investigations depend on someone internal, and possibly influential, finding the critique plausible enough to bid for resources to be allocated to it. Homogeneity can reduce the number of people who see other views as plausible and can lead to insulation from external corrections.

A sense of rational and intellectual superiority over other communities can strengthen this insulation. It justifies preferring internal opinions over external ones, even when the internal opinions are not verified by expertise or checked against evidence. Extreme viewpoints can propagate in EA because intellectual superiority acts as a protective shield: differences with external or common-sense views can be attributed to EAs being smarter and more rational. Thus the initial sense of scepticism that inoculates many against extremism is dispelled; it seems increasingly unlikely that so many people who are considered intelligent could be wrong. A vigilant EA forecast will include extreme predictions if they come from another EA, because it is considered vigilant to give some credence to all views within one’s epistemic peer group.

Feedback Loops

I see what I describe here as observed tendencies, not consistent phenomena. What I describe happens sometimes, but I do not know how often. The model does not depend on bad actors or bad intentions; it just needs humans.

Leaders at central organisations have more influence over how the community develops. They select priorities, distribute funds and select applicants. The top of the hierarchy is likely homogenous because leaders move between organisations and were homogenous to begin with, and it becomes more homogenous still as they fund people who are value-aligned and think like them. Those who are value-aligned agree on the value of intelligence and see no problem with a culture in which intelligence marks your value, in which high-IQ individuals are trusted and in which intellectual superiority over others is sometimes assumed.
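
To make this selection dynamic concrete, here is a deliberately crude toy simulation of my own. It is only an illustration of the mechanism I describe, not a claim about actual EA data: views are reduced to a single number, and each round the funders admit the candidates whose views sit closest to the group's current average, i.e. the "most value-aligned" ones. The printed spread of views shrinks round after round.

```python
# Toy sketch (illustrative only): selecting for closeness to the existing
# group's average view narrows the group's spread of views over time.
import random
import statistics

random.seed(0)

group = [random.gauss(0, 1) for _ in range(20)]   # initial members' views

for round_ in range(10):
    candidates = [random.gauss(0, 1) for _ in range(50)]
    centre = statistics.mean(group)
    # admit the five candidates most "aligned" with the current group
    admitted = sorted(candidates, key=lambda v: abs(v - centre))[:5]
    group.extend(admitted)
    print(f"round {round_:2d}: spread of views = {statistics.stdev(group):.3f}")
```

A realistic version would of course involve many dimensions and other selection pressures; the point is only that repeatedly selecting for similarity to the existing group makes the group more homogenous.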

Cultural norms around intelligence keep diversification at bay. A leader’s position is assumed to be justified by his intelligence, and an apprehension about appearing dim heightens the barrier to voicing fundamental criticism. If one is puzzled by a leader’s choice, it may be either because one disagrees with the choice or because one does not understand it; voicing criticism thus potentially signals one’s lack of understanding or insight. It is welcome to show one’s capacity for critical thinking; in fact, shallow disagreements are taken as evidence of good feedback mechanisms and boost everyone’s confidence in epistemic self-sufficiency. It is harder to communicate deep criticism of commonly held beliefs. One runs the risk of being dismissed as slow, as someone who ‘doesn’t get it’, or as an outsider. High barriers retain the hierarchy and shield drastic internal views.

As it becomes more evident what type of person is considered value-aligned, a natural self-selection will take place. Those who seek a strong community identity, who think the same thoughts, like the same blogs, enjoy the same hobbies… will identify more strongly; they will blend in, reinforce norms and apply for jobs. Norms, clothing, jargon, diets and a lifestyle will naturally emerge to turn the group into a recognisable community. It will appear to outsiders that there is one way to be EA and that not everyone fits in. Those who feel ill-fitted will leave or never join.

The group is considered epistemically trustworthy, with above-average IQ and training in rationality. This, for many, justifies the view that EAs can often be epistemically superior to experts. A sense of intellectual superiority allows EAs to dismiss critics or to engage with them only selectively. A homogenous and time-demanding media diet, composed of EA blogs, forum posts and long podcasts, reduces contact hours with other worldviews. When in doubt, deference to others inside EA is considered humble, rational and wise.

Consequences

Most of the structural and cultural characteristics I describe are common (hierarchy, homogeneity, fandom culture) and often positive (deference to others, trusting peers, working well in a team). But in combination with each other and with the gigantic ambition of EA to act on behalf of all moral patients, they likely lead to net negative outcomes. Hierarchies can be effective at getting things done, homogeneity makes things easy, developing cultural norms has always been our human advantage, and deferring to authority is not uncommon. But if the ambition is great, the intellectual standards must match it.

EA relies on feedback to stay on course towards its goal. Value-alignment fosters cognitive homogeneity, resulting in an increasing accumulation of people who accept epistemic insularity, intellectual superiority and an unverified hierarchy. Leaders at the top of the hierarchy rarely receive internal criticism, and they continue to select grant recipients and applicants according to an increasingly narrow definition of value-alignment. A sense of intellectual superiority insulates the group from external critics. This deteriorates the necessary feedback mechanisms and makes it likely that Effective Altruism will, in the long term, make uncorrected mistakes, be ineffective and perhaps not even be altruistic.

EA wants to use evidence and reason to navigate towards the Good. But its ambition stretches beyond finding cost-effective health interventions. EA wants to identify that which is good and then do the most good possible.

Value-alignment is a convergence towards agreement, and I would argue it has come too early. Humanity lacks clarity on the nature of the Good, on what constitutes a mature civilisation and on how to use technology. In contrast, EA appears to have suspiciously concrete answers. EA is not your average activist group in the marketplace of ideas about how to live. It has announced far greater ambitions: to research humanity’s future, to reduce sentient suffering and to navigate towards a stable world under an AI singleton. It can no longer claim to be one advocate amongst many. If EA sees itself as acting on behalf of humanity, it cannot settle on an answer by itself. It must answer to humanity.

EA members gesture at moral uncertainty as if all worldviews were considered equal under their watch, but in fact the survey data reveals cognitive homogeneity. Homogeneity churns out blind spots. Greg put it crisply in his post on epistemic humility: ‘we all fall manifestly short of an ideal observer. Yet we all fall short in different aspects.’ If I understand the goal of EA correctly, it is this ideal observer that EA theory is in desperate need of. But alas, no single human can adopt the point of view of the universe. Intuitions, feedback mechanisms and many perspectives are our best shot at approximating the bird’s-eye view.

Our blind spots breed implicit and mute assumptions. Genuine alternative assumptions and critiques are overlooked, and mute assumptions remain under cover. A conclusion is quickly converged upon because no one in the group thought the alternative plausible enough to warrant a proper investigation. EA of course encourages minor disputes, but small differences do not suffice. One member must disagree vehemently enough to call for a thorough examination, and this member and their view must be taken seriously, not merely tolerated. Only then can assumptions be recognised as assumptions.

To stay on course amidst the uncertainty which suffuses the big questions, EA needs to vigilantly protect its corrective feedback mechanisms. Value-alignment, the glorification of intelligence and epistemic insularity drive a gradual destruction of feedback mechanisms.

Here are some concrete observations that I was unhappy with:

EAs give high credence to non-expert investigations written by their peers; they rarely publish in peer-reviewed journals and are becoming increasingly dismissive of academia; they show an increasingly certain and judgmental stance towards projects they deem ineffective; they defer to EA leaders as epistemic superiors without verifying the leaders’ epistemic superiority; they trust that secret Google documents circulated between leaders contain the information that justifies EA’s priorities and talent allocation; they let central institutions recommend where to donate and follow advice to donate to central EA organisations; they let individuals move from a donating institution to a recipient institution and vice versa; they strategically channel EAs into the US government; and they adjust probability assessments of extreme events to include extreme predictions because those were the predictions of other members…

EA might have fallen into the trap of confusing effectiveness with efficiency. Value-alignment might reduce friction, add speed and help reach intermediate goals that seem, at the time, like stepping stones towards a better world. But to navigate humanity towards a stable, suffering-free existence, EA must answer some of the biggest philosophical and scientific questions. The path towards that goal is unknown. It will likely take time, mistakes and conflict; this quest is not amenable to easy wins. I struggle to see the humility in the willingness with which EAs rely on a homogenous subset of humanity to answer these questions. Without corrective mechanisms they will miss their target of effectively creating an ethical existence for our species.

Propositions

I have tried to express my concern that EA will miss its ambitious goal by working with only an insular subset of the people it is trying to save. I propose one research question and one new norm to address and investigate this concern.

First, I would encourage EAs to define what they mean by value-alignment and to evaluate the level of value-alignment that is genuinely useful. I have described what happens when the community is too value-aligned, but greater heterogeneity can of course render a group dysfunctional. It remains to be empirically analysed how value-aligned the community really is and how value-aligned it should be. This data, paired with a theoretical examination of how much diversity is useful, could confirm or refute whether my worries are justified. I would of course not have written this article if I were under the impression that EA occupies the sweet spot between homogeneity and heterogeneity. If others have similar impressions, it might be worth trying to identify that sweet spot.

Second, I wish EA would more visibly respect the uncertainty it deals in. Indeed, some EAs are exemplary: they wear uncertainty like a badge of honour, believing that as long as ethical uncertainty persists, the goal they optimise towards remains open to debate. It is an unsettling state of mind, and it is admirable to live in recognition of uncertainty. For them, EA is a quest, an attempt to approach the big questions of valuable futures, existential risk and the good life, rather than the implementation of an answer.

I wish this were the norm. I wish all would enjoy and commit to the search instead of pledging allegiance to preliminary answers. Could it be the norm to assume that EA’s goal has not yet been found? EAs could take pride in identifying sensible questions, in a humble aspiration to make small contributions to progress and in the endurance to wait for answers. I wish it were common knowledge that EA has not found solutions yet. This does not mean that EAs have been idle. It is a recognition that improving the world is hard.

I do not propose a change to EA’s basic premise. Instead of optimising towards a particular objective, EA could maximise the chance of identifying that objective. With no solutions yet at hand, EA can cease to prioritise efficiency and strong internal value-alignment. Alignment will not be conducive to maximising the chances of stumbling upon a solution in such a vast search space. It thus becomes possible to take time to engage with the opposition, to dive into other worldviews, to listen to deep critics, to mingle with slow academia and to admit that contrasting belief systems and methods could turn out to be useful from the perspective of unknown future values.

There is no need to advertise EA as having found solutions, not if one wants to attract individuals who are at ease with the real uncertainty that we face. I believe it is people like that who have the best chance of succeeding in the EA quest.

For feedback, feel free to email me privately or, of course, to comment.


[i] Yudkowsky often mentions his intelligence, for example in the article ‘The Level Above Mine’. He has written an autobiography named ‘Yudkowsky’s Coming of Age’. Members write articles about him in apparent awe and possibly jest (“The game of “Go” was abbreviated from ‘Go Home, For You Cannot Defeat Eliezer Yudkowsky’”, “Inside Eliezer Yudkowsky’s pineal gland is not an immortal soul, but another brain.”). His intelligence is a common meme among members.

[ii] Some queries in MacAskill’s Q&A show reverence here (“I’m a longtime fan of all of your work, and of you personally. I just got your book and can’t wait to read it.”, “You seem to have accomplished quite a lot for a young person (I think I read 28?). Were you always interested in doing the most good? At what age did you fully commit to that idea?”).