Thank you! I think quantitative approaches should be given greater attention.
1) Are you interested in increasing the diversity of the longtermist community? If so, along what lines?
One possibility is to increase the shares of minorities according to US Census Bureau categories: race, sex, age, education, income, etc. Ways of thinking about EA, one’s (static or dynamic) comparative advantages, or roles taken in a team by nature or nurture would be irrelevant. The advantage of this diversification is its (type 1 thinking) acceptance/endorsement in some decisionmaking environments in EA, such as the Bay Area or London. The disadvantage is that diversity of perspectives may not necessarily be gained (for example, students of different races, sexes, and parents’ incomes studying at the same school may think alike).
Another possibility is to focus on ways of thinking about EA, one’s current comparative advantage and the one they can uniquely develop, and roles that they currently or prospectively enjoy. In this case, Census-type demographics would be disregarded. The disadvantage is that diversity might not be apparent (for example, affluent white people, predominantly male, who think in very different ways about the long-term future and work well together could constitute the majority of community members). The advantage is that things would get done and different perspectives considered.
These two options can be combined in narrative-actual or actual-narrative ways: Census-type diversity could be an instrument for diversity of thinking/action/roles, while only the former is narrated publicly. Or, vice versa, people of various ways of thinking/comparative advantages/preferred roles would be attracted in order to increase Census-type fractions. Is either necessary, or a great way to mitigate the risk of reputational loss? Do you have a strategy for longtermist community growth that you can share?
2) Is it possible to apply for a grant without collaborators but with relevant experience or a strategy for finding them?
For example, can one apply if they have previously advertised and interviewed others for a similar EA-related opportunity but have not initiated an advertisement process for this application?
Do you award grants or vary their amount conditional on others’ interest? For example, is it possible to apply for a range depending on a collaborator’s compensation preference or experience? Is it possible to forgo a grant if no qualified candidate is interested?
This is so cool. I had a similar idea about an ethical game a while ago! The idea was that:
The objective is to improve decisionmakers’ ethics
More points are gained for impact-maximization decisions in places and at times of large important meetings
The game settings/new developments are unrelated to the actual meetings but inspire thinking along similar lines[1]
At places and times without large, important meetings, on the other hand, points are gained for more deontological and active-listening-based decisions; the greater the diversity of places of engagement, the better (a scoring sketch appears below)
This should motivate the consideration of a broader variety of groups, while also confirming that individuals should be nice to others[2]
Traditional social hierarchy shortcuts are played with in the design
For example, a person or entity of any gender can save another entity from a tower/pond/etc., if that task is included in the game
Authority characters exhibit some of the same body language as traditional[3] and non-traditional[4] authorities but can have any identity (traditionally more and less powerful, such as people of any gender, race, and background) and express themselves individually
Body shaming is entirely replaced by spirit- and skill-based judgment, but it is still possible in some cases to confirm one’s biases about body hierarchies
Hierarchies related to territory, objectification of oneself and partners according to commerce, disregard in intimacy, ownership of items expensive due to marketing rather than function, fights that hurt someone, gaining attention by threat, showcasing unapproachability, and other negative standards are not used to motivate players’ progress or present a hierarchy; there is not really a hierarchy, since the game is cooperative
These hierarchies can be used for critical engagement/discourse
The environment and tasks are continuously created, including by the players
Players gain points/perks for suggesting quests and settings that motivate impact-maximization decisionmaking and active listening to a diversity of individuals
The explicit point/perk award criteria include an ethical ‘passing’ standard (relatively easy to get approved by friends, as long as one is friends with at least someone from various teams/groups/experience levels) but are otherwise based on something exclusively game-relevant (such as the number of blocks used)
The developers check on the ethical developments and intervene as necessary
For example, if a new ethical norm that was just accepted starts being overemphasized, as if some groups were making a point of it, an interesting, less ethics-intensive challenge is introduced
If the dark triad traits become prominent among malevolent actors, points are associated with actions that counter the reinforcement of these traits
If anything becomes too repetitive or boring, new possibilities of playing are introduced
Friendships are formed
Players can participate in various teams at the same time. There is no better or worse affiliation; point maximization depends on one’s skills. Players can change affiliations freely, which can be beneficial to their score.
The chat function is engaging and concisely informative, providing the delight of having all info available in a useful format. Sincere reactions can be exhibited (rather than, e.g., stickers or memes that confirm biases or optimize for non-critical engagement)
Players can be recognized at large decisionmaker meetings and outside.
Coding challenges
Make it difficult to trick the GPS
Or not, if there may be a sufficiently small number of sufficiently cool non-decisionmaker players who can inspire the decisionmakers
Feel free to use this for inspiration.
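As a concrete illustration of the location/time-dependent scoring described above, here is a minimal TypeScript sketch. All type names, point values, and the diversity-bonus cap are hypothetical illustrations, not a specification:

```typescript
// Hypothetical sketch of the scoring rules described in the list above.
type DecisionStyle = "impact-maximization" | "deontological" | "active-listening";

interface GameContext {
  nearLargeMeeting: boolean;     // is a large, important meeting at this place and time?
  distinctPlacesEngaged: number; // how many different places the player has engaged in
}

function scoreDecision(style: DecisionStyle, ctx: GameContext): number {
  if (ctx.nearLargeMeeting) {
    // At meeting places/times, impact-maximization decisions earn the most points.
    return style === "impact-maximization" ? 10 : 2;
  }
  // Elsewhere, deontological and active-listening decisions earn more,
  // with a bonus for engaging a greater diversity of places.
  const base = style === "impact-maximization" ? 2 : 10;
  return base + Math.min(ctx.distinctPlacesEngaged, 5); // capped diversity bonus
}
```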
Are you soliciting ideas for the games in any way? For example, will you have Essay Contests or ideation days? There may be high interest from the EA community.
Another question is whether you seek to actually engage the players in alignment or rather to make them comfortable[5] so that you can slip any thinking to them, even if they ‘wanted spaceships and it is animal welfare’?[6]
- ^
For example, to acquire a bounty, pirates have to critically engage parrots while finding a way to make swords when iron is not on the map.
This can be very entertaining to the attendees of the OPEC and non-OPEC Ministerial Meeting, if it seems that everyone is parroting phrases. The absence of a natural resource on the map can be a fun way to attract attention in a kind way and gain friendly understanding of fellow Meeting participants. This is a hypothetical example.
- ^
The way to motivate decisionmakers to engage with non-humans can be through analogous game challenges (this blob flying around you is trying to communicate something—what do you do to understand?) or by marking some places with those who understand non-humans (e.g. neuroscience researchers or sanctuary farmers) as high-point locations for active-listening decisionmaking.
- ^
For example, leaning on a table with one’s fingers or including someone in their seat
- ^
- ^
I am not sure if I am explaining the emotional difference adequately, but this relates to the feeling of 1) from the stomach up, palms going up, the person seeks to engage and is positively stimulated, or 2) slight relaxation in the lower back, hands close, the person seeks to repeat ideas and avoid personal interaction.
- ^
Engaging the players may be necessary; otherwise, problems that need extensive engagement will not get resolved, and efficiency may be much lower compared to when everyone actually tries to solve the overall inclusive alignment and continues to optimize for greater wellbeing, efficiency, and other important objectives.
The example is that a 60-hen cage can be better for chickens than open barns (according to EconTalk), and that is just one aspect of the life of one of the almost 9 million species and many more individuals. If people were to be ‘tricked’ into opening cages, a lot would remain unresolved.
The discussion can be more unified (interpreted as organized, with better-searchable ideas) if comments are in-line and one does not need to search for (the same) quotes and their responses among the comments. One would look in-line for comments relevant to the quotes that they like or seek to discuss or learn further perspectives on, and under the article they would look for general comments. This is similar to how one would comment on a Google Docs draft that someone asked them to proofread.
Possibly, the most-commented-on quotes could be highlighted (‘community highlighting’), by number of comments, their length, or post-part upvotes. Are there any risks of bias confirmation/perpetuation on a first-come basis?
I wonder what searchability (of annotations and linked notes) would be optimal for the Forum. Currently, it seems somewhat difficult to search articles by keyword using the Forum search function, because the recommendation algorithm may disproportionately show specific posts.
Can this be not only comments but also upvotes/downvotes (as you suggest with ‘+1’), questions, and polls relevant to specific parts, quotes, or sections of the post?
One could find it easier to orient oneself in the community’s responses to different parts of the text if one can hover over a highlighted part and see its karma and reactions. The reactions could also be categorized, and users could choose to see only some types of reactions (e.g. not typo fixes, clarification questions, or polls, but yes to complementary or contradictory evidence, challenging questions, and idea advancement).
The community, rather than the author, should select the segments that they wish to comment on. Otherwise, the author could ‘hide’ a contentious conclusion in a generally agreeable block of text. However, this has the disadvantage that someone can be responding to a key word in a sentence and another person to the entire sentence. Then, comments that could be consolidated would be split, which would reduce text-orientation efficiency.
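A minimal sketch of what such a span-anchored in-line comment could carry, assuming comments are anchored to reader-selected character spans; all field names here are assumptions, not the Forum’s actual data model:

```typescript
// Hypothetical sketch of an in-line comment anchored to a reader-selected span.
interface InlineComment {
  postId: string;
  anchorStart: number; // character offset where the selected span begins
  anchorEnd: number;   // character offset where the span ends
  quotedText: string;  // snapshot of the selected text, to re-anchor after post edits
  kind: "comment" | "typo" | "clarification" | "evidence" | "question" | "poll";
  karma: number;       // shown on hover, as suggested above
  body: string;
}

// Overlapping selections (a key word vs. its whole sentence) could be detected
// and consolidated into one thread, so related comments are not split.
function overlaps(a: InlineComment, b: InlineComment): boolean {
  return a.postId === b.postId && a.anchorStart < b.anchorEnd && b.anchorStart < a.anchorEnd;
}
```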
I have not seen this on the EA Forum feature suggestion thread, where you may be interested in mentioning it.
It seems alarming that GiveWell bases their significant donation recommendations on only one study[1] that, furthermore, does not seem to capture beneficiaries’ perspectives but rather estimates metrics that relate to performance within hierarchies set up by historically privileged people: school attendance[2], hours worked[3], and income.
GiveWell’s reports should align more closely with academic norms where authors are expected to fully explain their data, methods, and analysis, as well as the factors that their conclusions are sensitive to
I disagree that GiveWell’s reports should align more closely with academic norms, because these norms do not engage intended beneficiaries.
Explanations can help differentiate the actually most helpful programs from those made prestigious by big/small numbers and convoluted analyses.
Allowing GiveWell’s audience to tweak the factors and see how the conclusions change would show the organization’s confidence in its (moral) judgments.
‘Data’ should not be confused with ‘numbers.’ Focus group data may be invaluable compared to quantitative estimates when a solution to a complex problem is being found.
- ^
The only evidence GiveWell uses to estimate the long-term effects of deworming comes from a study of the Primary School Deworming Project (PSDP) using the Kenya Life Panel Survey (KLPS) (Miguel & Kremer, 2004) and its follow-ups (Baird et al., 2016; Hamory et al., 2021). (HLI, Appendix: Calculations of Deworming Decay )
- ^
School curricula in developing contexts may include post-colonial legacies, select elites while leaving most behind, or optimize for raising an industrial workforce, which may prevent the global value chain advancement of industrializing nations and instead make those countries an instrument for the affordable consumption of foreign-made goods.
- ^
I am unsure whether unpaid domestic and care work was considered within hours worked—excluding this would imply greater value of paid over unpaid work, a standard set up by the historically privileged.
Zotero creates a bibliography if you click on all the links and then click on the browser extension icon on each page. It does not always work perfectly, but, e.g., data from academic articles usually get copied well.
OK! I cannot find #Title on LessWrong, but based on your description it seems analogous to linking a post or using a tag?
If a user is a fan of someone with whom they do not have an actual connection (they usually have not met in person one-on-one and have not shared common interests), they would use the professional tag (for example, one could tag Joel McGuire if they write something that they think he would find useful, based on his posts). The friendly tag (which has to be authorized by the tagged person) should be used when people are confident that they know their friend’s interests so well that they would recommend something the friend would enjoy (while they may also find it useful). So, the difference in intent is informing based on the user’s professional presentation vs. notifying of enjoyable content based on the users’ friendly connection.
Tagging users to notify them (@[username]): people should be able to ‘authorize’ friendly tags, while ‘professional’ tags should be possible by default. Users should be able to turn notifications for ‘friendly’ and ‘professional’ tags on and off. In this way, people could make and maintain connections via the Forum.
Also, orgs (or departments) could have their own tags. For example, if someone misses a writing contest deadline, they should still be able to notify the org about an idea. Organizations could also be able to filter for their tag plus another set of tags or keywords (for example, ‘Open Philanthropy, Worldview Diversification, DALY’ could allow an OPP researcher to skim collective intelligence related to their calculation methodology and possibly delegate further research to people who have already thought about it).
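A minimal sketch of the friendly/professional tag rules described above; this is not an actual Forum API, just an illustration under assumed names:

```typescript
// Hypothetical sketch of the tag-notification rules described above.
interface TagSettings {
  friendlyAuthorizedFrom: Set<string>; // users authorized to friendly-tag this account
  notifyFriendly: boolean;             // notification toggle for friendly tags
  notifyProfessional: boolean;         // notification toggle for professional tags
}

function shouldNotify(
  tagger: string,
  kind: "friendly" | "professional",
  settings: TagSettings
): boolean {
  if (kind === "professional") {
    // Professional tags are possible by default; only the toggle applies.
    return settings.notifyProfessional;
  }
  // Friendly tags additionally require prior authorization by the tagged person.
  return settings.friendlyAuthorizedFrom.has(tagger) && settings.notifyFriendly;
}
```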
I was just about to suggest that. Explanations of the reasoning behind a vote could also be valuable.
Should the max upvote strength also be associated with factors other than user karma, such as self-assessed professional expertise (according to broad criteria)? For example, someone who works at the EU Commission on the Internet of Things could assess themselves as an ‘expert’ on a question about valuable actions related to a new draft of the EU AI White Paper.
Voting can also seek to ameliorate biases by highlighting underrepresented perspectives. For instance, if there is a poll about priorities related to wild animal welfare, the vote of an AI safety researcher could be weighted more heavily if the majority of other votes are of wild animal welfare researchers. Voters’ organizational affiliations, professional and cause area expertise, and relevant demographics could be considered.
Unnecessary positive discrimination should be avoided. For instance, the votes of US college-graduate men and women on an issue that does not relate to gender or gender norms should be weighted the same, while the vote of Afghan women should be weighted more than that of Afghan men on any Afghanistan-related topic. This is based on the assumptions of equal opportunities for male and female students at US colleges but historically and currently unequal decisionmaking opportunities for women and men in Afghanistan.
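One hedged way to implement the underrepresentation weighting sketched above is to weight each vote inversely to its group’s share of voters, so every represented perspective contributes equally in total. Grouping by a single string is a simplifying assumption; real weighting would presumably also use affiliation, expertise, and relevant demographics:

```typescript
// Hypothetical sketch: weight each vote inversely to its group's share of voters,
// so an underrepresented perspective counts for more, as described above.
interface Vote {
  voterGroup: string; // e.g. "AI safety researcher", "wild animal welfare researcher"
  value: number;      // e.g. +1 or -1
}

function weightedTally(votes: Vote[]): number {
  const groupCounts = new Map<string, number>();
  for (const v of votes) {
    groupCounts.set(v.voterGroup, (groupCounts.get(v.voterGroup) ?? 0) + 1);
  }
  // A vote's weight is 1 / (group's share of all votes): a lone AI safety
  // researcher among many wild animal welfare researchers is weighted up.
  return votes.reduce((sum, v) => {
    const share = groupCounts.get(v.voterGroup)! / votes.length;
    return sum + v.value / share;
  }, 0);
}
```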
For sure! I think so, and actually I am thinking more axes could be used—for example, one scale for ‘relaxation,’ another for ‘pain,’ another for ‘energy,’ etc.
But is sentience only computational? As in, the ability to make decisions based on logic, but not to make decisions based on instinct—e.g. baby turtles going to the sea without having learned it before?
Yeah! Maybe high levels of pleasure hormones just make entities feel pleasant, whereas matter not known to be associated with pleasure doesn’t. Although we are not certain what causes affect, some biological changes in the body should be needed, according to neuroscientists.
It is interesting to think about what happens if you have superintelligent risky and security actors. It is possible that if security work is advanced relatively rapidly while risk activities enjoy less investment, you get a situation with a very superintelligent security AI and an ‘only’ superintelligent risky AI; assuming equal opportunities for these two entities, risk is mitigated.
Yes, changing digital minds should be easier, because a digital mind is easily accessible (code) and understood (developed with understanding, possibly with specialists responsible for parts of the code).
The meaningful difference relates to the harm vs. increased wellbeing or performance of the entity and others.
OK, then ‘healthy’ should be defined as normal physical and organ function, unless otherwise preferred by the patient, with mental wellbeing normal or high. Then, the AI would still have an incentive to reduce cancer risk but not to, e.g., make an adjustment when inaction falls within a medically normal range.
Elitism in EA usually manifests as a strong preference for hiring and funding people from top universities, companies, and other institutions where social power, competence, and wealth tend to concentrate.
What do you mean by competence? Is it the skills, knowledge, connections, and presentation that advance these institutions? Does the advancement include EA-related innovation? Is this competence generalizable to EA-related projects?
Is social power the influence over acceptable norms due to representing that institution or having an identity that motivates others to make a mental shortcut for such ‘deference to authority’? Could social power be gained without appealing to traditional power-related biases?
Traits that elitism tends to select against (or neutral) … - Critical thinking
Critical thinking in solving problems related to achieving the institution’s objectives is supported, while critical engagement with these objectives may be selected against. This also implies that no one thinks about the objectives, which can be boring and make people feel a lack of meaning: companies could be glad to entertain conversations about the various possible objectives.
Traits that elitism tends to select against (or neutral) … - Altruism/desire to help others
Effective altruism is the desire to help others the most while valuing everyone, even those outside of one’s immediate circles, more equally. Elite decisionmaking is, to an extent, based on favors and dynamics among friends and colleagues.
Traits that elitism tends to select for—Ambition/desire for power
I’d say acceptance/internalization of the specific traditional hierarchical structure and an understanding of oneself as competent to progress within this structure.
In EA, there’s a pretty solid correlation between people who have started big and impactful projects and their origins in elite environments (Sam Bankman-Fried, Will MacAskill, Holden Karnofsky, etc.). Some of the most successful companies in the world (e.g. Google, Apple, Paypal) have historically also been quite selective and operate within a sphere of prestige.
I am assuming that you are taking the ‘eliteness’ metric as a sum of school name, parents’ income, and Western background? Please reduce my bias.
Is the correlation apparent? For example, imagine that instead of (elite) Rob Mather raising billions for a bednet charity, a (non-elite) thoughtful person with a high school education and $5/day had started organizing their (also non-elite) friends to talk about cost-effective solutions to all issues in sub-Saharan Africa in 2004 and had been raising the billions since, as solutions were developed. Maybe many more problems would have been solved better.
Counter-examples (people who started big and impactful projects from a non-elite background) may include Karolina Sarek, William Foege (Wiki), and Jack Rafferty. It can be interesting to see this percentage in the context of the % of elite vs. non-elite people in EA: (% who started impactful projects from an elite background / % elite in EA) / (% who started impactful projects from a non-elite background / % non-elite in EA). Further insights on the relative success of top vs. median elite talent can be gained by controlling for equal opportunities (which can currently be assumed if funding is awarded on the basis of competence).
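In formula form, the suggested comparison is a relative representation ratio, writing $p^{\text{founders}}$ for shares among those who started big, impactful projects and $p^{\text{EA}}$ for shares in the community:

$$R \;=\; \frac{p_{\text{elite}}^{\text{founders}} \,/\, p_{\text{elite}}^{\text{EA}}}{p_{\text{non-elite}}^{\text{founders}} \,/\, p_{\text{non-elite}}^{\text{EA}}}$$

$R = 1$ would mean starting impactful projects is proportional to each group’s share of EA, while $R > 1$ would mean elites are overrepresented among founders relative to their share of the community.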
It’s far easier to consider earning to give if you’re making $100k+ a year.
So, while EA was funding constrained, it used to make sense to attract elites. Now, this argument applies to a lesser extent.
It can be incredibly demotivating being told that your potential for impact is far less than a select few.
Unless it is true, such as when impact is interpreted as representing an institution that aspires to normative change; in that case, you realize that speaking with elite people in an elite way is not really for you anyway and do something else, such as running projects or developing ideas. This is an equal dynamic in which ‘potential for impact’ is just a phrase.
Recruiting from the same 10-20 universities who all have similar demographics makes it more likely to end up engaging in groupthink.
Norms of thinking diversity can be more influential than the composition of the group in whether groupthink becomes an issue, considering that people interact with others. For example, if the norm is prototyping solutions with intended beneficiaries, engaging them in solving the issues and stating their priorities in a way that mitigates experimenter bias and motivates thoughtful sincerity, and considering a maximally expanded moral circle, then the quality of solutions should not be reduced even if people from only 10-20 schools are involved. On the other hand, if the norm is, for instance, that everyone reads the same material and is somewhat motivated to donate to GiveWell and spread the word, then even a diverse group engages in groupthink.
Prestige doesn’t select for people who want to do the most good. This can be counteracted by recruitment processes that select more heavily for altruism and the self-selection effects of EA as a movement, but given the importance of strong value-alignment within EA, this is potentially damaging in the long-term.
Prestige selects for people among whom the highest share wants to do the most good when offered reasoning and evidence on opportunities, at least if prestige is interpreted that way. Imagine, for instance, a catering professional being presented with evidence on doing the most good via vegan lunches. Their normative background may not much allow for impact considerations if that would mean forgone profit, unless it does. If EA should keep its value via altruistic rather than other (e.g. financial) motivation, then recruitment should attract altruistic people who want to be effective and discourage others.
Senior-level positions
So, it depends on the senior-level positions. If you want to make changes in an authoritarian government, an (elite) insider will be very helpful. Similarly, a (non-elite) insider would be helpful for developing solutions within a non-elite context, such as solving priorities in Ghana for under $100m. It does not matter whether normative solution developers (such as AI strategy researchers) are elite or not, as long as they understand and equally weigh everyone’s interests. Positive discrimination for roles in which elites may have a better background (e.g. due to specialized school programs), such as technical AI safety research, may be counterproductive to the success of the area, because less competent people would lead the organizations. And since the limited number of applicants from non-elite backgrounds is caused not by unwelcomingness but by limited opportunities to develop background skills, positive discrimination would not further increase diversity.
Cofounder searches
Complementarity can be considered: for example, someone who can find the >$100m priorities in Ghana and someone who can raise the amount needed. However, funding from one’s own network can also prevent that entire network from funding a much better project in the future, so not all elite people should be supported in advancing their own projects, since there are relatively many elites and few elite networks, unless offering an opportunity to fund a relatively less unusual project first enables the support of a more unusual (and more impactful) project later. If the project objective is well-defined and people receive training, then anyone who can understand the training and will make sure that it gets done can qualify.
“The original Mac team taught me that A-plus players like to work together, and they don’t like it if you tolerate B-grade work.” — Steve Jobs
You are grading ‘playing with Macs.’ I think Bill Gates dropped out of college, and, just based on these two examples, compare their philanthropy… Does this mean that whoever is not cool cannot participate? Also, if students get used to upskilling others (and tolerating or benefiting from that), then EA can become less skills-constrained later and create more valuable opportunities for the engagement of people who score around the 70th (95th) percentile on standardized exams.
Field-specific conferences—such as an AI safety or a biosecurity conference—benefit from restricting the conference to those with expertise. This ensures that everyone in attendance can contribute to the conversations or otherwise will benefit greatly from being exposed to the content.
While a biosecurity conference should probably only ‘benefit’ people who are ‘vetted’ by elite (if so defined) institutions as unlikely to actually think about making pathogens, since biosecurity is currently relatively limited, an AI safety conference can be somewhat more inclusive of ‘possibly risky’ people. This assumes that making an unaligned superintelligence is much more difficult than creating a pathogen.
AI safety conferences should exclude people who would make the field non-prestigious or without the spirit of ‘the solution to a great risk’: for example, make it seem like an appeal by online media users for platforms to reduce biases in the algorithms that affect them negatively. Perhaps even more than one’s elite background, the ability to keep up that spirit can be correlated with a traditionally empowered personal identity (such as gender and race) and internalization of these norms of power (rather than critical thinking about them). Not everyone with that ability of ‘upholding a unique solution narrative’ must be from that demographic, and not everyone in that group has to have this ability (only a critical mass does). This applies as long as people negatively affected by traditional power structures perceive a negative emotion that would prevent them from presenting objective evidence and reasoning to decisionmakers.
Project funding and entrepreneurship
So, everything except community building and entry-level employment? Should there be community building in non-elite contexts (while elites (in some way) within or beyond these contexts may or may not be preferred)? One counterargument is similar to the AI safety ‘spirit’ argument above: people would be considered to be suffering from disempowerment and would thus appeal less effectively. Another is similar to your standards argument: people who would slack with Bs in impact would just be OK with some problems remaining unresolved. Arguments for include diversity of epistemics, problem awareness, and solution-relevant insights, and facilitating mutually beneficial cooperation (e.g. elites gain the wellbeing of people who have more time for developing non-strategic relationships, and non-elites gain the standards of perfecting solutions), both within EA and as project outcomes.
Entry-level employees
It may depend on the org. Some orgs (e.g. high-profile fundraising) that generally prefer people from elite backgrounds can prefer them for entry-level positions as well. This can account for the argument, among the ‘target audiences’ of these orgs, that ‘A-players are disgraced by B-players and would not do them a favor, since they would not gain acknowledgement from other A-players and could be perceived as weak or socially uncompetitive.’
If doing nothing and waiting for social norms to change is appropriate, non-elites should be excluded from these entry-level roles. The org can actively change the norms by training non-elites to resemble elites (which can be suboptimal, since it exhibits acceptance of the elite standard, which is thus exclusive) or by accepting anyone who can make the target-audience elites realize that their standard is not absolute. In that case, the eliteness of one’s background should not contribute to hiring decisions.
EAGx conferences and some EAGs
Depending on the attitudes of the key decisionmakers at EAGs/EAGxs, such as large funders, eliteness should be preferred, not a selection criterion, or dis-preferred. It is possible that anyone who demonstrates willingness and potential to make a high impact can be considered elite in this context.
For example, traits such as critical thinking and a sharp intuition are useful for generalists.
Is it that elites have less sharp intuition than non-elites? An argument for this is that elites are in their positions because they reflect the values of their institution without emotional issues, which requires reducing one’s intuitive reasoning. If an institution values critical thinking, gaining information from a diversity of sources, and forming opinions without consideration of one’s acceptance in traditional hierarchies, then elites can develop intuition.
I am not saying that Alex is anything like a traitor or supports YIMBY for nefarious reasons. I am saying that there can be better candidates for his job. For example, I identified Aravind Eye Care hospitals, a profitable investment, which treats blindness at a large scale and for free for 70% of patients. Or, training surgeons to do a hernia repair with a bednet ($12.88/DALY averted) can be quite suitable as a cool personal tip. A fistula surgeon in Uganda recommended a transportation stipend fund for children at risk of disability (otherwise, families do not spend the $15, and the people then have issues). That can be a somewhat touching (and also highly cost-effective) recommendation. These three opportunities should also increase, or prevent the decrease of, wellbeing and improve productivity, in addition to improving health. And even a stereotypical Californian could be excited about them.
This idea is not associated with one person; rather, I somewhat arbitrarily used the example of Mr. Berger to criticize the broader issue that one could see: it is not that the most cost-effective interventions are identified through extensive critical dialogue with multiple affected and unaffected stakeholders; instead, bombastic narratives are used to trick people into keeping loyalty to GiveWell without critically thinking about how they can actually benefit the world the most. (I am not arguing that if people give to literally shooting the moon, because, for example, the US military makes them excited about it, it is better than if they support GiveWell in conjunction with some projects that can be interpreted as attention-captivating or -keeping. I am saying that if we can presume that funders actually want to hear and think with others about smart tips in Global Health and Wellbeing and do not need to be entertained by something that resembles a stereotypical popular Californian TV channel, then we should have that attitude. Sometimes the interests of a group may seem similar to what is portrayed on TV. But TV does not resemble reality.)
It is disrespectful and uncaring to confirm people’s biases and not do any thinking for them, especially if it is your job.
I am not criticizing that the justification was short or unsupported by scientific evidence; I am pointing out that the cost-effectiveness analysis was not conducted properly, because impact was in the wrong units and cost was not considered (and not compared to other programs that bring comparable benefits, nor was it self-evident that this is extraordinarily cost-effective, like high blood pressure screening in upper-middle-income countries). The paper that you cite may be ‘tricking’ decisionmakers into assigning this issue importance because of fancy math and a formal tone, but it does not say anything about cost-effectiveness; I think it discusses price elasticity. That is why I was suggesting the (introductory) Virtual Program: ‘You need to consider impact and support projects on the tail of cost-effectiveness’ is maybe Day 2, after introductions.
I do not think that productivity is affected, because the policy (possibly) pushes away low-income people, who can be assumed to be no more productive, in real terms, in an affluent neighborhood than in a less affluent one (for example, a shop assistant does the same work in LA and in some smaller city but is paid more in LA). However, if people move away, labor supply decreases; then the price of labor increases and people get higher incomes. The issue can be that high-income people then need to pay more for relatively low-skilled services, which may decrease their productivity. Thus, this may enable the redistribution of income from property owners to affluent service payers.
Is it that the house owners lobbied for this policy in the first place (clearly, property owners, who may have significant policy leverage, have not advocated for YIMBY approaches)? Or did the jurisdiction decide to limit housing to make the place feel more exclusive and attract prestige-seeking innovators? Or are the rents at market rate and YIMBY is trying to reduce them (which would cause decreased productivity)?
Also, the impacts of changing this policy would probably be relatively limited (maybe prices decreased by 7%?). This could have been resolved better by room sharing where people actually get along well, because the room is set up that way (e.g. sound barriers) and/or they enjoy being with others. Enjoying being with others increases wellbeing and may also be associated with improvements in health. What about dignity or fanciness for people who stay in a very small share of the room (fancy pods—the LED lights cost dollars)? That could solve the problem with much higher cost-effectiveness. People would cooperate more, so innovativeness and productivity would increase. Room sharing could even be welcomed by both affluent service buyers and property owners (who could benefit from a higher total if they manage to fill a room with people who get along well and each pays more than the rent divided by the number of tenants).
Why is camping not the best economic outcome? If low-income people, instead of paying already affluent property owners, stay for free, then that is effectively redistribution, which creates utility, according to the logarithmic model. The issues associated with camping may be what is not the best economic outcome. For example, if people are disturbed from studying by cars or do not have lights, that could be resolved by earplugs and a solar lamp. If people are taking drugs because they are nudged into it on the streets, that could be resolved by relevant programs, such as nudging into commercial upskilling; cool sports, dance, or any other artistic or physical self-development; relationship building based on mutual understanding, respect, care, and love; or drug cessation therapy. Even the highly productive people might just pay these people to be there, dancing in a way that leads the emotion very well, to the busy passerby’s liking. So, some of the campers would remain economically unproductive, but their (extra-economic) contribution to wellbeing would be priceless.
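To spell out the logarithmic-utility step above: with log utility of income, transferring an amount $t$ from a richer person (income $y_r$) to a poorer person (income $y_p$) increases total utility whenever the transfer does not overshoot, i.e. whenever $y_p + t < y_r$:

$$\Delta U \;=\; \ln\!\frac{y_p + t}{y_p} \;-\; \ln\!\frac{y_r}{y_r - t} \;>\; 0.$$

With hypothetical round numbers $y_r = \$100{,}000$, $y_p = \$10{,}000$, and $t = \$1{,}000$, the poorer person gains $\ln(1.1) \approx 0.095$ while the richer person loses only $\ln(100/99) \approx 0.010$, so total log utility rises. Rent forgone by an affluent owner and kept by a low-income camper is the same kind of transfer.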
With regard to the example in your criticism, I think that the book is trying to make you do exactly that: come up ‘yourself’ with the idea that we need to think about issues now so that we can solve them. So, even though you may be indirectly criticizing the author’s (or their collaborators’) narrative, you are not criticizing the author’s approach itself (because they are in control of how they want to contribute to the advancement of EA thinking—getting people to behave predictably or encouraging them to develop innovative solutions now).
Actually, thinking about your criticism makes me wonder:
Maybe it is necessary to criticize Mr. Berger.