Bara on EA Hub
brb243
Ok, that can be a better interpretation: adding the audience’s capacity to commit harm to info hazard considerations.
It makes sense that information about the existence of potentially harmful info can also be shared with people who can hold decisionmakers accountable for using their knowledge positively.
Whether this will succeed can depend on the public’s attitude toward the topic, which can depend on the ‘spirit’ of those who share the info. Using your examples, if the info comes from a resource such as the EA Forum, where the norm is to focus on impact and prevent harm, then even the public who would normatively influence decisionmakers can have similarly safe preferences regarding the topic.
However, one can also imagine that the public will present the info as usable for selfish gain or harm (since people may want to ‘side’ with a harmful entity out of fear, seek standing or attention on social media by posting about a threat, or aim to gain privilege for their group by harming others). Since the general public is not trained to think through the possible impacts of their actions, and since risk memes can spread faster than safety ones, publicly sharing the existence of risky topics, even in good faith, can normalize and expedite harmful advancement of these subjects.
Crowd wisdom can apply when solutions are not already developed (rather than only needing decisionmakers to implement them) and when the public has the skills to come up with these solutions. For example, if only a treaty needs to be signed and a budget spent on lab safety, then a few individuals can complete it. Similarly, people untrained in universal values research can have only a limited ability to contribute to it.
Cybersecurity is an example of a field that requires the cooperation of many experts who are not more likely to engage in risky use of the info. Bomb recipe info, on the other hand, does not extensively help safety experts (who may specialize in legislation and regulations to prevent harm from explosives) and could motivate otherwise uninterested actors to research the topic further. In this, cybersecurity can be analogous to AI safety, and explosives info to biosecurity.
Spirit hazard can also make (empower or inspire) bad actors. The lower the cost of involvement (e.g. in consequences and in financial and other resources), the riskier it can be to share the info (and it is not necessarily more likely that (potentially) bad actors already have it). So, risky info with a low cost of negative involvement should not be shared.
Risky info should be shared if i) the cost of involvement is high, ii) it is highly unlikely that the group would use it to increase the riskiness of norms, iii) it is likely that not sharing this security info with the group would make decisionmakers advance risk, and iv) the topic is not subject to the unilateralist’s curse (e.g. if one person tries to make an explosive, many others would prevent them from doing so).
Introducing spirit hazards
What do you think of taking the log of the neuron count, dividing that by neural complexity, and adding the individual’s total wellbeing impact to get the relative moral value (sketched as a formula after this list)? Intuitively, this can make sense:
1) The more neurons, the more the individual can feel (but the intensity of perception can increase more slowly than the number of neurons).
2) The higher the neural complexity, which can correlate with one’s ability to feel better about exteroceptive stimuli (because they have more rational or emotional/intuitive experience, due either to ancestors’ experience or to the individual’s life, to ‘deal with them’), the less intensely the individual perceives.[1]
3) The impact of the individual on net wellbeing[2] should be added. I am suggesting this weighting.
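As a sketch, using my own symbols (these labels are not from the original discussion): let $N_i$ be the neuron count, $C_i$ the neural complexity, and $W_i$ the net wellbeing impact of individual $i$. The suggested relative moral value $V_i$ would then be

$$V_i = \frac{\log N_i}{C_i} + W_i$$

where 1) motivates the $\log N_i$ term, 2) the division by $C_i$, and 3) the added $W_i$. The base of the logarithm (and the units of $W_i$) would need to be fixed for comparisons across individuals to be meaningful.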
For humans, especially privileged ones, 3) could make the contribution from the individual’s own wellbeing negligible in the total, because they can have much more influence on others. For individuals with fewer choices, including confined non-human animals,[3] the contribution of 3) can, on the contrary, be neglected, because these animals do not influence others.
How does this compare with what you found (and which finding is more accurate)?
This can correlate with 3), but since the terms are summed, there should be no double-counting.
The devil can be in determining the counterfactual to use. For humans, this can be impact i) due to action, ii) due to inaction (to what extent that should be understood in a utilitarian way), iii) due to unfulfilled potential (e.g. someone did not study to be able to influence decisionmakers even though they could), or iv) due to unfulfilled capacity (e.g. someone who studied and can influence decisionmakers chooses another job). For animals, this can be similar, except that animals’ free will is intuitively understood as lower. For example, if a chicken in a crowded barn chose to try killing others instead of upskilling them in teaching young chickens to prevent diseases, that can be attributed to the norms and environment set by the human caretakers rather than to the choice of the chicken.
Assuming that they cannot influence the wellbeing of others, e.g. by presenting a positive attitude.
That is one way to look at this: organizations look at different aspects to hire the best-fit candidate. Another way is that the constraint is that there is not really anyone sincerely interested in working for that specific organization in a particular capacity. This is what I am trying to address: by filling out an application, people should better define their interests, and these, alongside their skills/background, should be readily available to organizations (who may thus start looking to hire a specific skillset), plus to funds that may be seeking people to advance projects, and to people looking to contract others for some tasks or to find collaborators. So, it can be argued that this can help organizations find what they are looking for.
Yes, that makes sense. That would mean many organization-specific parts, but that can be done relatively easily, maybe by adding a few questions per organization, and people can choose which ones to fill out. Role-specific parts can be relatively more challenging, as the application would have to keep changing, but that is also possible.
Then, this person would be only marginally better off than if he filled out 3 applications and just copied and pasted the organization-specific part for CEA (and filling in name, e-mail, etc. takes almost no time). The improvement here is if he fills out the role-specific info for recruiter only once. Of course, a recruiter at CEA is different from a recruiter at OpenPhil, but if there are just one or a few common questions about a recruiter role, then he can get to a better-fit role, because he cannot tailor answers based on role descriptions etc. I actually wonder whether people would then be more sincere or biased in a different way (e.g. trying to optimize for attention).
1. Ok, maybe actually getting sincere feedback on rejected offers seems like an additional project.
2. Ok.
OK, there should be a minimal option (e.g. just upload a CV).
I could be interested in speaking with people; maybe I can test via a Calendly link for a test period (speaking can still be the most efficient).
eyes on the ball: after speaking with people, do they have to fill out the form?
There should be the option to just link a CV. Afterwards, people could answer more questions or schedule a call.
Ok, so getting people to upload a CV may be key.
Oh, well, they have to upload something. They can always update or delete it, and they will not be penalized for any earlier uploads, as these are overwritten. Maybe asking about the priorities they think progress should be made on can provide similar information to asking what they want to make progress on, while making people less nervous.
1. I think that feedback regarding rejected offers can be valuable and low marginal effort (e.g. adding a column). Some CV writing support could be taken care of by Career Centers (which are sometimes available to alumni as well). EA community members could further assist with CV specifics if they are familiar with what different (competitive) positions look for that the candidate could highlight. As an MVP, comments on linked docs can be used.
2. I mean, of the people who you spoke with and who had an idea for a personal project:
a) How many applied for EA-related funding to work on this project and how many did not?
b) What percentage tried to find someone with a similar idea in mind to work with them on the project?
I am asking to assess the extent to which people with personal project ideas could be unblocked by encouragement to apply for funding and by being connected with someone else. If they applied and were rejected, then integrating funds can be of less value. If they looked for collaborators but could not find any, then increasing the number of skilled people should be prioritized over recommending connections.
3. Tested. Realizing that writing can motivate engagement/action.
1. I think that the 80k board can best be improved by a greater variety of opportunities: not only those related to EA-labeled organizations and governance in large economies, but also opportunities that develop win-win solutions useful to decisionmakers; understand the fundamentals of wellbeing; share already developed solutions with networks where top-down decisionmaking possibilities are limited; motivate positive norms within institutions that can have large positive or negative impact (such as developing nations’ governments); possibly develop comparative advantage in positive-externality sectors (such as crop processing vs. industrial animal farming); increase private-sector efficiencies in ways that benefit large numbers of individuals (e.g. agricultural machinery leasing to smallholders, traffic coordination in cities, medical supplies distribution that considers bottlenecks, etc.); implement solutions or conduct research at local prices; introduce impactful additions to existing programs (e.g., hypothetically, central micronutrient fortification of food aid); offer shorter personal-project contracts; understand intended beneficiaries’ actual preferences; etc.
This increased variety of opportunities can be conditional on a Common App bringing value by increasing the efficiency of hiring for a specific set of opportunities. Some of these additional opportunities are on the EA Forum or in the minds of community members. Since 80k could appear informal if it included these opportunities, it may be best to list them in a spreadsheet and/or refer individuals to others with ideas / let others find collaborators or contractors.
Integration of EA funding opportunities, including less formal, more counterfactual funding (one would not donate to bednets, but one would give a stipend to a fellow group member to learn over the summer and produce a practice project), can be key. Risk should be considered with this approach; for example, funding should not be given to projects that relate to info hazards or that could make decisionmakers enthusiastic about risky topics. This should be specified and checked in a risk-averse way by some responsible people (who also have time), such as group organizers.
One way to build on existing 80k resources is the career planning resource, where people write answers and then, based on these answers, some career considerations are recommended. Just enabling people to (edit and) post their answers online can be valuable. The added value is that others can hire them or make recommendations based on their interests. I would still add more questions, because they can paint a more comprehensive picture of the candidate without the need to interview or interact with them or ask for a reference.
I think that getting project ideas even from a well-written post of an engaged community member onto the 80k board can be a challenge due to the scope of opportunities that are considered.
2. I mean before they start needing a job, not after they get one. For example, if someone is looking in March for a 3-month internship starting June 1, they should not be getting offers that extend before June 1 or after September 1. Of course, if someone is hired, they (or anyone) should update that; otherwise others will be wasting time reviewing their application.
3. Yes, maybe there should be a balance between distracting busy professionals and enabling them to save time by hiring others. Ideally, the community would pre-filter the applications. Bias in this process can be limited by asking people to make recommendations in a non-preferential way and to include their reasoning for recommending a particular opportunity. While there should be an option to periodically receive a list of applicants filtered by criteria specified by the professional, greater value can come from reviewing others’ reasoning about why candidates can be a great fit for a role that one posts (and providing feedback on that reasoning).
Thank you for the useful tip on importrange.
Yes, I mean to use maybe a Google Form. Ah hah, it makes sense that everything can be optional (name, sure), and even having no way of contacting the candidate can be possible (maybe just writing in the form ... hm, here is where digital people enter, haha).
Ok, what about some interview-like questions, such as:
Describe a time you were resolving an important problem.
What are you currently working on improving and what should you be?
How do you go about prioritization at work?
Describe a time you received or gave feedback. How did you feel?
How would you summarize your unique skillset?
How did you become interested in applying for the employment that you are specifying?
What is your role in a team? What should it be?
Or, questions relevant to the specific candidate’s preferences:
What would an ideal employment look like for you?
Describe a collaborative working arrangement that you especially like or dislike.
What offers would you likely turn down?
Or, something that shows the applicant’s interests more broadly, such as:
What is an article that you recently read? What do you think about it?
What article did you change your mind about? How?
What course did you take but then realized is irrelevant to what you want to do?
Axiological, moral value, and risk attitude questions can add information on the candidate’s fit, such as:
How would you negotiate between scientific progress and wellbeing research of entities that do not contribute to progress, under scarce resources?
When is the Repugnant or Sadistic Conclusion (Population axiology, Greaves, 2017) permissible? Find a situation.
In his essay “All Animals Are Equal,” Peter Singer argues that “Equal consideration for different beings may lead to different treatment and different rights.” How can this go optimally and badly?
When would your friends describe you as risk-averse or risk-seeking? How would you feel about their description?
10. Orgs multiselect: for non-EA orgs (recommended by 80k), it can be interesting to just copy the general-interest app fields and then (if it would not constitute a reputational-loss risk for the applicant) paste the responses and see what happens. Founders Pledge orgs make sense; I had not thought of these.
Maybe I can go through some applications of EA-related orgs, Funds, 80k orgs, Founders Pledge ventures, opportunities relevant to Probably Good profiles, etc. to synthesize questions.
Yes, there should be enough genuinely interesting opportunities (for developers), ranging from AI safety research and increasing NGO, impact-sector, and public-infrastructure efficiencies to developing products that apply safety principles, communicating with hardware manufacturers, informing AI strategy and policy, or upskilling in an area that they have not explored and pivoting. It should not be scary to apply; management by fear reduces thriving.
From the link/your writing, feedback from a candidate who rejected an offer can also be valuable. General support with CV writing can be valuable, as long as it highlights candidates’ unique backgrounds and identities rather than standardizing the documents.
As an estimate, what percentage of people interested in something applied for funding, and what percentage tried to find someone interested in a similar project?
What if this recommendation were written rather than made as part of a discussion? Would the people who you spoke with still be enthusiastic about the recommendations?
Oh yes, add a checkbox! I think the wording can be:
I want my application responses to be copied to a public online spreadsheet for the purpose of connecting me to employment, contract, or grant opportunities; potential collaborators; and/or relevant resources. I agree to be contacted via e-mail for this purpose. I will be able to modify or delete my responses by editing the spreadsheet using the e-mail address used in this application.
(default unchecked)
This should be GDPR compliant. The list should be comprehensive and the terms clear. It is possible that it is excessive, but using only “for the purpose of connecting me to potential collaborators and/or relevant resources,” as EA Events does, could exclude the instances when someone is recommending a grant opportunity or seeking to fund a personal project.
I wonder if ‘for the purpose of connecting me to’ implies that they agree to be contacted or if an additional ‘I agree to be contacted via e-mail for this purpose’ should be added. I added it just in case.
I think that some questions can be used universally across seniority levels and cause areas, for example, something like ‘describe an important problem that you resolved in the past few months.’ Other questions can be applicable to similar types of roles (e.g. research manager) even in different fields (maybe ‘a researcher has a great idea that another one disagrees with; how do you go about making a decision?’). Then, some questions can be applicable to any job within a cause area (‘what draws you to hen welfare?’) and some particular to a type of organization (‘what interests you about research?’).
It could be noted what role type, cause area, and/or organization type each question pertains to. Then, organizations could see the responses of candidates who interviewed for that role/cause/organization type. Bias could be introduced by candidates tailoring their responses to a particular position. This can be mitigated either by having questions independent of position or by recruiters looking beyond the context at the actual skills (e.g. if someone resolved a disagreement in ML research, they could resolve a disagreement in math research too).
Ok, that is great. What do you think about giving some of these pieces of feedback:
(Unique) skillset perspective
Skills that you would recommend to gain if they apply for a similar position
Description of a position that could be ideal for the candidate (including cause area, role, environment, and management/collaboration style) (with organization tips, if known)
What is different about the candidate ‘on paper’ vs. ‘live?’
This alone can direct candidates to better roles and provide feedback on presentation while adding only a few minutes per candidate, and, in conjunction with other application material, it can inform referrers more accurately about what to recommend.
Hmm ... who is hiring is dynamically updated via 80k (opportunities for a specific audience), Impact CoLabs, the Job listing (open) tag, the Fellowships and internships tag, the EA Internships board, Animal Advocacy Careers, EA Work club, and some posts that look for collaborators or contract work (maybe the Requests (open) and Bounty (open) tags). Additional lists of organizations (that have openings on their websites) are on AI Safety Support, the EA-related organizations list, and probably elsewhere. Sometimes, people post projects that they would like to see (the Take action tag?). Some EA funding opportunities are interested in specific ventures. It could be worthwhile to aggregate the opportunities. One way can be copying and pasting, or writing a bot to copy-paste and then just editing. Categorization could also be useful for filtering. (The filters could also be collated; for example, deadline is on AAC but not on 80k or the Internships board. I am wondering what filters can be even more informative.) This could be available in real time but also sent periodically.
Who wants to be hired can also be resolved in real time, by some type of tag, or maybe a checkbox on the spreadsheet. I would argue for adding it where there is a lot of information about the person and many people look: the EA Forum, maybe LessWrong, the AI Alignment Forum, and possibly EA events profiles (there is less written information, but the person is there). A hiring/contract timeline could also be useful (e.g. if someone is looking for an internship or has a contract that expires in 3 months and is not looking to renew it). Recruiters would probably check this when they need to hire, but getting regular notifications can be valuable to people who are employed, have ideas (and ideally funding), and are looking for someone with specific skills to do something that they do not have time for.
Thank you very much.
I should keep 1. in mind when communicating with EA org representatives. This can be relatively easy to implement, for example with a Sheets formula that displays all responses where the sharing checkbox is TRUE and leaves rows where it is FALSE blank, and then copying all non-blank rows to a spreadsheet that combines data from different applications. Applicants could even suggest edits (e.g. delete or modify their responses) using the Gmail account associated with their application.
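A minimal sketch of such a formula, assuming the consent checkbox sits in column E of a sheet named ‘Form Responses 1’ (the sheet name, column letters, and URL placeholder are my assumptions); FILTER returns only the consenting rows directly, which collapses the blank-rows-then-copy step into one:

```
=FILTER(
  IMPORTRANGE("<application-spreadsheet-url>", "Form Responses 1!A2:E"),
  IMPORTRANGE("<application-spreadsheet-url>", "Form Responses 1!E2:E") = TRUE
)
```

Placed in the combined spreadsheet, this would need IMPORTRANGE access to be granted once per source spreadsheet; one such formula per application form stacks the consenting responses into the combined sheet.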
2. OK, that is a great idea. Writing on the Forum can give a very detailed picture about the person’s actual interests. I suggested it via the EA Forum feature suggestion thread.
3. OK, let me take a bit of time to develop this. This can be added to the responses from the other applications. I argue for adding these in one row, maybe index matching by e-mail or another unique identifier. Rewriting responses that are edited can be taken care of by always keeping the same fields from the original form linked.
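As a sketch of the index matching, assuming each application’s responses live on their own sheet with the e-mail address in column A and one answer in column B (the sheet and column names are my assumptions), a lookup like this pulls one application’s answer into the combined row for the person whose e-mail is in A2:

```
=INDEX('Application B'!B:B, MATCH(A2, 'Application B'!A:A, 0))
```

MATCH with 0 finds the row of the exact e-mail match, and INDEX returns the corresponding response; repeating this per answer column assembles one row per person, and because the formula stays linked to the source sheet, later edits to responses propagate automatically.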
That makes sense; orgs should be able to add or modify questions, if anyone is bought in. The main bottleneck, in my perspective, is that there are not enough jobs that are interesting enough for candidates who do not have their own projects in mind (for example, being an assistant to a researcher may not be so appealing to someone who wants to advance megaprojects). This can be resolved by supporting independent projects that also develop expertise (maybe an ideal-project question can be added) and by more employee shadowing opportunities (cognizant of info hazards) (including for free or a small stipend). I could suggest this consideration when communicating with EA org employees.
Regarding modifying the agreement about application steps to allow applying to many orgs at once and just copying the responses: I agree. It is easier for everyone.
Yes, maybe the question can be whether interview responses should be included and, if so, in what form. I think that full interview recordings can be biasing, since the responses are tailored or pertinent to a specific job. Full interview notes can also be biasing, since they can pinpoint reasons for rejection for a particular role rather than describe the general skillset. One way to ‘protect’ candidates while providing value can be adding a skillset description and recommendations regarding applying to similar or different roles. So, for example, if someone fails a specific PA interview but has a PA skillset, then that is specified, and further, maybe a recommendation is to ‘demonstrate the ability to professionally answer many emails’ or ‘apply to be a PA of a farm animal welfare researcher.’
It was suggested to add the ‘looking for a job’ checkbox on the EA Forum (see MVP ideas 2.)
EA Common App Development Further Encouragement
yes, that is the thing: the culture in EA is key. Overall great intentions, cooperation, responsiveness to feedback, etc. (alongside EA principles) can go a long way. Well, ok, it can also be training in developing good ideas by building on the ongoing discourse: ‘you mean like if animals with relatively limited (apparent) cognitive capacity are in power, then AGI can never develop?’ or ‘well, machines do not need to love knowledge; they can feel indifferent or dislike it. Plus, machines do not need to recognize blue to achieve their objectives.’ This advances some thinking.
the quality of arguments, including those about crucial considerations, should be assessed on their merit in contributing to good idea development (impartially welfarist, unless something better is developed?).
yes, but the de-duplication is a real issue. With the current system, it seems to me that there are people thinking in very similar ways about doing the most good, so it is very inefficient.
ok, yes, it is 5^3 (if you exclude a ‘facilitator’) ... yes, although some events are for even more people.
Hm ... but filtering can be biasing/limiting innovation and motivating by fear rather than support (further limiting critical thinking)? This is why overall brainstorming while keeping EA-related ideas in mind can be better (even initial ideas (e.g. even those that are not cost-effective!) can be valuable, because they support the development of more optimal ideas). ‘Curation’ should be exercised as a form of internal complaint (e.g. if someone’s responsiveness to feedback is limited: ‘others are offering more cost-effective solutions and they are not engaging in a dialogue’). This could be prevented by great built-in feedback mechanism infrastructure (and addressed by some expert evaluation of ideas, such as via EA Funds, which already exists).
duplicative ideas should be identified, and even complementary ones. Then, people can 1) stop developing ideas that others have already developed and do something else, 2) work with others to develop these ideas further, or 3) work with others with similar ideas on projects.
The sound of the word “aptitude” suggests a directive, emphatic interaction.[1] The expression is used vaguely and inconsistently. Thus, readers may feel directed toward something that has been insufficiently explained. This can worsen their subjective experience.
Alternatively, a more cooperative-sounding word, such as capacity, can be used, with its meaning in the context clearly explained (such as knowledge, skills, and networks). This can better motivate readers to take steps toward developing their ability to safeguard a positive long-term future.
What body language or interpersonal interaction would you use when talking about aptitude? Versus capacity, ability, etc?
Thank you. I encourage you to:
1) Encourage authors of EA-related articles to make their work publicly accessible
2) Post summaries of relevant articles on the EA Forum to facilitate discussion without the need to register and to further ease the work of gardeners