I made a separate comment for my thoughts on worst-case scenarios, because I have a lot to say on the subject.
I imagine the worst-case scenario is something like an advisor giving radically bad career advice to numerous advisees based on idiosyncratic priorities or beliefs about their own field, after which those advisees waste significant amounts of their own time or money acting on that feedback when they could easily have spent those same resources better. Of course, that already happens in EA, so there already isn't enough quality control in EA for this kind of thing. That isn't to say I shouldn't try to ensure greater quality control in my own project, but it's important to know the pre-existing context in EA.
I should say one reason I haven't thought about the worst-case scenarios you've brought up so far is that I've taken for granted they're unlikely to occur. It seems obvious to me that people would tend to act in good faith if they were bothering to participate in this network, but even if they were to act in bad faith, anyone saying anything like what you've suggested would disqualify themselves (in my eyes, at least) from participating as an advisor, for the simple reason that none of those things have anything to do with careers.
If I include a survey, it should definitely be a feedback survey. I have intended to talk to 80,000 Hours to ask them how they set up career coaching, and that will inform how I develop this network too. In the feedback survey for advisees, I'll include a question about whether their coach did anything inappropriate, especially trying to push the conversation in a direction that had nothing to do with figuring out their careers. If a career coach recommended someone donate a kidney, invest in a dubious crypto startup, or try saving the world by taking a bunch of psychedelics, that would get flagged, and they would be removed from the pool of prospective advisors.
At the same time, effective altruists have written blog posts on the EA Forum about how to donate kidneys, or recommending that people do so. Getting recruited for weird projects can happen at EA events, including official ones like EAG events. I can definitely ask others how they've minimized the risk of strange things happening, yet a small risk of these adverse experiences persists all the same. I know the point you were making wasn't about these specific examples, but my point is that in EA there is already a small risk of things like this happening that is hard to eliminate. So I don't know why someone would single out a career advising network to exploit, or why this, of all things, would be likelier to produce viral headlines. It just seems so unlikely that I would feel strange introducing a quality control measure like having advisors click a box or sign a digital form affirming they were only doing career advising, and not scamming advisees or something.
Again, I will include a quality feedback survey, so anything like this should get caught.
I do take seriously concerns about possible sexual harassment. It seems less likely to happen over an online session, but I will ask other EA groups if there is anything I should do to minimize these kinds of risks in the advising network. That would also get included in a quality feedback survey. I'm unsure if I should include a separate question about sexual harassment. This is something I will definitely think a lot more about before I set up any in-person advising sessions. In general, there seems to be a lot more risk with in-person advising sessions, so I will take longer to develop quality control measures before I set those up. By count, at most 18⁄71 of the possible pairings I could make now would result in in-person advising sessions, and chances are the number of in-person sessions it would make sense to set up at this point is even lower still.