Hi Edo,

I definitely think I’ve received valuable feedback on my work on the EA Forum, as well as on LessWrong. This feedback came in the form of upvotes/downvotes, comments on my posts, and private messages/discussions that people had with me as a result of me having posted things.
It’s harder to say:
In what ways was that feedback valuable?
Precisely how valuable was it? How does that compare to other sources of feedback?
How did the value vary by various factors (e.g., EA Forum vs LessWrong, posts that are more like summaries vs posts that are more like “original research”)?
What proportion of that came from people in my existing network?
Some thoughts on those points follow. (But first I should flag that I think there are also good reasons to post to the EA Forum/LW other than to get feedback, including to share potentially useful ideas, to signal one’s skills/knowledge to aid in later job applications, and to make connections with other EAs; more on this in this other comment of mine.)
Note that all of the following relates to posts I made before joining Rethink, as I haven’t yet posted anything related to my work with Rethink.
Q1: Valuable in what ways?
Maybe the main way the feedback was useful was in helping me get an overall sense of how I was doing as an EA researcher, how I was doing as a macrostrategy researcher, and how valuable the kinds of work I was doing were, to inform whether to carry on with those things.
That said, I think votes and comments provided less useful feedback on these points than I’d have expected. That feedback basically just seemed to indicate “You’re probably neither a terrible fit for this nor an amazing wunderkind, but rather somewhere in the vast chasm in between.” Which I guess did narrow my uncertainty slightly, but not very much.
But there was one case in which my posts led to a more experienced researcher learning that I existed, perceiving me as having strong potential, and reaching out to me to chat, and I think that that conversation substantially informed my career plans. And since then I’ve had further conversations with that person that have also informed my career plans.
Another way the feedback was useful was via some comments on posts informing my specific choices about what to research or write about next, or what shape those next posts should take.
If I recall correctly, this happened a few times with my first series of posts (on moral uncertainty).
Some comments helped me determine what threads it’d be interesting/necessary to explore more (e.g., because people were still confused about those things).
I think that’d happen less often now, because now I’m better at writing posts in general and I have more opportunities for feedback pre-posting (e.g., from my colleagues at Rethink).
The best example was that this comment from MichaelStJules on a post of mine prompted me to make my database of existential risk estimates.
Maybe I would’ve eventually ended up making such a database anyway, but I don’t think I’d explicitly thought of doing so before seeing that comment.
I think that that database is probably in the top 5 most valuable things I’ve publicly posted this year (out of probably ~35 posts, if we exclude things like question posts and link posts). And I think it was more valuable than the post of mine which MichaelStJules commented on.
I think this is an interesting case, because making that database required no special skills (it was just a weirdly overlooked low-hanging fruit that anyone could’ve plucked already), and the relevant part of MichaelStJules’ comment was just one sentence, and they just gestured in the general direction of what I ended up doing rather than clearly outlining it. So it feels sort-of like this was an “easy win” that just required a space for some accidental public brainstorming.
The way the feedback was most often valuable, but which is less important than the above two things, was via helping me improve specific posts. I often edited posts in response to comments.
Finally, I imagine feedback sometimes helped me improve my research or writing style.
Off the top of my head, I can’t remember that happening. But maybe it happened early on and I’ve just forgotten.
Q2: How valuable (compared to other things)?
I’d probably describe how useful the feedback was by saying “Maybe less valuable than I’d have idealistically expected, but valuable enough to be a noticeable extra perk of posting publicly.”
I think the two most valuable sources of feedback for me in 2019 and 2020, which I think were much more valuable than feedback from the Forum, were (1) results from job applications, and (2) conversations with people who were further along in various career paths.
This is partly because what I needed most was an overall sense of which pathway I should be heading in.
But as noted above, my Forum posts did lead to one instance of (2) - i.e., conversations about my career plans with a more experienced person.
Regular sources of feedback on things I wrote on the Forum also probably tended to be somewhat less useful than the results from a survey I ran about the quality and impact of my writing on the Forum and LessWrong.
Q3: How did the value vary?
I think I probably got a similar amount of value per unit of feedback on the EA Forum and LessWrong.
But I think the useful conversation about my career plans mentioned above was prompted by my Forum rather than my LessWrong posts.
And feedback on LessWrong was less pleasant, on average (more often needlessly blunt or snarky—but still better than most of the internet, and the substance of what people were saying was still often useful).
Q4: What proportion was from my existing network?
I think almost all of the value came from people “outside of my existing network” (here meaning “people I hadn’t interacted with 1-1, though maybe I’d had public comment exchanges with them”).
This is probably partly because:
My network of EA researchers / Forum users / similar happened to be quite small at the start of this year
I wrote across a wide range of topics this year, so the set of people who’d be able to give useful input on something I wrote is quite wide and diverse, making it harder to have them all in my network and individually solicit their feedback
If people were already in my network (e.g. if they were coworkers), I’d be more likely to get feedback from them before/without posting to the Forum/LW
The first two of those points have become less true over time, so I imagine from now on I might tend to get a higher proportion of my feedback in ways that don’t require posting to the Forum.
Thanks! I found it very interesting that one of the most important kinds of feedback was on how you were doing as a researcher, and that the most important feedback came from the survey. I think that this probably applies widely and is a good reminder to interact well, especially with posts and people I appreciate (I think that I’ll try to send more PMs to people who I think are constantly writing well on the forum and may be under-appreciated).
Also, thinking on Q4, I think I might be worried that as people’s personal networks get larger and more skilled, they might post less publicly or post only material that is heavily polished.
Generally, though, it seems like you didn’t find engagement with the content itself very useful, which is about what I’d have guessed but unfortunate to hear.
(btw, reminding you to link to this comment from here)
Also, thinking on Q4, I think I might be worried that as people’s personal networks get larger and more skilled, they might post less publicly or post only material that is heavily polished.
Yeah. I think it’s great that people can build networks of people with relevant interests and expertise and get thoughtful feedback from those networks, but also a shame if that means that people don’t take the little bit of extra time to post work that’s already been done and written up.
I think that this sort of thing is why I wanted to say “But first I should flag that I think there are also good reasons to post to the EA Forum/LW other than to get feedback...”.
I plan to indefinitely continue posting publicly except in cases (which do exist) where there are specific reasons not to do so,[1] such as:
potential infohazards
the piece of writing is likely to be more polished and useful in future, so I’m deferring posting it till then
In cases where the work isn’t fully polished but the writer has no plans to ever polish it, I’d say it’s often worth posting anyway with some disclaimers, and letting others just decide for themselves whether to bother reading it
there are reasons to believe the work will confuse or mislead people more than it informs them (see also)
(Tangentially, I also feel like it’s a shame when people do post EA-relevant work publicly, but just post it on their personal blog or their organisation’s website or something, without also crossposting it to the Forum. It seems to me that that unnecessarily increases how hard it can be for people to find relevant info.)
[1] This sentence used to say “I plan to indefinitely continue posting publicly unless there are specific reasons not to do so, such as:” (emphasis added). That was more ambiguous, so I edited it.
I think the reason people don’t post stuff publicly isn’t laziness, but that there’s lots of downside risk, e.g. of someone misinterpreting you and getting upset, and not much upside relative to sharing in smaller circles.
(Just speaking for myself, as always)
I definitely agree that there are many cases where it does make sense not to post stuff publicly. I myself have a decent amount of work which I haven’t posted publicly. (I also wrote a small series of posts earlier this year on handling downside risks and information hazards, which I mention as an indication of my stance on this sort of thing.)
I also agree that laziness will probably rarely be a major reason why people don’t post things publicly (at least in cases where the thing is mostly written up already).
I definitely didn’t mean to imply that I believe that laziness is the main reason people don’t post things publicly, or that there are no good reasons to not post things publicly. But I can see how parts of my comment were ambiguous and could’ve been interpreted that way. I’ve now made one edit to slightly reduce ambiguity.
So you and I might actually have pretty similar stances here.
But I also think that a decent portion of the cases in which a person doesn’t post publicly may fit one of the following descriptions:
The person sincerely believes there are good reasons to not post publicly, but they’re mistaken.
But I also think there are times when people sincerely believe they should post something publicly, and then do, even though really they shouldn’t have (e.g., for reasons related to infohazards or the unilateralist’s curse).
I’m not sure if people err in one direction more often than the other, and it’s probably more useful to think about things case by case.
The person overestimates the risks that posting publicly poses to their own reputation, or (considered from a purely altruistic perspective) overweights risks to their own reputation relative to potential benefits to others/the world (basically because the benefits are mostly externalities while the risks aren’t).
That said, risks to individual EA-aligned researchers’ reputations could be significant from an altruistic perspective, depending on the case.
Also, I don’t want to be judgemental about this, or imply that people are obligated to be selfless in this arena. It’s more like it’d be nice if they were more selfless (when this is the situation at hand), but understandable if they aren’t, because we’re only human.
It’s simply that the person’s default is to not post publicly, and the person doesn’t actively think about whether to post, or doesn’t have enough pushing them towards doing so.
So it’s more out of something like inertia than out of weighing perceived costs and benefits.
Posting publicly would take up too much time (for further writing, editing, formatting, etc.) to be worthwhile, not because of laziness but because of other things worth prioritizing.
None of those cases primarily centre on laziness, and I wouldn’t want to be judgemental towards any of those people. But in the first three cases, it might be better if the person was nudged towards posting publicly.
(And again, to be clear, I do also think there are cases in which one shouldn’t post publicly.)
Does this roughly align with your views?
I didn’t mean to imply that laziness was the main part of your reply, I was more pointing to “high personal costs of public posting” as an important dynamic that was left out of your list. I’d guess that we probably disagree about how high those are / how much effort it takes to mitigate them, and about how reasonable it is to expect people to be selfless in this regard, but I don’t think we disagree on the overall list of considerations.
I think that this probably applies widely and is a good reminder to interact well, especially with posts and people I appreciate (I think that I’ll try to send more PMs to people who I think are constantly writing well on the forum and may be under-appreciated).
Yeah, that sounds to me like it could be handy!
It also would’ve been useful (or at least comforting) if I’d known that, if I was doing badly and seemed to be a bad fit, I’d get a clear indication of that. (It’d obviously suck to hear it, but then I could move on to other pursuits.) Otherwise it felt hard to update in either direction. But I think it’s much easier and less risky to just make it more likely that people would get clear indications when they are doing well than when they aren’t, for a wide range of reasons (including that even people who are capable of being great at something might not clearly display that capability right away).
Generally, though, it seems like you didn’t find engagement with the content itself very useful, which is about what I’d have guessed but unfortunate to hear.
I think I agree with what you mean, but that this phrasing might give someone the wrong impression. I definitely appreciated the engagement that did occur, and often found it useful. The problems were more that:
Often there just wasn’t much engagement. Maybe like some upvotes, 0-1 downvotes, 0-4 short comments.
It’s very hard to distinguish “These 3 positive comments are from the 3 out of (let’s say) 25 readers who had an unusually positive opinion about this or want to be welcoming, and the others thought this sort-of sucked but couldn’t be bothered saying so or didn’t want to be mean” from “These 3 positive comments are totally sincere, and the other (let’s say) 22 readers also thought this was great but didn’t bother commenting or felt it’d be weird to just comment ‘this is great!’ without saying more”
And that’s not the fault of those 3 commenters. And it would feel harsh to say it’s the fault of the (perhaps imagined) other 22 readers either.
(btw, reminding you to link to this comment from here)
Thanks! Done.
It seems worth emphasising here that, before 2020:
I’d only done ~0.5 FTE years of research, and it was in an area and methodology that’s not very relevant to what I’m doing now
I hadn’t started my “EA-aligned career”
(More on this in this comment)
Therefore, for most of this year I’ve seen myself as more in “explore” than “exploit” mode.
As I gradually move more towards the “exploit” end of that continuum, I’d guess that:
I’ll have less need of feedback that just gives me an overall sense of whether I’m a good fit for X
The value of feedback that improves a given piece of work (e.g., points out mistakes or angles that should be explored more or clarified) will rise, because the direct value of the individual pieces of work I’m doing is higher
This reminds me of some education researchers emphasising that the purpose of feedback in the context of high schools is to improve the student, not the piece of work. This makes sense, because the essay a 15-year-old wrote isn’t going to affect any important decisions, but the 15-year-old may later do useful things, and has a lot to learn in order to do so.
But in other contexts, a given piece of writing may be likely to influence important decisions, and the writer may already be more experienced. In those cases, it might make sense for feedback to focus on improving the piece of writing rather than the writer.