Effective Altruism Policy Analytics Update
If this is your first time reading about Effective Altruism Policy Analytics, you can look at our website here for more information: https://eapolicy.wordpress.com/
Effective Altruism Policy Analytics has now been working for eight weeks and is more than halfway through its experimental period. Our overall strategy for finding regulations to comment on has been refined with changes to how we seek feedback, but for the most part we are continuing our initial strategy with more skill and fewer distractions.
Our current policy comment process typically proceeds as follows:
Find proposed rules on Regulations.gov
Select rules to comment on after considering difficulty, replaceability, feedback, and potential impact
Conduct research on:
How the regulation currently works, compared to the proposed rule
What powers the agency has over the rule, and how much control it has over the issue area
What useful information or support the regulatory agency needs
Which experts can assist us
Which papers we can learn from and cite
What the best changes to make are, and their expected benefits
Draft a comment, while seeking assistance, verification, and feedback
Revise the comment based on feedback
Submit the final comment
Send feedback surveys and request further feedback
Biggest changes we have made:
Increased specialization and division of labor among team members—Having project members focus their attention on one regulation at a time, and using interns to gather feedback and handle distracting tasks, has saved us a lot of time and allowed us to increase comment length and quality. By going more in-depth, we can address more issues and raise more topics for agencies to consider.
Changing our feedback system, spending less time contacting more people—When we started the project, it became apparent that waiting for government feedback takes a long time (sometimes over a year), and that final rules guaranteeing fast feedback only make such guarantees because they have already been thoroughly researched and considered. We attempted to use expert feedback as a mechanism for improvement; however, our initial attempts involved many back-and-forth emails that yielded little useful information, despite iterating on our methods. After assigning an intern to work exclusively on this problem for roughly a week, we cast a wide net of emails to many contacts and started getting plenty of useful feedback. We still face difficulty convincing respondents to fill out our quantitative surveys, as they opt instead to write their own qualitative opinions of our comments.
Focusing more on producing comments than on other efforts—Throughout the project, we encountered many tasks that we tried to tackle, like registering as lobbyists, increasing our transparency, and obtaining feedback. Many of these ideas were not immediately actionable, and some were fairly time-consuming. Moreover, many of them were less important than simply measuring the effectiveness of comments. This project is an experiment: the larger our sample size, the more useful information we gain about making policy comments. Given how generalized the project already is, it is likely more valuable to focus on the goal of our experiment: determining whether policy comments are a cost-effective way for effective altruists to create policy change.
Focusing more on increasing the number of points we make in comments—Regulatory agencies address points separately, so comments with more points receive more feedback. As research time is costly, diving deeper into a policy we have already become familiar with often has higher returns than searching for entirely new things to comment on.
Progress:
We have been producing about one policy comment per week, half as many as we initially estimated. Our initial estimates were based on earlier comments submitted in the fall, and in retrospect it is easy to identify reasons why those comments were produced faster:
We had full-time access to experts—Having Richard Bruns and Matt Dahlhausen free to work in 8-hour blocks on weekends allowed us to solve various knowledge problems very quickly. Matt had specialized knowledge of relevant aspects of both proposed rules, and Richard is very good at finding new ways to produce economic effect estimates when the relevant data are not available.
There was unaccounted-for research time—Matt Dahlhausen had done some of his own research before we worked on comments in the fall. One of our most time-consuming current activities is researching regulations to comment on and figuring out whether a comment is justified. Many times we have put hours of research and work into producing a comment only to find that a desired change was going to happen anyway, that a change could not be made for legal reasons, or that an apparent mistake appeared only on Regulations.gov and not in the official proposed rule.
There was unaccounted-for passive work—Hours worked were stretched out over multiple weekends rather than all at once. This let us sleep on ideas longer and bring them up in discussions at DC Effective Altruism meetups. Having this much passive time allowed us to work very quickly once we had a formal meeting.