What sort of projects are you envisioning? AI research labs where there is a 50/50 chance as to whether they end up caring about AI safety? Retroactive funding means that one has the ability to assess the past impact of a particular project in a particular domain and then give out grants through quadratic voting. The ability to look at a project's impact in the past would help with setting priors for how likely something is to be harmful in the future. If a project has the potential to be incredibly harmful, this should be weighed up by the badge holders who vote, and fewer (or no) votes should be assigned to it, depending on the probability and severity of its potential negative impacts in the future.
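To make the quadratic-voting part concrete, here is a minimal sketch of how badge holders' credits could translate into grant allocations, assuming the textbook quadratic-voting rule (buying n votes costs n² credits, so the effective weight is the square root of credits spent). The function names, allocation rule and numbers are purely illustrative and not necessarily how Retrox implements it:

```python
from math import sqrt

def quadratic_weight(credits_spent: float) -> float:
    """Textbook quadratic-voting rule: n votes cost n^2 credits,
    so effective vote weight is the square root of credits spent."""
    return sqrt(credits_spent)

def allocate_grants(ballots: dict[str, list[float]], pool: float) -> dict[str, float]:
    """Split a funding pool in proportion to each project's total quadratic weight.

    ballots maps project name -> credits each badge holder spent on that project.
    (Hypothetical helper; Retrox's actual allocation logic may differ.)
    """
    weights = {p: sum(quadratic_weight(c) for c in credits) for p, credits in ballots.items()}
    total = sum(weights.values())
    return {p: pool * w / total for p, w in weights.items()}

# Illustrative numbers only.
ballots = {
    "vaccine-project": [25, 16, 9],   # three voters spending 25, 16 and 9 credits
    "ai-safety-lab":   [100, 4],      # two voters spending 100 and 4 credits
}
print(allocate_grants(ballots, pool=10_000))
```

One voter spending 100 credits gets weight 10 rather than 100, which is the usual argument for quadratic voting: it dampens the influence of a single enthusiastic (or conflicted) badge holder relative to broader agreement.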
From a practical standpoint, the continuous stream of funds extending well into the future can be stopped by the expert voters if the project is deemed harmful. As it stands, the Retrox platform does not have any built-in logic that prevents a project from being funded in the first place, but this is something which needs to be carefully considered and weighed up by those who vote on where the funds are allocated. I think this is where a lot of the “heavy lifting” is done, and more careful consideration of who should be eligible to vote is perhaps required. Maybe you have some interesting ideas. Ideally you'd have an immutable and accessible record of people's qualifications, skills and past experiences which would allow one to pick out the right candidates.
Another idea would be to have a consensus mechanism among the expert voters which would allow projects to be “blacklisted”, i.e. blocked from being funded at all, if the risk of them causing extreme harm is considered too great.
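A minimal sketch of what such a mechanism could look like, assuming a simple supermajority rule among badge holders. The threshold, names and data structure are hypothetical and not part of the current Retrox platform:

```python
from dataclasses import dataclass

# Hypothetical supermajority threshold; Retrox defines no such rule today.
BLACKLIST_THRESHOLD = 2 / 3

@dataclass
class BlacklistVote:
    badge_holder: str
    block: bool  # True = this voter considers the risk of extreme harm too great

def is_blacklisted(votes: list[BlacklistVote]) -> bool:
    """A project is blocked from funding if at least a supermajority of
    badge holders vote to block it."""
    if not votes:
        return False
    blocking = sum(v.block for v in votes)
    return blocking / len(votes) >= BLACKLIST_THRESHOLD

votes = [
    BlacklistVote("alice", True),
    BlacklistVote("bob", True),
    BlacklistVote("carol", False),
]
print(is_blacklisted(votes))  # True: 2/3 of badge holders want the project blocked
```

Whether the threshold should be a simple majority, a supermajority, or something weighted by expertise is exactly the kind of question the eligible-voter discussion above has to settle.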
Suppose Alice is working on a dangerous project that involves engineering a virus for the purpose of developing new vaccines. Fortunately, the dangerous stage of the project is completed successfully (the new virus is exterminated before it has a chance to leak), and now we have new vaccines that are extremely beneficial. At this point, observing that the project had a huge positive impact, will Retrox retroactively fund the project?
That makes more sense now. Nothing inherent to the Retrox platform would prevent this if the expert badge holders agree to vote for the retroactive funding of the risky viral engineering project.
The fact that severe risks had to be taken should be factored into the assignment of the votes, i.e. into how the value was created. Incentivizing more high-risk behaviour with potentially extremely harmful impacts is undesirable. Retroactively funding a project of this nature would set a precedent for the types of projects which are funded in the future, which I think would probably not lead to a Pareto-preferred future. The expected value trade-off would be something like: the value added for humanity by financially supporting a successful but risky viral engineering project vs the potential harm induced by incentivizing more people to pursue high-risk endeavours in the future. I think the latter outweighs the former, hence my previous hunch.
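To show how that comparison might be run, here is a rough expected-value sketch. Every number below is a made-up placeholder (not an estimate of anything real); the point is only the shape of the calculation:

```python
# Rough expected-value sketch of the trade-off above; all numbers are placeholders.

# One-off benefit of retroactively funding the successful vaccine project
# (in arbitrary "value" units).
direct_benefit = 1_000

# Harm channel: the funding precedent encourages more high-risk projects.
extra_risky_projects = 10          # additional projects incentivized by the precedent
p_catastrophe_per_project = 0.02   # chance each one causes extreme harm
harm_if_catastrophe = 100_000      # harm from a single catastrophic outcome

expected_precedent_harm = (
    extra_risky_projects * p_catastrophe_per_project * harm_if_catastrophe
)

print(f"direct benefit:          {direct_benefit}")
print(f"expected precedent harm: {expected_precedent_harm}")
print(f"fund retroactively?      {direct_benefit > expected_precedent_harm}")
```

With these placeholder numbers the expected precedent harm (20,000) dwarfs the direct benefit (1,000), which is the intuition behind the hunch above; with different assumptions the comparison could of course flip.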