miller-max
Hi again!
A quick heads-up to say we’re extending the survey until 23:59 CET on 15 December, in case you can find the time before then.
Thanks again
Open call: AI Act Standard for Dev. Phase Risk Assessment
Hi everyone,
I wasn’t sure whether this deserved a full post (Topics: AI, governance, survey):
We are inviting you to take part in a research study that Pour Demain and the Vrije Universiteit Amsterdam are conducting on development-phase risk assessments for AI systems. The link to the study is here. The survey will close on 12 December 2023 at 23:59 CET. You can pick a time here to do the survey so you don’t have to remember it or use up your mental RAM (this creates a calendar event for you).
About:
As you may know, a majority of the experts recently surveyed by Schuett et al. indicated that conducting a development-phase risk assessment is an important practice for AI labs.
Our study aims to better understand how development-phase risk assessments could be carried out in practice and what they would reveal. This will inform Pour Demain’s contribution to standard setting for the EU AI Act.
The study involves a creative exercise of up to 30 minutes to identify and analyze potential risks. To keep the time requirement down, we recommend doing it with a keyboard or voice typing. There are no right or wrong answers; the goal is simply to generate ideas and think through different risk scenarios.
For every participant, we’re donating to charity (and donating more the longer you spend, so your time doesn’t go unnoticed!).
We greatly appreciate you taking the time to support this research. We would further be very grateful if you would forward this survey to three other experts in your network who you think have something to contribute on the topic.
Hi James, thanks for opening this up for feedback!
This is a tough one; overall it looks good!
My general point of feedback would be to be more cause-agnostic OR to put greater emphasis on “priorities research”. For example, I’d suggest making a fifth of the content about priorities research and promoting it to a category of its own, as in the breakdown below (the -5/-5/-10 taken from the other categories adds up to the new 20% slice).
My reasoning is that cause areas and meta already have their own communities and conferences, whereas priorities research largely does not, and priorities research most holistically represents EA’s mission of working out where to allocate resources to do the most good. Then again, I haven’t done the thinking you have behind these weights!
It may be worth running a survey with 1-100 scales?
- Neartermist 30% (-5)
  - Global Health & Dev 35%
  - Animal welfare 60%
  - Mental health 5%
- Longtermist 40% (-5)
  - AI risk 50%
  - Biosec 30%
  - Nuclear 10%
  - General longtermist 5%
  - Climate change 5%
- Priorities research 20%
- Meta 10% (-10)
  - Priorities research 5%
  - Entrepreneurship skills 85%
  - Community building 5%
  - Effective giving 5%
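For what it’s worth, here’s a minimal Python sketch of how these weights would combine, assuming each sub-percentage is read as a share of its parent category (so e.g. Animal welfare would be 60% of the 30% neartermist slice, i.e. 18% of total content):

```python
# Proposed weights from the breakdown above. Assumption: each sub-percentage
# is a share of its parent category, not of the total.
proposed = {
    "Neartermist": (30, {"Global Health & Dev": 35, "Animal welfare": 60,
                         "Mental health": 5}),
    "Longtermist": (40, {"AI risk": 50, "Biosec": 30, "Nuclear": 10,
                         "General longtermist": 5, "Climate change": 5}),
    "Priorities research": (20, {}),
    "Meta": (10, {"Priorities research": 5, "Entrepreneurship skills": 85,
                  "Community building": 5, "Effective giving": 5}),
}

# Sanity checks: top-level weights and each sub-list sum to 100.
assert sum(weight for weight, _ in proposed.values()) == 100
for _, (_, subareas) in proposed.items():
    assert not subareas or sum(subareas.values()) == 100

# Effective share of total content per sub-area,
# e.g. Animal welfare: 30 * 60 / 100 = 18% of total.
for category, (weight, subareas) in proposed.items():
    for name, share in subareas.items():
        print(f"{category} / {name}: {weight * share / 100:.1f}% of total")
```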