That’s a very interesting project. I’d be very curious to see the finished product. That has become a frequently discussed aspect of AI safety. One member of my panel is a strong advocate for the importance of AI risk issues, while another is quite skeptical and reacts quite negatively to any discussion that so much as approaches the A*I word (“quite” may be putting it mildly).
But concerning policy communication, I think those are important issues to understand and pin down. The variance is certainly strange.
Side note: As a first-time poster, looking at your project I realized I failed to include a TL;DR and a summary of the expected output on mine. I’ll try to edit it in, or do so on my next post, I suppose.