I help mission-driven organizations share their value in a way that resonates—with their communities, partners, and funders. Whether it’s through strategic communications, stakeholder engagement, or facilitation, I bring clarity and connection to complex environments.
Over the years, I’ve worked with organizations in research, advanced technology, and innovation—often in spaces where the audiences are diverse and the challenges are layered. My role is to make sure people not only understand the message, but also feel engaged, respected, and part of the solution.
I wanted to leave both the “changed my mind” and “made me laugh” emojis! :)
Originally, I didn’t like the idea that AIs should help everyone equally, including potentially terrorists or other bad actors.
While that seems problematic, I guess it would avoid having to make moral judgments about people at all, which would likely lead to good outcomes overall. For everyone.
What I get from this piece is something that keeps coming up: AI is “just” teaching us about ourselves. (In quotes because it’s no small feat.) Which may just be my confirmation bias, but there are many signals here. And if that is true, does that mean that the answer to the threats from AI—the pathway to AI safety and governance—may have much to do with how we deal with human threats?
I recognize we don’t program humans in the same way, but our culture DOES train us to think and act in certain ways. And certain factors do incentivize us to act outside of those norms. And we are all, essentially, black boxes with these advanced computational brains of ours.
If that’s true, to take it further in a positive direction: Does that mean it will be EASIER to deal with the threats, because the changes can be programmed so much faster? With humans, it can take generations of cultural shift to change norms.
Lots of counterarguments I can think of. But just a curious thought to mull over.