I'd like to more fully understand why you've made this a for-profit company instead of a charity. From your other post:
If we believe we can commercialize a successful sub-project responsibly (without differentially enhancing AI capabilities), it will be incorporated into our product and marketed to potential adopters (e.g. tech companies meeting regulatory requirements for fairness, robustness, etc.).
Are there other roads to profit that you're considering? Is this the main one? How much does the success of this approach (or others) hinge on governments adopting particular legislation or applying particular regulations? In other words, if governments don't regulate the thing you're solving, why would companies still buy your product?
No worries if you don't want to say much at this time. I'm excited for this project regardless; it seems like a novel and promising approach!
Thanks for the great questions, Sawyer!

I'd like to more fully understand why you've made this a for-profit company instead of a charity.
When Stuart and I were collaborating on AI safety research, I'd occasionally ask him, "So what's the plan for getting alignment research incorporated into AIs being built, once we have it?" He'd answer that DeepMind, OpenAI, etc. would build it in. Then I'd say, "But what about everybody else?" Aligned AI is our answer to that question.
We also want to be able to bring together a brilliant, substantial team to work on these problems. A lot of brilliant minds choose the earning-to-give route, and we think it would be fantastic to be a place where people can go that route and still work at an aligned organisation.
Are there other roads to profit that you're considering? Is this the main one? How much does the success of this approach (or others) hinge on governments adopting particular legislation or applying particular regulations? In other words, if governments don't regulate the thing you're solving, why would companies still buy your product?
The "etc" here doesn't refer just to "other regulations", but also to "other ways that unsafe AI causes costs and risks to companies".
I like to use the analogy of CAD (computer-aided design) software for building skyscrapers and bridges. It's useful even without regulations, because engineers like building skyscrapers and bridges that don't fall down. We can be useful in the same sort of way for AI (companies like profit, but they also like reducing expenses, such as the costs of PR and settlements when things go wrong).
We're starting with research: developing the AI equivalent of the principles civil engineers use to build taller safe skyscrapers and longer safe bridges, which we'll then build into our CAD-analogous product.