To quote Warren G (or maybe it should be Young Guns): "…Regulators!! Mount up."
Just when things start getting interesting, corporations remove safeguards that protect society.
On Tuesday, news broke that Microsoft had laid off the AI ethics team responsible for ensuring its AI-powered products ship with safeguards that protect society. The cut came as part of Microsoft's broader reduction in force of 10,000 employees; the ethics team got lumped in with the rest, and time will tell how badly it was needed.
What Did This Team Do?
The AI ethics and society team was responsible for reducing the risk that OpenAI's technologies posed to Microsoft's products. The team authored a Responsible Innovation Toolkit that helped engineers understand the potential harm AI could inflict and find ways to reduce it.
Microsoft described the team as "a trailblazing" group within the company, one that was building engineers' moral imagination for a more ethical future. The group worked in tandem with Microsoft's Office of Responsible AI. Other ethics-focused teams will remain active at Microsoft: the Aether Committee and Responsible AI in Engineering will carry on the work.
A Disturbing Trend
AI is experiencing tremendous growth and press coverage. Companies are rushing full speed ahead, adopting large language models and incorporating computer vision into groundbreaking products. So why are companies firing their ethics groups just as the technology takes off?
Timnit Gebru, Margaret Mitchell, and now Mira Lane have been ousted by mega-corporations after being hired to bring ethics to companies pioneering AI. Each of these firings involved a high-profile thought leader in AI ethics, someone who served as a gut check to keep new technologies from harming society.
All of these leaders were women, and two were minorities voicing concerns about the dangers these technologies may bring. Our Editor-in-Chief called them "the hand on the shoulder to slow engineers down just a little, to slow down and think about the possible downfalls these technologies may bring."
It's no coincidence that AI ethics teams are being cut at the same time DEI initiatives are being cut.
Is Regulation Needed?
These firings come on the heels of GPT-4's preview. GPT-3.5's late-2022 outing as ChatGPT led to a glut of new toys and chatbots that ran amok. Microsoft's OpenAI-powered search infamously insulted a grad student when it got defensive about its own shortcomings. Google's Bard spectacularly failed after being rushed out amidst all the GPT hype. And Teslas still have unexplained accidents.
AI is a great technology, but it can also be a harmful one. OpenAI has publicly admitted its models show bias against minorities and women. It has also confirmed that GPT-4 retains some of these same flaws, despite the company hiring more people to remove bias from its models.
We're watching companies punt away their responsibility to tame their own products. Talk to experts and you'll hear that there is still a lot of unexplained behavior in these technologies. Meanwhile, software developers are rushing to build a new generation of autonomous agents that can take your commands and execute them without supervision. We should get ahead of these companies and begin crafting frameworks, and perhaps laws, to limit the harm AI technologies may inflict.
-MJ