OpenAI, Google, Microsoft not transparent with their AI models: Stanford

The first step to understanding an AI chatbot is understanding what goes into its training. Stanford researchers have called out the ‘big three’ – OpenAI, Google and Microsoft – for a lack of transparency with their AI chatbots:

Major artificial intelligence (AI) foundation model developers need to be more transparent with how they train their models and their impact on society, says a new report from Stanford University.

Through Stanford Human-Centered Artificial Intelligence (HAI), the California-based university says that these prominent AI companies are becoming less transparent as their models become more powerful.

“It is clear over the last three years that transparency is on the decline while capability is going through the roof. We view this as highly problematic because we’ve seen in other areas like social media that when transparency goes down, bad things can happen as a result,” said Stanford professor Percy Liang.

Transparency in an emerging technology is vital to ensure safety. Without transparency, everything is simply less knowable, and potentially more dangerous.

Read the article here.