
Google Photos AI Unable to Accurately Identify Gorillas: What Went Wrong?

Google Photos is a popular photo storage and sharing platform that uses artificial intelligence (AI) to automatically categorize and label images. However, in 2015, the platform faced a major backlash when its AI mislabeled photos of Black people as gorillas. The incident raised questions about the limitations of AI and the biases that can be built into these systems.

What Happened?

The incident came to light in June 2015, when software developer Jacky Alciné discovered that Google Photos had automatically grouped photos of him and a friend, both of whom are Black, under the label "Gorillas." He posted screenshots of the error publicly, and the story quickly spread.

The error stemmed from the AI system's image recognition model. The model had been trained on a large dataset of images that under-represented people with darker skin tones, so the features it learned could not reliably distinguish those faces from other visually similar categories. As a result, the system mislabeled photos of Black people as gorillas.
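To see how an unrepresentative training set can produce this kind of error, here is a minimal sketch using a toy nearest-centroid classifier. The 2D feature vectors stand in for image embeddings, and the class names and numbers are purely illustrative assumptions, not anything from Google's actual system: when class "B" is only sampled from a narrow sub-region, a genuine "B" example from the missed region lands closer to the wrong class.

```python
# Toy demonstration (hypothetical data): a nearest-centroid classifier
# trained on an incomplete dataset misclassifies examples from the
# under-sampled part of a class's true range.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def nearest_centroid(x, centroids):
    """Return the label whose class centroid is closest to x."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Class "A" is sampled broadly; class "B" only from one narrow corner
# of its true range -- an unrepresentative training set.
train = {
    "A": [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)],
    "B": [(4.0, 4.0), (4.2, 4.1)],   # under-sampled class
}
centroids = {label: centroid(pts) for label, pts in train.items()}

# A genuine "B" example from the region the training set missed is
# closer to A's centroid, so it gets the wrong label.
print(nearest_centroid((2.0, 2.0), centroids))  # prints "A"
```

The fix in this toy setting is the same as the article's point at full scale: sample the under-represented class across its whole range, and the centroid moves to where it belongs.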

Google’s Response

Google quickly apologized for the incident and took steps to address it. As an immediate stopgap, the company blocked the "gorilla" label entirely, so the system would neither apply it to images nor return results for it in search, and promised to improve its image recognition models to prevent similar errors in the future.
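That stopgap can be sketched as a simple output filter: rather than retraining the model, suppress a blocklist of sensitive labels before predictions reach users. The function name, label set, and data below are illustrative assumptions, not Google's actual code.

```python
# Hypothetical sketch of a label blocklist applied to classifier output.
# The blocked set and prediction format are illustrative assumptions.

BLOCKED_LABELS = {"gorilla"}

def filter_labels(predictions):
    """Drop blocklisted labels from a list of (label, confidence) pairs."""
    return [(label, conf) for label, conf in predictions
            if label.lower() not in BLOCKED_LABELS]

raw = [("gorilla", 0.91), ("outdoors", 0.74), ("smiling", 0.65)]
print(filter_labels(raw))  # [('outdoors', 0.74), ('smiling', 0.65)]
```

A blocklist like this removes the harmful output but not the underlying model flaw, which is why the article treats it as a stopgap rather than a fix.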

In addition, Google hired more diverse teams of engineers and data scientists to work on its AI systems. The company also launched an initiative called “AI for Social Good” to promote the responsible development and use of AI technology.

Lessons Learned

The incident highlights the limitations of AI and the importance of diversity in both data and teams. An AI system is only as good as the data it is trained on; if that data is biased or incomplete, the system will produce biased or incomplete results.

To prevent similar incidents in the future, companies must ensure that their AI systems are trained on diverse datasets that include a wide range of examples. They must also hire diverse teams of engineers and data scientists who can bring different perspectives and experiences to the development process.
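One concrete way to act on that advice is to audit a dataset's class balance before training. The sketch below counts examples per class and flags any class whose share falls below a threshold; the threshold, labels, and counts are illustrative assumptions, not any specific pipeline's values.

```python
# Minimal sketch of a pre-training dataset audit: flag classes whose
# share of the dataset falls below a coverage threshold. Threshold and
# label names are illustrative assumptions.

from collections import Counter

def audit_class_balance(labels, min_fraction=0.05):
    """Return classes whose fraction of the dataset is below min_fraction."""
    counts = Counter(labels)
    total = len(labels)
    return sorted(c for c, n in counts.items() if n / total < min_fraction)

labels = ["person"] * 900 + ["dog"] * 80 + ["gorilla"] * 20
print(audit_class_balance(labels))  # prints ['gorilla']
```

A check like this catches only gross imbalance in label counts; representativeness within a class (skin tones, lighting, poses) needs deeper auditing, which is where diverse teams and review processes come in.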


The incident with Google Photos and gorillas was a wake-up call for the AI industry. It showed that even the most advanced AI systems can make mistakes and that biases can be built into these systems if they are not developed and tested properly. However, it also demonstrated the importance of taking responsibility for these mistakes and taking steps to address them. By learning from this incident, we can work towards creating more responsible and inclusive AI systems that benefit everyone.


Author Profile

Plato Data