CEO Sundar Pichai told employees on Tuesday that Google is actively addressing problems with its Gemini AI, acknowledging instances of “biased” and “completely unacceptable” text and image responses. Following public backlash over historical inaccuracies, the company last week paused the tool’s ability to generate images of people. The decision came after screenshots circulated on social media showing the AI producing historically inaccurate or offensive images, including US senators from the 1800s depicted as racially diverse and a Black woman wearing a topknot.
The images Gemini generated were created using data from Wikipedia and other sources. Google said it would update its training datasets and work to train the model to make better choices, but the changes could take weeks to show up in the app’s output. In the meantime, users of the app who want to generate images of people will have to rely on other Google services, including its namesake AI chatbot and its namesake photo search service, both of which still return a wide range of results when prompted for images of specific types of people.
In a blog post, Google explained that it had been “stuck in the trap” of over-correcting for the failures of earlier AI image generators, such as violent or sexually explicit output. As a result, the company said, the Gemini model became overly cautious and interpreted some “anodyne prompts” as sensitive.
The company also argued that the broader harms of image-generating AI models must be adequately addressed, and that these sensitivity problems will require more than a technical fix. Previous studies have found that many AI image generators amplify the racial and gender stereotypes present in their training data.
For example, research published by MIT researchers in 2020 found that image-generating models trained on Wikipedia data were more likely to show lighter-skinned people and female subjects when asked to generate an image of a person. Gemini’s missteps drew criticism from some in the tech industry, including Elon Musk, who runs a competing AI chatbot as part of his X platform.
In the employee note, Pichai acknowledged “some tough questions about bias in AI” and said he had ordered the company’s teams “to be very thoughtful about these issues.” But he added that while the team can’t promise Gemini will never generate “embarrassing or inaccurate or offensive results,” it will keep working on the problem until it is resolved. The company will also evaluate how the AI models it uses perform in other parts of its business.