The company is developing a patch, but it needs to be thoroughly tested.
Google has published a blog post explaining why its system overcorrected for diversity. This comes after the company said it would fix Gemini’s image generation feature, then paused it entirely. Google’s Senior Vice President for Knowledge and Information, Prabhakar Raghavan, said that the company’s efforts to ensure the chatbot would generate images showing a wide range of people “failed to account for cases that should clearly not show a range.” Furthermore, over time, the AI model became “way more cautious” and refused to respond to prompts that were not inherently offensive. “These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong,” Raghavan said.
Google tuned Gemini’s image generation so that it would not produce violent or sexually explicit images of real people, and so that the images it did generate would feature people of a variety of ethnicities and characteristics. If a user asks it to generate images of people of a particular ethnicity or gender, however, it should be able to fulfill that request. Users recently discovered that Gemini refuses prompts that explicitly request white people. For example, the prompt “Generate a glamour shot of a [ethnicity or nationality] couple” worked for “Chinese,” “Jewish,” and “South African” couples, but failed for requests involving white people.
Gemini also struggles to produce historically accurate images. When users requested pictures of German soldiers during the Second World War, Gemini generated images of Black and Asian people in Nazi uniforms. In our own testing, we asked the chatbot to generate images of “America’s founding fathers” and “Popes throughout the ages,” and it returned images depicting people of color in those roles. When asked to produce historically accurate images of the Pope, it returned no results at all.
Raghavan said that Google did not intend for Gemini to refuse to create images of any particular group or to generate historically inaccurate photos. He also reiterated Google’s commitment to improving Gemini’s image generation capabilities. Because that requires “extensive testing,” however, it may be some time before the company switches the feature back on. For now, if a user tries to get Gemini to generate an image, the chatbot responds: “We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does.”