Uncomfortable truths about using Google Gemini
Google’s Gemini has seen rapid adoption over the past two years, driven largely by its integration across popular Google products such as AI Mode in Search, Gemini Live in Android Auto, and its availability across multiple platforms. The assistant is capable of handling tasks ranging from coding and problem-solving to image generation.
Despite its growing presence, Gemini comes with several limitations that users should be aware of. Like other generative AI tools, it is prone to inaccuracies, raises privacy concerns, and has faced criticism over bias-related issues.
Privacy concerns
Google’s Gemini privacy policy states that the company collects user prompts, including text and voice inputs, along with files, images, videos, and screen content shared with the assistant. Device-related data is also gathered when users interact with the Gemini app.
Google says the vast majority of this data is processed automatically, but some of it may be examined by human reviewers employed by Google or by third-party contractors. Although Google attempts to anonymise the data before review, sensitive details shared with Gemini could still be seen by those reviewers.
Users can opt out of this data collection for future conversations, though Google still retains data for up to 72 hours even after the opt-out takes effect. Past conversations that have already been selected for human review can be kept for up to three years, even if the user deletes them.
Risk of inaccurate responses
Gemini's answers are not always reliable. Google itself includes a disclaimer warning that Gemini can make mistakes, even when it presents incorrect information with confidence.
These errors, often referred to as AI hallucinations, have ranged from suggesting glue to keep pizza toppings in place to recommending eating stones for sustenance. There is no foolproof way to prevent them, so responses should be independently verified whenever Gemini is used for fact-checking or research.
Bias and overcorrection issues
In an effort to counter racial and gender bias, Gemini has at times appeared to overcorrect. Most notably, in 2024 its image generator depicted historical European figures with racially diverse characteristics.
Google apologised and updated the software to resolve the issue. The incident nonetheless showed that efforts to remove bias can themselves skew results, a risk that may recur as the software continues to be updated.
