This week, Google introduced its new flagship generative AI model known as Gemini. Touted as a powerful tool to enhance various products and services, including Bard, Google’s competitor to OpenAI’s ChatGPT, Gemini claims to surpass the performance of other leading gen AI models such as OpenAI’s GPT-4. However, initial experiences with Gemini suggest otherwise.
Google’s Gemini has drawn negative feedback from users for its poor performance in several areas: it gives inaccurate factual answers, struggles with translation, and fails to summarize news content comprehensively. Some users have also reported problems with Gemini’s coding capabilities.
Inaccurate Information and Translation Difficulties
Many users have expressed disappointment with Gemini Pro, the lighter-weight version of Gemini integrated into Bard, which fails to answer basic questions accurately, such as naming the 2023 Oscar winners. Users have reported instances where Gemini Pro identified the wrong winners, eroding trust in its capabilities.
Translation seems to be another weakness for Gemini Pro. When asked for a six-letter word in French, it provided a seven-letter word instead, indicating deficiencies in its multilingual performance.
Summary and Information Retrieval Challenges
Another area where Gemini Pro falls short is in summarizing news content. Rather than providing a concise summary, Gemini Pro often redirects users to perform a Google search themselves, lacking the ability to generate bullet-list summaries with citations like its competitor, ChatGPT.
Furthermore, even when Gemini Pro does attempt to provide an update on a specific topic, the information can be outdated, leading to a lack of confidence in its ability to deliver timely and accurate summaries.
Coding Capabilities and Jailbreak Vulnerability
Gemini Pro’s coding skills, which Google has emphasized, have also drawn criticism. Users report that Gemini Pro struggled to generate accurate code for relatively simple tasks, such as finding the intersection of two polygons or building an analog clock in HTML.
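For context on the kind of task being tested (this is an illustrative sketch, not Gemini’s output): clipping one polygon against a convex polygon is a textbook problem solved by the Sutherland–Hodgman algorithm, which clips the subject polygon against each edge of the clip polygon in turn. A minimal pure-Python version, assuming counter-clockwise vertex order:

```python
def clip_polygon(subject, clip):
    """Sutherland–Hodgman: intersect `subject` with convex `clip`.
    Both are lists of (x, y) vertices in counter-clockwise order."""

    def inside(p, a, b):
        # True if p lies on or to the left of the directed edge a->b.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p1, p2, a, b):
        # Intersection of segment p1->p2 with the infinite line through a, b.
        x1, y1 = p1; x2, y2 = p2; x3, y3 = a; x4, y4 = b
        denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        input_list, output = output, []
        if not input_list:
            break  # polygons do not overlap
        s = input_list[-1]
        for e in input_list:
            if inside(e, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, e, a, b))
                output.append(e)
            elif inside(s, a, b):
                output.append(intersect(s, e, a, b))
            s = e
    return output

# Two overlapping unit squares; their intersection is the
# square with corners (0.5, 0.5) and (1, 1).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
overlap = clip_polygon(square, shifted)
```

In practice a library such as Shapely handles the general (non-convex) case, but even this short routine is the sort of task users reported Gemini Pro getting wrong.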
Like other generative AI models, Gemini Pro is not immune to “jailbreaks,” where prompts circumvent safety filters. AI security researchers managed to manipulate Gemini Pro into suggesting unethical actions, such as stealing from a charity and planning the assassination of a high-profile individual.
Expectations and Improvements
It’s important to note that Gemini Pro is not the most advanced version of Gemini. Gemini Ultra, slated for release next year, is expected to bring further improvements. Google compared Gemini Pro to GPT-3.5, not GPT-4, which may account for some performance gaps.
Despite Google’s promises of enhanced reasoning, planning, and understanding in Gemini Pro, initial feedback highlights the model’s shortcomings in providing accurate information, translations, summarizations, and reliable coding solutions. It remains to be seen how Google will address these concerns and improve Gemini’s capabilities in future iterations.