Google’s AI chatbot, Bard, has been in the news recently after it claimed to have been trained on users’ Gmail data. Google has denied this, stating that Bard was not trained on Gmail data. Bard is an early experiment based on large language models, and like ChatGPT, it can make mistakes and present fiction as fact. This is a known limitation of generative AI tools like Bard and ChatGPT, one that both Google and OpenAI have acknowledged. OpenAI, for instance, has noted the limitations of its GPT-4 language model, which powers ChatGPT, stating that it “hallucinates” facts and makes reasoning errors.
Microsoft researcher Kate Crawford shared a screenshot of her conversation with Bard in which the chatbot listed its data sources: publicly available datasets, Google’s internal data from Google Search, Gmail, and other products, and data from third-party companies. If true, this would be a serious breach of privacy. Google, however, has clarified that Bard is not trained on Gmail data and that users should be careful when relying on language model outputs, particularly in high-stakes contexts.
Despite these limitations, AI chatbots have become increasingly popular because they can engage in human-like conversation and respond to queries quickly. As the technology continues to evolve, we can expect further advancements that make AI chatbots more accurate and reliable.