Google parent company Alphabet has told employees to avoid entering confidential information into chatbots and warned them about the risks of AI tools, including its own chatbot, Bard.
Alphabet also advised engineers against directly using code produced by chatbots. While the bots can generate code, they risk introducing errors or making “undesirable” suggestions.
Alphabet isn’t the only company warning against careless chatbot use. Samsung has gone so far as to ban employees outright from using external generative AI tools on company devices. The ban, which is indefinite, is intended to keep proprietary data from being unwittingly fed to outside parties, and Samsung is building in-house alternatives to keep productivity up in the meantime.
Beyond individual companies, entire countries and regions are wary of chatbots such as Bard. The program is not currently available in the EU due to privacy concerns: the Irish Data Protection Commission recently said Google had not sufficiently clarified how the technology would protect citizens’ privacy.
“We said in May that we wanted to make Bard more widely available, including in the European Union, and that we would do so responsibly, after engagement with experts, regulators and policymakers,” said a Google spokesperson in a statement to Politico. “As part of that process, we’ve been talking with privacy regulators to address their questions and hear feedback.”
Google did not immediately respond to TheWrap’s request for comment.
Alphabet isn’t the only company having to put in extra effort with the EU on tech matters. Bard’s primary rival, ChatGPT, and its maker OpenAI have also had a rocky time in Europe. OpenAI CEO Sam Altman recently walked back statements suggesting the company could pull out of the EU if its regulatory rules and restrictions proved too burdensome to comply with.