According to Reuters, the company advised against entering confidential materials into the AI due to the risk of data leakage.

Alphabet, which owns Google, warned its teams about the use of AI (Artificial Intelligence) tools. According to a Reuters report released this Thursday (June 15, 2023), the big tech advised employees not to enter confidential or sensitive information into chatbots such as OpenAI's ChatGPT and Bard, Google's own tool.

These technologies aim to offer users a simple way to converse and get answers from a prompt. The chatbots can, among other functions, generate programming code and emulate the literary styles of living or deceased authors. Alphabet also cautioned its engineers about using the tool to write code.

Speaking to Reuters, the company said Bard can make unwanted code suggestions, although the technology still helps programmers. It also said it intends to be transparent about the limitations of its technology.

A study (3 MB, in English) published this Thursday (June 15) by Cornell University, based in New York, concluded that an artificial intelligence is capable of reproducing information it absorbed during training, which makes it possible for confidential data to leak.

According to Reuters, other companies such as Samsung, Amazon and Deutsche Bank have created safeguards for artificial intelligence tools to prevent leaks of internal information.
