The company, which is developing similar AI technology, fears sensitive data could be leaked.
Apple has restricted the use of ChatGPT and other AI (Artificial Intelligence) platforms among its employees. The concern is that confidential company data could be leaked, including data related to a similar technology Apple is developing to compete with OpenAI's chatbot.
The information comes from The Wall Street Journal, which had access to documents confirming the company's directive. The documents also show that Apple advised employees not to use GitHub Copilot, a tool used to automate the writing of software code.
According to the newspaper, Apple's AI projects are led by the company's senior vice president, John Giannandrea, a former Google employee hired in 2018. Since then, under chief executive Tim Cook, the company has acquired several artificial intelligence startups.
The concern over protecting confidential information stems from how tools such as ChatGPT work.
Their main attraction is the ability to answer questions, write texts, and search for information on the internet, among other activities that emulate human behavior. To do this, the technology collects the information users enter on the platform and sends it back to the server for system improvements and updates.
Recently, OpenAI, the creator of ChatGPT, announced an "incognito mode" that does not save conversation history and can be toggled by the user.
Apple is not alone in its wariness of the tool. This year, JPMorgan Chase banned the use of ChatGPT, as did Verizon.