In the rapidly advancing world of artificial intelligence (AI), corporate giants are racing to stay ahead. Companies such as Google and Microsoft have made significant advances in AI, pressuring others to keep pace. Against this backdrop, an unexpected decision by Apple has brought concerns about data privacy in AI applications to the forefront.
According to an internal document recently uncovered by The Wall Street Journal, Apple has decided to restrict its employees from using widely recognized AI chatbots, including OpenAI's Microsoft-backed ChatGPT, while the tech titan develops its own AI technology. The decision underscores Apple's serious concern that sensitive and confidential corporate information could be exposed.
Concerns that AI tools could mishandle private data
ChatGPT, an AI chatbot, has grown in popularity thanks to its capacity to generate human-like language in response to a given prompt. Despite its capabilities, Apple worries that use of such tools could endanger its proprietary data, which is the stated reason for the internal ban.
Notably, the restriction is not limited to ChatGPT. The document also mentions a ban on GitHub's AI tool, Copilot. This Microsoft-owned tool, which assists developers by automatically suggesting software code, is prohibited over similar concerns.
Apple's decision came into effect shortly after the ChatGPT app for iOS debuted in the Apple App Store on May 18. The app is currently available to iPhone and iPad users in the United States, with OpenAI planning to expand to other countries in the coming weeks; an Android version is also slated for release soon.
This move highlights the growing concerns about data privacy as AI technology continues to evolve and permeate various sectors. As corporations continue to invest in and develop AI, they must also grapple with ensuring the safety and integrity of their data.