Apple Intelligence Could Introduce Device Security Risks
The company focused heavily on data and system security in the announcement of its generative AI platform, Apple Intelligence, but experts worry that companies will have little visibility into how their data is secured.
Apple's long-awaited announcement of its generative AI (GenAI) capabilities came with an in-depth discussion of the company's security considerations for the platform. But the tech industry's past focus on harvesting user data from nearly every product and service has left many concerned over the data security and privacy implications of Apple's move. Fortunately, there are some proactive ways that companies can address potential risks.
Apple's approach to integrating GenAI — dubbed Apple Intelligence — includes context-sensitive searches, editing emails for tone, and the easy creation of graphics, with Apple promising that the powerful features require only local processing on mobile devices to protect user and business data. The company detailed a five-step approach to strengthen privacy and security for the platform, with much of the processing done on a user's device using Apple Silicon. More complex queries, however, will be sent to the company's private cloud and use the services of OpenAI and its large language model (LLM).
While companies will have to wait to see how Apple's commitment to security plays out, the company has put a lot of consideration into how GenAI services will be handled on devices and how the information will be protected, says Joseph Thacker, principal AI engineer and security researcher at AppOmni, a cloud security firm.
"Apple's focus on privacy and security in the design is definitely a good sign," he says. "Features like not allowing privileged runtime access and preventing user targeting show they are thinking about potential abuse cases."
Apple spent significant time during its announcement reinforcing the idea that the company takes security seriously, and published a paper online that describes the company's five requirements for its Private Cloud Compute service, such as no privileged runtime access and hardening the system to prevent targeting specific users.
Still, LLMs, such as ChatGPT, and other forms of GenAI are new enough that the threats remain poorly understood, and some will slip through Apple's efforts, says Steve Wilson, chief product officer at cloud security and compliance provider Exabeam and lead on the Open Web Application Security Project's Top 10 Security Risks for LLMs.
"I really worry that LLMs are a very, very different beast, and traditional security engineers just don't have experience with these AI techniques yet," he says. "There are very few people who do."
Apple seems to be aware of the security risks that concern its customers, especially businesses. The implementation of Apple Intelligence across a user's devices, dubbed the Personal Intelligence System, will connect data from applications in a way that has, perhaps, only been implemented through the company's health-data services. Conceivably, every message and email sent from a device could be reviewed by AI, with context added through on-device semantic indexes.
Yet the company pledged that, in most cases, the data never leaves the device, and the information is anonymized as well.
"It is aware of your personal data, without collecting your personal data," Craig Federighi, senior vice president of software engineering at Apple, stated in a four-minute video on Apple Intelligence and privacy during the company's June 10 launch, adding: "You are in control of your data, where it is stored, and who can access it."
When data does leave the device, it will be processed in the company's Private Cloud Compute service, so Apple can take advantage of more powerful server-based generative AI models while still protecting privacy. The company says that user data is never stored or made accessible to Apple. In addition, Apple will make every production build of its Private Cloud Compute platform available to security researchers for vulnerability research, in conjunction with a bug-bounty program.
Such steps seemingly go beyond what other companies have promised and should assuage the fears of enterprise security teams, AppOmni's Thacker says.
"This type of transparency and collaboration with the security research community is important for finding and fixing vulnerabilities before they can be exploited in the wild," he says. "It allows Apple to leverage the diverse skills and perspectives of researchers to really put the system through the wringer from a security testing perspective. While it's not a guarantee of security, it will help a lot."
However, the interactions between apps and data on mobile devices and the behavior of LLMs may be too complex to fully understand at this point, says Exabeam's Wilson. The attack surface of LLMs continues to surprise the large companies behind the major AI models. Following the release of its latest Gemini model, for example, Google had to contend with inadvertent data poisoning that arose from training its model with untrusted data.
"Those search components are falling victim to these kinds of indirect injection data-poisoning incidents, where they're off telling people to eat glue and rocks," Wilson says. "So it's one thing to say, 'Oh, this is a super-sophisticated organization, they'll get this right,' but Google's been proving over and over and over again that they won't."
Apple's announcement comes as companies are quickly experimenting with ways to integrate GenAI into the workplace to improve productivity and automate traditionally tough-to-automate processes. Bringing the features to mobile devices has happened slowly, but now Samsung has released its Galaxy AI, Google has announced the Gemini mobile app, and Microsoft has announced Copilot for Windows.
While Copilot for Windows is integrated with many applications, Apple Intelligence appears to go beyond even Microsoft's approach.
Overall, companies need to first gain visibility into their employees' use of LLMs and other GenAI. While they do not need to go to the extent of billionaire tech innovator Elon Musk, a former investor in OpenAI, who raised concerns that Apple or OpenAI would abuse users' data or fail to secure business information and pledged to ban iPhones at his companies, chief information security officers (CISOs) certainly should have a discussion with their mobile device management (MDM) providers, Exabeam's Wilson says.
Right now, controls to regulate data flowing into and out of Apple Intelligence do not appear to exist and, in the future, may not be accessible to MDM platforms, he says.
"Apple has not historically provided a lot of device management, because they are leaned in on personal use," Wilson says. "So it's been up to third parties for the last 10-plus years to try and build these third-party frameworks that allow you to install controls on the phone, but it's unclear whether they're going to have the hooks into [Apple Intelligence] to help control it."
Until more controls come online, enterprises need to set a policy and find ways to integrate their existing security controls, authentication systems, and data loss prevention (DLP) tools with AI, says AppOmni's Thacker.
"Companies should also have clear policies around what types of data and conversations are appropriate to share with AI assistants," he says. "So while Apple's efforts help, enterprises still have work to do to fully integrate these tools securely."
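As a rough illustration of the kind of data loss prevention check Thacker describes, a gateway sitting between employees and an AI assistant could screen outbound prompts for sensitive patterns before forwarding them. This is a minimal sketch under stated assumptions: the pattern set, function name, and categories below are hypothetical examples, not part of any real MDM or DLP product, and production tools use far richer detection than regular expressions.

```python
import re

# Illustrative patterns only; real DLP products use much broader detection
# (classifiers, document fingerprinting, exact-data matching, etc.).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_categories) for a prompt bound for an AI assistant."""
    matches = [name for name, pattern in SENSITIVE_PATTERNS.items()
               if pattern.search(prompt)]
    return (not matches, matches)

allowed, hits = screen_prompt("Summarize this note for jane.doe@example.com")
print(allowed, hits)  # → False ['email']
```

A real deployment would log blocked categories rather than prompt contents, and would pair a filter like this with the clear usage policies Thacker recommends, since pattern matching alone cannot decide whether a conversation is appropriate to share.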