Confidential AI is a new collaborative platform for data and AI teams to work with sensitive data sets and run AI models in a confidential environment. It includes infrastructure, software, and workflow orchestration to create a secure, on-demand work environment that meets an organization’s privacy requirements and complies with regulatory mandates. It’s a topic the SNIA Cloud Storage Technologies Initiative (CSTI) covered in depth at our webinar, “The Rise in Confidential AI.” In this webinar, our experts, Parviz Peiravi and Richard Searle, provided a deep and insightful look at how this dynamic technology works to ensure data protection and data privacy. Here are their answers to the questions from our webinar audience.
Q. Are businesses using Confidential AI today?
A. Absolutely. We have seen a big increase in the adoption of Confidential AI, particularly in industries such as financial services, healthcare, and government, where Confidential AI is helping organizations enhance risk mitigation, including cybercrime prevention, anti-money laundering, fraud prevention and more.
Q: With compute capabilities on the Edge increasing, how do you see Trusted Execution Environments evolving?
A. One of the important things about Confidential Computing is that although it’s a discrete privacy-enhancing technology, it’s part of the underlying broader, distributed data center compute infrastructure. However, the Edge is going to be increasingly important as we look ahead to things like 6G communication networks. We see a role for AI at the Edge in terms of things like signal processing and data quality evaluation, particularly in situations where the data is being sourced from different endpoints.
Q: Can you elaborate on attestation within a Trusted Execution Environment (TEE)?
A. One of the critical things about Confidential Computing is the need for an attested Trusted Execution Environment. In order to have the reassurance of confidentiality and the isolation and integrity guarantees that we spoke about during the webinar, attestation is the foundational truth of Confidential Computing and is absolutely necessary. In every secure implementation of Confidential AI, attestation provides the assurance that you’re working in that protected memory region, that data and software instructions can be secured in memory, and that the AI workload itself is shielded from the other elements of the computing system. If you’re starting with hardware-based technology, you have the utmost security, because the majority of actors are placed outside your trust boundary. However, this also creates a level of isolation that you might not want for an application that doesn’t need this high level of security. You must balance utmost security against your application’s appetite for risk.
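To make the idea concrete, here is a minimal, hypothetical sketch of the check a verifier performs before releasing sensitive data to a TEE: confirm the reported measurement (a hash of the workload loaded into the protected memory region) matches an approved value, and confirm the quote is authentic. The function names, the sample workload strings, and the HMAC-based "signature" are illustrative stand-ins; real attestation uses the hardware vendor's asymmetric signing scheme and certificate chain.

```python
import hashlib
import hmac

# Illustrative "golden" measurement of the approved AI workload.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-ai-workload-v1").hexdigest()


def sign_quote(measurement: str, key: bytes) -> str:
    """Stand-in for the hardware root of trust signing the attestation quote."""
    return hmac.new(key, measurement.encode(), hashlib.sha256).hexdigest()


def verify_attestation(measurement: str, signature: str, key: bytes) -> bool:
    """Verifier side: the quote must be authentic AND the workload must be
    the one we expect, before any sensitive data is released to the TEE."""
    authentic = hmac.compare_digest(sign_quote(measurement, key), signature)
    approved = hmac.compare_digest(measurement, EXPECTED_MEASUREMENT)
    return authentic and approved


# Simulated flow: the TEE reports its measurement, signed by the root of trust.
root_key = b"simulated-hardware-root-of-trust"
quote = sign_quote(EXPECTED_MEASUREMENT, root_key)
print(verify_attestation(EXPECTED_MEASUREMENT, quote, root_key))  # trusted workload

# A tampered workload produces a different measurement and is rejected,
# even if it carries a validly signed quote for that measurement.
tampered = hashlib.sha256(b"tampered-workload").hexdigest()
print(verify_attestation(tampered, sign_quote(tampered, root_key), root_key))
```

The point of the sketch is the second check: a valid signature alone is not enough; the verifier must also compare the measurement against a known-good value before trusting the environment.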
Q: What is your favorite reference for implementing Confidential Computing that bypasses the OS, BIOS, VMM (Virtual Machine Manager) and uses the root-of-trust certificate?
A. It’s important to know that there are different implementations of Trusted Execution Environments, each suited to different purposes. For example, there are process-based TEEs that enable a very discrete definition of a TEE and provide the ability to write specific code and protect very sensitive information, because of the isolation from things like the hypervisor and virtual machine manager. There are also virtualization-based technologies that include a guest operating system within their trusted computing base; they provide greater flexibility in terms of implementation, so you might want to use them when you have a larger application or a more complex deployment. The Confidential Computing Consortium, which is part of The Linux Foundation, is also a good resource to keep up with Confidential AI guidance.
Q: Can you please give us a picture of the upcoming standards for strengthening security? Do you believe that European Union’s AI Act (EU AI Act) is going in the right direction and that it will have a positive impact on the industry?
A. That’s a good question. The draft EU AI Act was approved in June 2023 by the European Parliament, but the UN Security Council has also put out a call for international regulation in the same way that we have treaties and conventions. We think what we’re going to see is different nation-states taking discrete approaches. The UK has taken an open approach to AI regulation in order to stimulate innovation. The EU already has a very prescriptive approach to data protection regulation, and the EU AI Act follows suit: it’s quite prescriptive and designed to complement the data privacy regulations that already exist.
Q. Where do you think some of the biggest data privacy issues are within generative AI?
A. There’s quite a lot of debate already about how these massive generative AI systems have used data scraped from the web, whether things like copyright provisions have been acknowledged, and whether data privacy in imagery from social media has been respected. At an international level, it’s going to be interesting to see whether people can agree on a cohesive framework to regulate AI and whether different countries can agree. There’s also the issue of the time required to develop legislation being superseded by technological developments; we saw how disruptive ChatGPT was last year. There are also ethical considerations around this topic, which the SNIA CSTI covered in the webinar “The Ethics of Artificial Intelligence.”
Q. Are you optimistic that regulators can come to an agreement on generative AI?
A. In the last four or five years, regulators have become more open to working with financial institutions to better understand the impact of adopting new technologies such as AI and generative AI. This collaboration among regulators with those in the financial sector is creating momentum. Regulators such as the Monetary Authority of Singapore are leading this strategy, actively working with vendors to understand the technology application within financial services and how to guide the rest of the banking industry.