The third live webcast in our SNIA Cloud Storage Technologies Initiative confidential computing series focused on real-world deployments of confidential computing and included case studies and demonstrations. If you missed the live event, you can watch it on demand here. Our live audience asked some interesting questions; here are our expert presenters’ answers.
Q. What is the overhead in CPU cycles for running in a trusted enclave?
A. We have been running some very large machine learning applications in secure enclaves using the latest available hardware, and we are seeing very close to “near-native” performance, with no more than 5% performance overhead compared to normal non-secure operation. This is a significant improvement over older hardware generations. With the new hardware, we are ready to take on bigger workloads with minimal overhead.
Also, it is important to note that encryption and isolation are done in hardware at memory access speeds, so that is not where you will tend to see a performance issue. Each secure enclave hardware capability uses a different technology to manage the barrier between the enclave and the untrusted environment. The important thing is to look at how often an application crosses that barrier, since that is where careful attention is needed.
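The point above about barrier crossings can be illustrated with a small sketch. This is not a real SGX or SEV API; `enclave_call` and the cost constants are hypothetical stand-ins that model a fixed per-crossing transition cost, to show why batching work per enclave call matters more than the work done inside.

```python
# Illustrative model only: the costs and the enclave_call wrapper are
# assumptions, not measurements from any real enclave runtime.

TRANSITION_US = 8   # assumed cost of one enclave boundary crossing (microseconds)
WORK_US = 1         # assumed cost of processing one item inside the enclave

def enclave_call(items):
    """Hypothetical ECALL-style wrapper: one boundary crossing per invocation."""
    return TRANSITION_US + WORK_US * len(items)

def per_item_cost(data):
    # One crossing per item: transition overhead dominates.
    return sum(enclave_call([x]) for x in data)

def batched_cost(data, batch=64):
    # One crossing per batch: the same work, with the overhead amortized.
    return sum(enclave_call(data[i:i + batch])
               for i in range(0, len(data), batch))

data = list(range(1024))
print(per_item_cost(data))   # 1024 crossings: 1024 * (8 + 1) = 9216
print(batched_cost(data))    # 16 crossings:   16 * (8 + 64) = 1152
```

The absolute numbers are made up; the shape of the result is the point. Profiling how often your application crosses the boundary is the first step in tuning an enclave deployment.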
Q. How do you get around the extremely limited memory space of SGX?
A. With the latest Intel® Ice Lake processors, SGX-related memory limits have been significantly relaxed. With previous generations, secure enclave memory was limited to 256 MB of Enclave Page Cache (EPC). With Ice Lake processors, SGX supports EPC sizes from hundreds of gigabytes (GB) up to one terabyte (TB). With this update, we are seeing very large applications now fully fit within secure enclaves, and thus gain significant performance increases. However, we should still be wary about running large commercial database suites within secure enclaves, not only with respect to memory size, but also with respect to how database operations run within CPUs. Databases are large and complex applications that use native CPU and memory features (for example shared memory) that don’t lend themselves well to constrained environments like enclaves.
AMD Secure Encrypted Virtualization (SEV) and AWS Nitro Enclaves have different characteristics, but it’s important to note that they support very large applications in secure enclaves.
Q. How can I test this stuff out? Where can I start?
A. There are a number of demonstration environments where users can test their own applications. Confidential computing offerings are available from many cloud service providers (CSPs). When you’re considering confidential computing, we recommend talking with someone who has been through the process. They should be able to help identify challenges and demonstrate how deployment can be easier than you initially anticipated.
Q. How does confidential computing compare with other technologies for data security such as homomorphic encryption and multi-party computation?
A. Homomorphic encryption allows data in encrypted memory to be processed and acted upon without moving it out of the encrypted space, but it’s currently computationally expensive.
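To make the idea concrete, here is a minimal textbook sketch of the Paillier cryptosystem, an additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can add values it cannot read. The key size here is a toy (real deployments use roughly 2048-bit moduli), and this is an illustration of the concept, not production cryptography.

```python
import math
import random

# Toy Paillier sketch: insecure key size, for illustration only.
p, q = 293, 433                  # toy primes; real keys use ~2048-bit moduli
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)     # private key
mu = pow(lam, -1, n)             # modular inverse used during decryption

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then scale by mu to recover the plaintext
    return (pow(c, lam, n2) - 1) // n * mu % n

a, b = encrypt(12), encrypt(30)
print(decrypt(a * b % n2))       # 42: the sum, computed on ciphertexts
```

The server holding `a` and `b` never sees 12 or 30, yet can produce a ciphertext of their sum; the computational cost of doing this at scale, especially for fully homomorphic schemes that also support multiplication, is the expense referred to above.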
In contrast, there is interest in multi-party compute. For example, someone owns data in a bank, someone else owns AI models to detect money laundering, and a third owns the compute which acts as a trusted place where the data and algorithms can come together for secure multi-party processing. Confidential computing makes this possible in a way that was previously not feasible.
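A simple form of multi-party computation can be sketched with additive secret sharing: each input is split into random shares held by different parties, the parties add their shares locally, and only the combined result is reconstructed, so no single party ever sees another’s raw input. The party names and values below are illustrative.

```python
import random

# Sketch of additive secret sharing over a large prime modulus.
MOD = 2 ** 61 - 1

def share(secret, parties=3):
    """Split a value into random shares that sum to the secret mod MOD."""
    shares = [random.randrange(MOD) for _ in range(parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Hypothetical example: three transaction amounts a bank wants summed
# without revealing any individual amount to the compute provider.
inputs = [1_000, 250, 4_750]
all_shares = [share(x) for x in inputs]

# Each party adds the shares it holds (one column per party);
# the column sums are themselves shares of the total.
local_sums = [sum(col) % MOD for col in zip(*all_shares)]
print(reconstruct(local_sums))   # 6000, with no party seeing the raw inputs
```

In the bank scenario above, confidential computing can host each party’s share handling inside an enclave, adding hardware-enforced isolation on top of the cryptographic protection.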
Homomorphic encryption, multi-party computation and confidential computing may all be used together to implement data protection. Which method is most suitable depends upon performance, security model, ease of use, scalability, flexibility of deployment architecture, and whether or not the application requires significant or regular change.
Q. What industry sectors are you seeing with the most traction for confidential computing?
A. We are seeing many applications related to machine learning, applications which request data from a secure database and applications in highly regulated data privacy and data protection environments. Other applications include distribution of secret information, for example web certificates across web services, web servers, key management and key distribution systems. And finally, distributed compute applications where data needs to be locally processed on secure edge platforms.
We are seeing significant interest in securing the hundreds of thousands of existing enterprise applications, data, and workloads already in the public cloud. Bad actors are focused on the cloud because they know legacy security is easily undermined. Confidential clouds quickly and easily put confidential computing to work to provide instant hardware-grade enclave protections for these cloud assets with no changes to the application, deployment, or IT processes.
Q. How does confidential computing help with meeting compliance requirements like GDPR, CCPA, etc.?
A. Regulated organizations are now seeing value in confidential computing. Projects like the UCSF case study we shared earlier can achieve HIPAA regulatory compliance much faster when using confidential computing versus other approaches. Additionally, use of the new security primitives that come as part of confidential computing can make it easier to prove to regulators that an environment is secure and meets all of the necessary security regulations. The ability to show that data in use is protected as well as data at rest is becoming increasingly important, and auditability down to an individual application, process, and CPU further demonstrates compliance.
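The auditability mentioned above rests on attestation: hardware measures the code loaded into an enclave, and a verifier compares that measurement against an expected value before trusting the environment. The sketch below shows only that comparison step; real attestation (for example, SGX quotes) also involves cryptographically signed hardware evidence, and the names and values here are illustrative assumptions.

```python
import hashlib
import hmac

# Hypothetical allow-list entry: the expected measurement of an approved
# application build. In real systems this comes from a signed quote chain.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-application-v1.2").hexdigest()

def verify_measurement(reported: str) -> bool:
    """Accept the environment only if the reported measurement matches."""
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(reported, EXPECTED_MEASUREMENT)

good = hashlib.sha256(b"approved-application-v1.2").hexdigest()
bad = hashlib.sha256(b"tampered-application").hexdigest()
print(verify_measurement(good), verify_measurement(bad))   # True False
```

A verifiable, per-application measurement like this is what lets an auditor tie a compliance claim to the exact code that handled the regulated data.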
Closing thoughts…
Confidential computing will be widespread within the next five years, but now is the time to begin adoption. Suitable environments and hardware are available today from various CSPs and on-premises platforms.