During Intel Innovation 2023, Intel emphasized its efforts to merge the power of artificial intelligence (AI) with robust security measures, advocating a developer-centric, open-ecosystem philosophy.
Intel introduced a new attestation service as part of Intel Trust Authority. The service provides a unified assessment of the integrity of Intel's trusted execution environments (TEEs), along with policy enforcement and audit records. Designed for versatility, it can be deployed wherever Intel confidential computing is used, whether in multi-cloud, hybrid, on-premises, or edge settings.
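To make the attestation flow concrete, here is a deliberately simplified toy sketch of what a remote-attestation appraisal does at its core: a workload submits evidence (a measurement of its code), the service compares it against an expected value from policy, and returns a signed verdict. None of the names below come from the Intel Trust Authority API; everything here is a hypothetical illustration of the general pattern.

```python
# Toy sketch of remote-attestation appraisal. All names and keys are
# hypothetical; a real service uses hardware-rooted quotes and PKI signatures.
import hashlib
import hmac

# Policy: the measurement we expect for each known workload.
TRUSTED_MEASUREMENTS = {"enclave-v1": hashlib.sha256(b"expected-code").hexdigest()}

SERVICE_KEY = b"attestation-service-signing-key"  # stand-in for the service's key

def appraise(evidence):
    """Compare the reported measurement against policy; return a signed verdict."""
    expected = TRUSTED_MEASUREMENTS.get(evidence["workload"])
    verdict = "trusted" if evidence["measurement"] == expected else "untrusted"
    payload = f'{evidence["workload"]}:{verdict}'.encode()
    token = hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    return verdict, token

# A workload whose measurement matches policy is appraised as trusted.
evidence = {"workload": "enclave-v1",
            "measurement": hashlib.sha256(b"expected-code").hexdigest()}
verdict, token = appraise(evidence)
```

The relying party would then check the token's signature before releasing secrets to the workload, which is what lets the trust decision travel across cloud, on-premises, and edge environments.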
Intel Trust Authority is poised to become a cornerstone for confidential AI, ensuring the trustworthiness of computing environments in which sensitive intellectual property and data are processed, especially in machine-learning applications using Intel Xeon processors.
Intel Chief Technology Officer Greg Lavender conveyed Intel's commitment to making AI opportunities universally accessible. He underscored that limiting developers' choices of hardware and software would inevitably hamper the potential use cases for large-scale AI adoption and the societal value it could provide. By enhancing security and trust in AI deployment, Intel aims to ensure AI is accessible to everyone, everywhere.
Additionally, Intel emphasized its dedication to promoting an open ecosystem for AI, as evidenced by recent collaborations. Partnering with leading software vendors such as Red Hat, Canonical, and SUSE, Intel aims to furnish developers with optimized distributions of enterprise software tailored for the latest Intel architectures. This collaboration has been further cemented by Intel's involvement in the Linux Foundation's Unified Acceleration Foundation, to which Intel will contribute its oneAPI specification, aiming to streamline the development of applications for multi-platform deployment.
Another highlight of the event was the discussion of the challenges facing AI deployment in the real world. Many organizations grapple with issues such as a lack of expertise, resource constraints, and the inherent complications of managing the AI pipeline. To address these issues, Intel is working to establish an open ecosystem that simplifies AI deployment across diverse architectures. Intel's oneAPI programming model is a testament to this commitment, allowing developers to write code once and deploy it across various computing architectures.
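The "write once, deploy across architectures" idea can be sketched as a single kernel definition dispatched to whichever device backend is available. This is not oneAPI itself (real oneAPI/SYCL kernels are C++ and the runtime selects among CPU, GPU, and FPGA devices); the registry and dispatcher below are hypothetical stand-ins for that pattern.

```python
# Illustrative sketch of device-agnostic dispatch; NOT the oneAPI API.
def vector_add(a, b):
    """The kernel: written once, independent of any particular device."""
    return [x + y for x, y in zip(a, b)]

# Hypothetical backend registry; a GPU or FPGA runtime would register here too.
BACKENDS = {"cpu": lambda kernel, *args: kernel(*args)}

def run(kernel, *args, preferred=("gpu", "cpu")):
    """Dispatch the same kernel to the first available device backend."""
    for device in preferred:
        if device in BACKENDS:
            return BACKENDS[device](kernel, *args)
    raise RuntimeError("no available backend")

result = run(vector_add, [1, 2, 3], [4, 5, 6])
```

The point of the pattern is that the kernel's author never names a device: adding a new accelerator means registering a backend, not rewriting application code.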
To further aid developers in maximizing performance, Intel is introducing tools such as Auto Pilot for Kubernetes pod resource rightsizing, part of Intel Granulate. The tool will provide ongoing capacity-management recommendations, allowing users to optimize cost-performance metrics for containerized environments.
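The core idea behind rightsizing can be illustrated with a toy calculation: observe a pod's actual resource usage over time, then recommend a request near a high percentile of that usage plus some headroom, rather than the often-overprovisioned static value. The function and numbers below are hypothetical and do not reflect how Intel Granulate's Auto Pilot actually computes its recommendations.

```python
# Toy illustration of rightsizing logic (hypothetical, not Granulate's algorithm):
# recommend a resource request from observed usage samples plus fixed headroom.
def recommend_request(samples, headroom_pct=15):
    """Return a recommended request (e.g. CPU millicores) from usage samples."""
    ordered = sorted(samples)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]  # approximate 95th percentile
    return p95 + p95 * headroom_pct // 100          # integer headroom on top

# Observed CPU usage for one pod, in millicores.
usage = [120, 135, 150, 140, 180, 160, 155, 170, 145, 130]
recommended = recommend_request(usage)  # well below a static 500m request
```

A continuous tool re-runs this kind of analysis as workload patterns drift, which is what turns a one-off tuning exercise into ongoing capacity management.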
Intel also acknowledged the increasing necessity of shielding AI models, data, and platforms from potential security threats. Fully homomorphic encryption (FHE), which enables computation directly on encrypted data, has seen limited adoption due to its computational cost. To tackle this, Intel announced plans to develop an application-specific integrated circuit (ASIC) accelerator that will significantly reduce the performance overhead of a software-only FHE approach. Intel will also release the beta version of an encrypted computing software toolkit later this year, offering the community a comprehensive suite for exploring and implementing FHE.
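To see why "computation on encrypted data" is possible at all, consider the Paillier cryptosystem, a classic *partially* homomorphic scheme (additions only, unlike FHE, which also supports multiplications): multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The sketch below uses tiny hard-coded primes purely for illustration; it is wildly insecure and is not related to Intel's toolkit.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Tiny fixed primes for illustration only; utterly insecure.
import math
import random

p, q = 11, 13
n = p * q                 # public modulus
n2 = n * n
g = n + 1                 # standard generator choice
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # decryption constant

def encrypt(m):
    """Encrypt m in [0, n) with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

c1, c2 = encrypt(5), encrypt(7)
c_sum = (c1 * c2) % n2      # multiply ciphertexts...
total = decrypt(c_sum)      # ...and the decryption is the sum: 12
```

FHE schemes extend this so that both additions and multiplications work under encryption, which is precisely what makes them so much more expensive and why dedicated hardware acceleration is attractive.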