ExecuTorch

End-to-end solution for enabling on-device inference capabilities across mobile and edge devices

What is ExecuTorch?

ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices, including wearables, embedded devices, and microcontrollers. It is part of the PyTorch Edge ecosystem and enables efficient deployment of a wide range of PyTorch models (vision, speech, generative AI, and more) to edge devices. Key value propositions of ExecuTorch are:

Portability: Compatibility with a wide variety of computing platforms, from high-end mobile phones to highly constrained embedded systems and microcontrollers.
Productivity: Enabling developers to use the same toolchains and SDK from PyTorch model authoring and conversion through debugging and deployment across a wide variety of platforms.
Performance: Delivering a seamless, high-performance end-user experience through a lightweight runtime that takes full advantage of hardware capabilities such as CPUs, NPUs, and DSPs.

Explore ExecuTorch

ExecuTorch is currently powering various experiences across AR, VR, and Family of Apps (FOA) products and services at Meta. We are excited to see how the community leverages our all-new on-device AI stack. You can learn more about the key components of ExecuTorch and its architecture, how it works, and explore documentation pages and detailed tutorials.

ExecuTorch Documentation

Why ExecuTorch?

Supporting on-device AI presents unique challenges: diverse hardware, critical power requirements, low or no internet connectivity, and real-time processing needs. These constraints have historically prevented or slowed down the creation of scalable, performant on-device AI solutions. We designed ExecuTorch, backed by industry leaders such as Meta, Arm, Apple, and Qualcomm, to be highly portable and to provide superior developer productivity without sacrificing performance.

ExecuTorch Alpha Release

ExecuTorch was initially introduced to the community at the 2023 PyTorch Conference. With our most recent alpha release, we further expanded ExecuTorch's capabilities across multiple dimensions. First, we enabled support for deploying large language models (LLMs) on a variety of edge devices. Second, with ExecuTorch alpha, we further stabilized the API surface. Lastly, we significantly improved the developer experience by simplifying the installation flow and improving observability and developer productivity via the ExecuTorch SDK. The alpha release also provides early support for the recently announced Llama 3 8B, along with demonstrations of how to run this model on an iPhone 15 Pro and a Samsung Galaxy S24 mobile phone.
