Today at Build 2022, Microsoft unveiled Project Volterra, a device powered by Qualcomm’s Snapdragon platform that’s designed to let developers explore “AI scenarios” via Qualcomm’s new Snapdragon Neural Processing Engine (SNPE) for Windows toolkit. The hardware arrives alongside Windows support for neural processing units (NPUs), dedicated chips built for AI and machine learning workloads.
Dedicated AI chips, which speed up AI processing while reducing the impact on battery life, have become common in mobile devices like smartphones. But as applications such as AI-powered image upscalers see wider use, manufacturers have been adding these chips to their laptop lines as well. M1 Macs feature Apple’s Neural Engine, for example, and Microsoft’s Surface Pro X has the SQ1, which was co-developed with Qualcomm. Intel at one point signaled that it would offer an AI chip solution for Windows PCs, but with the Arm AI app ecosystem already well established thanks to iOS and Android, Project Volterra looks like an attempt to tap into that ecosystem rather than reinvent the wheel.
This isn’t the first time Microsoft has teamed up with Qualcomm to launch AI development hardware. In 2018, the companies jointly announced the Vision Intelligence platform, which offered “fully integrated” support for computer vision algorithms running through Microsoft’s Azure ML and Azure IoT Edge services. Project Volterra offers proof that, four years later, Microsoft and Qualcomm remain bedfellows in this space, even after the announced expiration of Qualcomm’s exclusivity agreement for Windows on Arm licenses.
“We believe in an open hardware ecosystem for Windows that gives [developers] more flexibility and more options as well as the ability to support a wide range of scenarios,” said Panos Panay, Microsoft’s chief product officer for Windows and Devices, in a blog post. “As such, we are constantly evolving the platform to support new and emerging hardware platforms and technologies.”
Microsoft’s Project Volterra hardware, which aims to foster the development of AI applications for Windows on Arm. Image Credits: Microsoft
Microsoft says (somewhat hyperbolically) that Project Volterra, coming later this year, will ship with a neural processor offering “best-in-class” AI compute capacity and efficiency. The main chip, an Arm-based processor supplied by Qualcomm, will let developers build and test native Arm applications alongside tools such as Visual Studio, VS Code, Microsoft Office and Teams.
Project Volterra is the harbinger of an “end-to-end” developer toolchain for Microsoft’s native Arm apps, spanning Visual Studio 2022, VS Code, Visual C++, .NET 6, Windows Terminal, Java, Windows Subsystem for Linux and Windows Subsystem for Android (for running Android applications). Previews of each component will begin rolling out in the coming weeks, with more open source projects natively targeting Arm to follow, including Python, Node.js, Git, LLVM and others.
Regarding SNPE for Windows, Panay said it will allow developers to run, debug and analyze the performance of deep neural networks on Windows devices with Snapdragon hardware, as well as integrate those networks into apps and other code. SNPE is complemented by the new Qualcomm Neural Processing SDK for Windows, which provides tools for converting and running AI models on Snapdragon-based Windows devices, along with APIs for targeting different processor cores with different power and performance profiles.
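In practice, that kind of workload targeting looks a lot like the execution-provider selection developers already do with ONNX Runtime. The snippet below is a minimal sketch, not Qualcomm’s SDK API: it assumes a build of ONNX Runtime compiled with an SNPE-backed execution provider (the provider name, the “runtime” option, the float32 input and the model.onnx file are all assumptions) and falls back to the CPU when that provider isn’t available.

```python
# Hypothetical sketch: schedule an ONNX model on a Snapdragon device's NPU/DSP
# when an SNPE-backed execution provider is present, otherwise run on the CPU.
# "SNPEExecutionProvider" and its "runtime" option are assumptions about such a
# build of ONNX Runtime, not Qualcomm's own Neural Processing SDK API.
import numpy as np
import onnxruntime as ort

def make_session(model_path: str) -> ort.InferenceSession:
    providers = []
    if "SNPEExecutionProvider" in ort.get_available_providers():
        providers.append(("SNPEExecutionProvider", {"runtime": "DSP"}))  # assumed option
    providers.append("CPUExecutionProvider")  # always keep a CPU fallback
    return ort.InferenceSession(model_path, providers=providers)

session = make_session("model.onnx")  # hypothetical model file
inp = session.get_inputs()[0]
# Replace dynamic dimensions with 1 so the dummy tensor has a concrete shape;
# float32 input is assumed here for illustration.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
print(outputs[0].shape)
```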
Qualcomm’s new tooling benefits devices beyond Project Volterra, particularly laptops built on Qualcomm’s recently launched Snapdragon 8cx Gen 3 system-on-chip. Designed to compete with Apple’s Arm-based silicon, the Snapdragon 8cx Gen 3 features an AI accelerator, the Hexagon processor, which can be used to apply AI processing to photos and videos.
“The Qualcomm Neural Processing SDK for Windows, together with Project Volterra when available, will enable enhanced and new Windows experiences by leveraging the powerful and efficient performance of the Qualcomm AI Engine, which is part of the Snapdragon compute platform,” a Qualcomm spokesperson told TechCrunch via email. “Qualcomm AI Engine” refers to the combined AI processing capabilities of the CPU, GPU and digital signal processor (e.g., Hexagon) components in high-end Snapdragon SoCs.
Project Volterra also has a cloud component, called Hybrid Loop. Microsoft describes it as a “cross-platform development model” for building AI apps that span the cloud and the edge, the idea being that developers can make runtime decisions about whether to run AI workloads on Azure or on a local client device. Hybrid Loop also lets apps dynamically shift load between client and cloud, should developers need additional flexibility.

Panay says Hybrid Loop will be exposed through a prototype “AI toolchain” in Azure Machine Learning and a new Azure execution provider in ONNX Runtime, the open source project for accelerating machine learning models across frameworks, operating systems and hardware.
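Conceptually, the Hybrid Loop decision is just a runtime branch between local inference and a cloud endpoint. The sketch below illustrates that idea only; the endpoint URL, the JSON scoring payload and the prefer_cloud flag are illustrative assumptions, not Microsoft’s toolchain or the ONNX Runtime Azure provider.

```python
# Hypothetical Hybrid Loop-style dispatch: run inference locally with ONNX Runtime,
# or hand the request to a cloud endpoint when the caller prefers the cloud.
# The endpoint URL and payload format are assumptions for illustration only.
import onnxruntime as ort
import requests

AZURE_ENDPOINT = "https://example-workspace.azurewebsites.net/score"  # hypothetical

def run_inference(model_path: str, inputs: dict, prefer_cloud: bool = False):
    if prefer_cloud:
        # Offload to the cloud, e.g. when the model is too large for the device
        # or the local NPU is busy.
        resp = requests.post(
            AZURE_ENDPOINT,
            json={name: array.tolist() for name, array in inputs.items()},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()
    # Otherwise run on-device; ONNX Runtime uses whichever providers are available.
    session = ort.InferenceSession(model_path, providers=ort.get_available_providers())
    return session.run(None, inputs)
```

The point of the pattern is that the same application code covers both the device’s NPU and Azure, with the local-versus-cloud choice made at runtime based on signals like connectivity, battery or model size.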
“Increasingly, magical AI-powered experiences will require enormous levels of processing power beyond the capabilities of traditional CPUs and GPUs alone. But new silicon like NPUs will add expanded capacity for key AI workloads,” Panay said. “Our emerging hybrid compute and AI model, along with NPU-enabled devices, creates a new platform for [developers] to build high-ambition applications by harnessing incredible amounts of power… With native Arm64 Visual Studio, .NET support and Project Volterra coming later this year, we’re releasing new tools to help [developers] take the first step on this journey.”