Surface and AI
NPUs are here to help you take AI innovations to the next level. Check out this solution brief to learn how NPUs work and how Surface devices use them to power AI-intensive workloads.
Difference Between CPU, GPU, and NPU
The CPU, or Central Processing Unit, is the primary brain of the computer: it executes software instructions and performs general-purpose operations. It can handle a wide variety of tasks but is not optimized for any one of them. The GPU, or Graphics Processing Unit, specializes in rendering 2D and 3D graphics and excels at processing data in parallel, but it is not optimized for sustained, power-efficient AI workloads. The NPU, or Neural Processing Unit, is integrated into the system on a chip (SoC) and is purpose-built for deep learning tasks, enabling faster inference with an architecture that reduces the need for memory access.
Optimizing CPU Workload with NPU
By offloading AI-related tasks such as automatic framing, portrait blur, and voice focus to the NPU, those workloads can run concurrently while the CPU stays free to handle other work, such as PowerPoint presentations and Outlook email. The result is more efficient multitasking and improved performance without overloading the CPU.
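The offload pattern above can be illustrated with a toy Python analogy: a worker thread stands in for the NPU, accepting the AI task so the main thread (playing the CPU's role) keeps doing foreground work. This is purely a sketch of the idea; the task names and timings are illustrative, not Surface APIs.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def ai_effect(frame):
    """Toy stand-in for an AI workload (e.g. portrait blur) that a
    Surface device would run on the NPU instead of the CPU."""
    time.sleep(0.05)  # simulate inference latency
    return f"{frame}+blur"

def main():
    # The executor plays the NPU's role: it accepts the AI task...
    with ThreadPoolExecutor() as npu:
        future = npu.submit(ai_effect, "frame-1")
        # ...so the main thread ("CPU") stays free for foreground work,
        # e.g. rendering a presentation slide.
        foreground = "slide rendered"
        processed = future.result()  # collect the finished AI result
    return foreground, processed

print(main())
```

The key point the analogy captures: the foreground work does not wait behind the AI task, because the AI task runs on separate hardware.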
Operations Per Second of the NPU
The NPU in the Surface Pro 9 with 5G can process up to 15 trillion operations per second (15 TOPS). This high performance allows for advanced AI features, contributing to optimized power consumption and improved overall efficiency in device functionality.
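To put 15 TOPS in perspective, a bit of back-of-envelope arithmetic helps. The 15 trillion operations per second figure comes from the brief; the per-frame model cost below is a hypothetical, illustrative number, not a published workload figure.

```python
# Back-of-envelope: what 15 TOPS means for a single inference.
TOPS = 15                          # Surface Pro 9 with 5G NPU: 15 trillion ops/s
OPS_PER_SECOND = TOPS * 10**12

# Hypothetical model cost: a small vision model needing ~5 billion
# operations per frame (an assumed figure, for illustration only).
MODEL_OPS = 5 * 10**9

seconds_per_frame = MODEL_OPS / OPS_PER_SECOND
frames_per_second = OPS_PER_SECOND / MODEL_OPS

print(f"{seconds_per_frame * 1000:.3f} ms per frame")           # ≈0.333 ms
print(f"{frames_per_second:.0f} frames/s at full utilization")  # 3000 frames/s
```

Even with the assumed model size, the headroom is large: real-time camera effects need only 30 to 60 frames per second, which is why several such features can run concurrently on the NPU.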
Published by TeraCloud, Inc.
TeraCloud is a national cloud computing and managed services provider delivering innovative public, private, and hybrid services to organizations throughout the United States. Our headquarters is located in Dallas, Texas, with field offices around the country. We have a unique approach to IT, and it starts with our team. Our company culture is customer focused, and any solution we deliver must solve an underlying business need that improves efficiency and enables your growth and success.