On August 22, 2017, Microsoft announced on the Microsoft Research Blog a new deep learning acceleration platform, codenamed Project Brainwave.
Project Brainwave represents a major leap forward in both performance and flexibility for cloud-based serving of deep learning models. The system is designed for real-time AI: it processes requests as fast as it receives them, with ultra-low latency. Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they are search queries, videos, sensor streams, or interactions with users.
The system is built with three main layers:
- A high-performance, distributed system architecture running on Azure,
- A hardware DNN (deep neural network) engine synthesized onto FPGAs, and
- A compiler and runtime for low-friction deployment of trained models.
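The "real-time" claim above means each request is served the moment it arrives, with no cross-request batching to amortize cost. The following minimal Python sketch illustrates that idea in the abstract; the function names are purely hypothetical and do not reflect the actual Brainwave toolchain or any Azure API.

```python
import time

def run_model(request):
    """Stand-in for invoking a compiled DNN on the accelerator.
    (Hypothetical placeholder: just sums the input values.)"""
    return sum(request)

def serve_realtime(requests):
    """Process each request immediately as it arrives (batch size 1),
    recording per-request latency instead of batching for throughput."""
    results = []
    for req in requests:
        start = time.perf_counter()
        out = run_model(req)
        latency = time.perf_counter() - start
        results.append((out, latency))
    return results

# Each request is answered individually; latency is measured per request.
results = serve_realtime([[1, 2], [3, 4, 5]])
```

The design choice sketched here, serving at batch size 1, is what distinguishes the latency-oriented approach described in the announcement from throughput-oriented serving, which groups many requests into one large batch.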
Already used by Microsoft Azure services, the platform is ultimately intended to make FPGA processing power available to external Azure developers for data-intensive workloads such as deep neural network evaluation.
For more information on this new Microsoft Cloud platform for AI, follow the links listed below:
Finally, to learn more about Microsoft’s AI strategy and FPGA hardware components, you can read the following articles:
- Intel Delivers ‘Real-Time AI’ in Microsoft’s New Accelerated Deep Learning Platform
- Intel FPGAs Accelerate Microsoft’s AI Hardware for Project Brainwave
- Microsoft Announces Project Brainwave To Take On Google’s AI Hardware Lead