The Coral M.2 Accelerator is an M.2 module that brings the Edge TPU coprocessor to existing systems and products that have a compatible M.2 slot. It lets you integrate the Edge TPU into both legacy and new systems using an M.2 A+E key interface.
Performs high-speed ML inferencing
The on-board Edge TPU coprocessor is capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS, in a power-efficient manner. See more performance benchmarks.
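The quoted figures fit together arithmetically; a quick back-of-the-envelope check (the per-frame energy number below is derived from the figures above, not taken from the datasheet, and assumes the chip draws its full peak power while inferencing):

```python
# Sanity-check the quoted Edge TPU figures.
tops = 4.0            # peak throughput: 4 trillion operations per second
tops_per_watt = 2.0   # quoted efficiency: 2 TOPS per watt
fps = 400             # claimed MobileNet v2 throughput

watts_at_peak = tops / tops_per_watt      # 4 / 2 = 2.0 W total
joules_per_frame = watts_at_peak / fps    # 2.0 / 400 = 0.005 J (5 mJ) per frame

print(f"{watts_at_peak} W at peak, {joules_per_frame * 1000} mJ per frame")
```

So at the claimed 400 FPS, each MobileNet v2 inference costs on the order of 5 mJ, which is what makes the module practical for always-on vision workloads.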
Works with Debian Linux and Windows
Integrates with any Debian-based Linux or Windows 10 system that has a compatible M.2 (A+E key) slot.
Supports TensorFlow Lite
There's no need to build models from the ground up: existing TensorFlow Lite models can be compiled to run on the Edge TPU.
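The usual workflow is to start from a fully int8-quantized TensorFlow Lite model and pass it through Google's `edgetpu_compiler` tool; a minimal sketch (the filename here is a placeholder):

```shell
# Compile a fully int8-quantized .tflite model for the Edge TPU.
# "model_quant.tflite" is a placeholder for your quantized model file.
edgetpu_compiler model_quant.tflite

# The compiler writes model_quant_edgetpu.tflite alongside the input;
# operations it cannot map to the Edge TPU fall back to the CPU at runtime.
```

Note that the compiler only accepts models quantized to 8-bit integers (the `int8` requirement in the specs below); float models must be quantized first, e.g. with TensorFlow's post-training quantization.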
- ML accelerator: Google Edge TPU coprocessor: 4 TOPS (int8); 2 TOPS per watt
- Connector: M.2 (A+E key)
- Dimensions: 22 mm x 30 mm (M.2-2230-A-E-S3)
- Model compatibility on the Edge TPU
- Edge TPU inferencing overview
- Run multiple models with multiple Edge TPUs
- Pipeline a model with multiple Edge TPUs