Features

●Performs high-speed ML inferencing: local TensorFlow Lite inferencing at low power, in a small footprint
●Supports all major platforms: Connects via USB 3.0 Type-C to any system running Debian Linux (including Raspberry Pi), macOS, or Windows 10
●Supports TensorFlow Lite: No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU
●Supports AutoML Vision Edge: Easily build and deploy fast, high-accuracy custom image classification models at the edge
●Compatible with Google Cloud


Description

The Coral USB Accelerator adds an Edge TPU coprocessor to your system, enabling high-speed machine learning inferencing on a wide range of systems, simply by connecting it to a USB port.

  • The on-board Edge TPU coprocessor is capable of performing 4 trillion operations per second (4 TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 FPS, in a power-efficient manner.

  • Connects via USB to any system running Debian Linux (including Raspberry Pi), macOS, or Windows 10.

  • No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU.
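As a sketch of that workflow, a quantized TensorFlow Lite model can be compiled for the Edge TPU with Google's `edgetpu_compiler` command-line tool, installed from Coral's Debian package repository (a Debian-based x86-64 host is assumed, and the model filename below is a placeholder for your own model):

```shell
# Add Coral's package repository and install the Edge TPU Compiler
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" \
  | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install edgetpu-compiler

# Compile a full-integer-quantized (int8) .tflite model for the Edge TPU;
# "mobilenet_v2_quant.tflite" is a placeholder for your own model.
# The compiler writes mobilenet_v2_quant_edgetpu.tflite alongside it.
edgetpu_compiler mobilenet_v2_quant.tflite
```

Note that only models quantized to 8-bit integers can be compiled; floating-point models must be quantized first.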

System requirements

  • A computer with one of the following operating systems:
      ◦ Linux Debian 10, or a derivative thereof (such as Ubuntu 18.04), with a system architecture of either x86-64, Armv7 (32-bit), or Armv8 (64-bit) (Raspberry Pi is supported, but we have only tested Raspberry Pi 3 Model B+ and Raspberry Pi 4)
      ◦ macOS 10.15, with either MacPorts or Homebrew installed
      ◦ Windows 10
  • One available USB port (for the best performance, use a USB 3.0 port)
  • Python 3.5, 3.6, or 3.7
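On a Debian-based system that meets these requirements, the Edge TPU runtime and the PyCoral Python library are installed from Coral's package repository (a sketch assuming that repository has already been added as described on coral.ai):

```shell
# Edge TPU runtime: libedgetpu1-std clocks the device at the default
# frequency; libedgetpu1-max runs faster but the device gets hotter.
sudo apt-get update
sudo apt-get install libedgetpu1-std

# PyCoral library for running inference from Python 3
sudo apt-get install python3-pycoral
```

Plug the accelerator into a USB 3.0 port after installing the runtime (or re-plug it if it was already connected) so the udev rules take effect.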

Brand       Seeed Studio
Accelerator 4 TOPS (int8); 2 TOPS per watt
Connector   USB 3.0 Type-C (data/power)
Dimensions  65 mm x 30 mm