Effective AI inferencing for embedded systems
PROGRAMMABLE PERFORMANCE
- 256-bit vector processing unit delivering 100 GOPS of int8 AI inferencing performance.
- No heterogeneous communication overhead: inference runs on the same device as the rest of the application, with no data shuttled to and from a separate accelerator.
- Supports binarized networks, achieving over 800 GOPS.
- 1 MB of on-chip SRAM, supporting AI models with tensor arena requirements of up to ~800 kB.
- Weights stored in external flash memory; the large on-chip memory maximizes AI model inference performance.
These features make xcore.ai a powerful and versatile choice for AI applications.
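The GOPS figures above count 8-bit multiply-accumulate operations. As a point of reference, the scalar loop below is a minimal sketch (plain C++, not xcore-specific) of the int8 × int8 → int32 accumulation at the heart of quantized inference; the 256-bit vector unit executes many such lanes per cycle.

```cpp
#include <cstddef>
#include <cstdint>

// Reference int8 dot product with a wide int32 accumulator -- the core
// operation of 8-bit quantized inference. Scalar sketch for illustration
// only; xcore.ai's 256-bit vector unit processes many lanes in parallel.
int32_t dot_int8(const int8_t* a, const int8_t* b, size_t n) {
    int32_t acc = 0;  // 32-bit accumulator avoids int8 overflow
    for (size_t i = 0; i < n; ++i) {
        acc += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
    }
    return acc;
}
```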
EASY TO USE
- XMOS AI Tools allow deployment of AI models developed in Python-based environments like PyTorch and TensorFlow.
- Models are optimized for embedded use with TensorFlow Lite, using high-performance operators that exploit the xcore.ai capabilities and minimize tensor arena sizes.
- Generated C code can be integrated with other embedded functions, allowing xcore.ai to efficiently handle image-based and audio-based applications.
- While most embedded AI apps use 8-bit quantization, xcore.ai also supports binarized networks and larger datatypes (e.g., floating point) where needed.
The combination of the XMOS AI Tools and the xcore architecture simplifies the deployment of your AI models; the sketch below illustrates a typical on-device inference setup.
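As a rough illustration of the deployment flow, this sketch shows a typical TensorFlow Lite for Microcontrollers inference setup with a statically allocated tensor arena. The model symbol (`g_model_data`), arena size, and operator set are hypothetical, and TFLite Micro API details vary between versions; the XMOS AI Tools slot xcore-optimized operators into this same flow.

```cpp
#include <cstddef>
#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Hypothetical model flatbuffer emitted by the tools.
extern const unsigned char g_model_data[];

// Statically allocated tensor arena. The size is model-dependent and
// hypothetical here; on xcore.ai the arena lives in on-chip SRAM
// (up to ~800 kB available).
constexpr size_t kArenaSize = 128 * 1024;
alignas(16) static uint8_t tensor_arena[kArenaSize];

int8_t run_inference(const int8_t* input_data, size_t input_len) {
    const tflite::Model* model = tflite::GetModel(g_model_data);

    // Register only the operators the model uses, keeping code size small.
    tflite::MicroMutableOpResolver<3> resolver;
    resolver.AddConv2D();
    resolver.AddFullyConnected();
    resolver.AddSoftmax();

    tflite::MicroInterpreter interpreter(model, resolver,
                                         tensor_arena, kArenaSize);
    interpreter.AllocateTensors();           // carves tensors out of the arena

    TfLiteTensor* input = interpreter.input(0);
    for (size_t i = 0; i < input_len; ++i)   // copy quantized int8 input
        input->data.int8[i] = input_data[i];

    interpreter.Invoke();                    // run the model on-device
    return interpreter.output(0)->data.int8[0];
}
```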
FLEXIBLE / SCALABLE
- Unique design with 16 independent hardware threads per device.
- Threads can be used independently or collaboratively.
- Supports a diverse range of functions, including AI inferencing, signal conditioning, I/O, and control.
- High-speed communication between threads and between devices provides:
  - Scalability of performance and memory.
  - Concurrent implementation of high-performance inferencing alongside other functions.
The xcore architecture provides flexibility and efficiency across a wide range of embedded applications; the sketch below shows the concurrency pattern in portable form.
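On xcore.ai, this concurrency is expressed through the XMOS toolchain's own threading constructs. As a portable analogue only, the hypothetical C++ sketch below shows the pattern that the hardware threads enable: inference running continuously alongside independent I/O and control tasks.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Portable illustration only: real xcore.ai code uses the XMOS
// toolchain's threading constructs, not std::thread. Each task
// body below is a hypothetical placeholder.
std::atomic<bool> running{true};

void inference_task() { while (running) { /* run the AI model */ } }
void io_task()        { while (running) { /* service sensors and I/O */ } }
void control_task()   { while (running) { /* application control loop */ } }

int main() {
    // On xcore.ai, each task would map to an independent hardware thread,
    // communicating over high-speed channels rather than shared memory.
    std::thread inference(inference_task);
    std::thread io(io_task);
    std::thread control(control_task);

    std::this_thread::sleep_for(std::chrono::seconds(1));  // let tasks run
    running = false;

    inference.join();
    io.join();
    control.join();
}
```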
APPLICATIONS
Personal electronics and smart home:
- INTELLIGENT ENERGY MANAGEMENT
- VOICE-ACTIVATED ASSISTANTS
- SMART LIGHTING
- SECURITY SYSTEMS
- SMART HVAC