Edge AI & TinyML
Build real on-device AI systems under strict memory, latency, and power constraints.
Co-Created by Kiran Shirol and Claude
12 chapters · One focused chapter per critical edge topic
Foundation
Scope and Constraints
Deployment tiers, hardware budgets, and sensor-data realities.
1. Edge AI vs TinyML: What Runs Where
Understand edge tiers and decide when TinyML is required instead of general edge inference.
2. Hardware Budgets: RAM, Flash, Cycles, Power
Build practical budget envelopes for memory, compute, latency, and battery life.
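The budget-envelope idea from this chapter can be sketched as simple arithmetic: add up what the model and firmware consume per resource and compare against the target's limits. A minimal sketch follows, assuming illustrative numbers (the Cortex-M4-class target figures and the ~1 MAC/cycle latency heuristic are placeholders, not values from the course):

```python
# Sketch of a hardware budget envelope check: does a candidate model fit a
# target MCU? All target figures below are hypothetical examples.

def fits_budget(model_flash_kb, arena_ram_kb, macs, target):
    """Return per-resource verdicts for a candidate model on a target MCU."""
    flash_used = model_flash_kb + target["firmware_flash_kb"]
    ram_used = arena_ram_kb + target["runtime_ram_kb"]
    # Rough latency estimate: assume ~1 MAC per cycle on an optimized
    # int8 kernel (a coarse planning heuristic, not a measurement).
    latency_ms = macs / target["cycles_per_sec"] * 1000
    return {
        "flash_ok": flash_used <= target["flash_kb"],
        "ram_ok": ram_used <= target["ram_kb"],
        "latency_ok": latency_ms <= target["latency_budget_ms"],
        "latency_ms": latency_ms,
    }

# Hypothetical Cortex-M4-class target at 80 MHz.
target = {
    "flash_kb": 1024, "firmware_flash_kb": 256,
    "ram_kb": 256, "runtime_ram_kb": 64,
    "cycles_per_sec": 80_000_000, "latency_budget_ms": 100,
}

verdict = fits_budget(model_flash_kb=400, arena_ram_kb=120,
                      macs=2_000_000, target=target)
```

The point of the envelope is that a model must clear every budget at once; failing any single resource (flash, RAM, latency, or power) disqualifies it.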
3. Sensor Data Pipelines for Tiny Devices
Build robust sensor datasets that survive real-world noise and operating drift.
Modeling
Tiny Architectures and Compression
Model-family selection and size/quality optimization.
4. Tiny Model Architectures That Work
Map edge tasks to architecture families without overfitting to benchmark hype.
5. Compression and Quantization for Deployment
Shrink models safely while preserving deployment-critical behavior.
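To make the quantization chapter concrete: the core move is mapping float weights to 8-bit integers plus a scale factor. A minimal sketch of symmetric per-tensor int8 weight quantization, the scheme MCU-oriented kernel libraries such as CMSIS-NN expect for weights (the example values are illustrative):

```python
# Minimal sketch of symmetric int8 weight quantization: one scale per
# tensor, zero-point fixed at 0, values clamped to the int8 range.

def quantize_int8(weights):
    """Map float weights to int8 with a single symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 1.27]
q, scale = quantize_int8(w)   # q is [50, -127, 0, 127], scale about 0.01
```

The dequantized values differ from the originals by at most half a quantization step, which is exactly the error budget that "shrink safely" refers to: acceptable when the step is small relative to the weight distribution.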
Runtime
Runtime and Firmware
LiteRT, ExecuTorch, ONNX Runtime Mobile, and RTOS integration.
6. LiteRT Micro Stack and CMSIS-NN
Deploy on MCUs with predictable memory planning and optimized kernels.
7. ExecuTorch and ONNX Runtime Mobile
Choose the right mobile runtime path based on operator support and deployment constraints.
8. RTOS and Firmware Integration
Integrate inference with real firmware scheduling, interrupts, and watchdog safeguards.
Performance
Acceleration and Measurement
Accelerators, delegates, and benchmark discipline.
9. Accelerators and Delegates
Use accelerators intentionally and account for fallback paths in performance planning.
10. Benchmarking TinyML Correctly
Build benchmark practice that reflects real workloads and supports release decisions.
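One discipline the benchmarking chapter points at is reporting tail latency rather than a single average, since on-device release decisions hinge on worst-case behavior. A minimal harness sketch (the warmup/iteration counts and the toy workload are arbitrary choices for illustration):

```python
# Sketch of a benchmark loop that reports p50/p95/max latency instead of
# a single mean, with a warmup phase excluded from the measurements.
import time

def benchmark(fn, warmup=10, iters=100):
    """Time fn() repeatedly and report latency percentiles in milliseconds."""
    for _ in range(warmup):          # warm caches and allocators first
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    return {
        "p50_ms": samples[len(samples) // 2],
        "p95_ms": samples[int(len(samples) * 0.95)],
        "max_ms": samples[-1],
    }

# Toy stand-in workload; on a device this would be one inference call.
stats = benchmark(lambda: sum(i * i for i in range(1000)))
```

On real hardware the same structure applies, but timing would come from a cycle counter or RTOS tick rather than `time.perf_counter`, and the workload would include preprocessing, not just the model invoke.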
Operations
Security and Delivery
OTA lifecycle and end-to-end deployment playbooks.
11. Security, Safety, and OTA Model Lifecycle
Operate edge model fleets with security controls and reliable update discipline.
12. End-to-End Playbooks for Edge AI and TinyML
Turn principles into deployable templates with clear acceptance and rollout criteria.
Explore Related Courses
Small Models & Local AI
Quantization and local runtime foundations
AI Infrastructure
Hardware, serving, and deployment architecture
MLOps & LLMOps
Lifecycle operations and production governance