Research summary
Project overview
This project examines how neural networks can be restructured for constrained hardware with minimal loss of predictive quality. The study centres on the practical tension between model accuracy and edge-device performance.
Approach
The research combines pruning, quantization, and deployment profiling to compare lightweight model variants across representative embedded targets. The implementation focuses on repeatable experiments and measurable runtime gains.
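The two compression steps named above can be sketched in a minimal, framework-free form. This is an illustrative example only, not the project's implementation: it assumes unstructured magnitude pruning (zeroing the smallest-magnitude weights) and symmetric per-tensor int8 quantization, two common baseline variants of these techniques. The function names and parameters are hypothetical.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def quantize_int8(weights):
    """Symmetric linear quantization to int8; returns (quantized, scale)."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to float32 for accuracy comparison."""
    return q.astype(np.float32) * scale

# Toy weight matrix standing in for one layer of a trained model
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8)).astype(np.float32)

pruned = magnitude_prune(w, sparsity=0.5)   # at least 50% of entries zeroed
q, scale = quantize_int8(pruned)            # 4x smaller storage than float32
recovered = dequantize(q, scale)            # reconstruction for error checks
```

Comparing `recovered` against the original weights (or, more usefully, comparing model outputs before and after) is the kind of accuracy-versus-footprint measurement the deployment profiling step would quantify on real embedded targets.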
Impact
The outcome is a more practical pathway for deploying intelligent features in sensors, field devices, and low-power endpoints where connectivity and compute budget are limited.