TinyML Ops: Exploiting the principle of locality to improve ML performance
Calibrating models from the cloud to specialize them for the edge
As HOTG is inching closer and closer to our open source launch, we are particularly focused on making Rune
containers easier to deploy. Deploying applications in the TinyML universe is a hassle, and experts agree. So, long story short: we built a THING!
Introducing hmrd (hammer dee): Reliability with TinyML Ops!
To make development and deployment easier, hmrd is a rapid development service that can build Runes, collect data in environments completely owned by you, and deploy Runes back to devices. hmrd is part of our toolkit for TinyML application developers, letting you focus on making cool stuff faster rather than staring at loading bars. hmrd goes live with our open source launch, targeted for the final week of May 2021, so keep an eye out for updates!
TinyML Prototyping: The power of TinyML OPs and why we are excited!
We are still learning so much about the unique advantages of tinyML, specifically around model performance. TinyML models fundamentally cannot be large and complex, so it doesn’t make sense to build and train them on large datasets. Instead, TinyML can leverage a “rough” cloud model and improve it with rapid prototyping. TinyML prototyping is a pain today because of deployment friction: while prototyping, we want to simulate signals, change the application or update the models, and deploy them back seamlessly. hmrd makes this possible on locally collected data, allowing for rapid prototyping and testing.
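To make the loop concrete, here is a minimal sketch of that simulate-test-update cycle. None of these helpers are real hmrd APIs; they are made-up stand-ins that only name the steps:

```python
# Hypothetical sketch of the prototyping loop described above:
# simulate a signal, run the current model, inspect, and redeploy.
# simulate_signal and run_model are illustrative stubs, not hmrd APIs.

def simulate_signal():
    """Stand-in for a simulated sensor reading."""
    return [0.1, 0.4, 0.2]

def run_model(signal):
    """Stand-in for on-device inference (a trivial threshold here)."""
    return max(signal) > 0.3

for iteration in range(3):
    signal = simulate_signal()
    anomaly = run_model(signal)
    print(f"iteration {iteration}: anomaly={anomaly}")
    # ...inspect results, update the model, and deploy it back...
```

The point is the shape of the loop, not the stubs: each pass gives you fresh feedback without a slow flash-and-wait deployment step.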
This led to an amazing moment for us.
Our Ah Ha! Moment
Unlike large cloud based models, TinyML models adhere to the Principle of Locality which means they can actually learn immediately from the environment in which they run. With the ability to collect local data and deploy updated application code/models, tinyML apps can be “calibrated” rapidly and improved iteratively! This is the power of being tiny and local!
Ants are famously among the best super-atom-weight powerlifters in the world. Their prowess differs between species, but some [ants] can lift 10 to 50 times their bodyweight. (source: BBC)
TinyML applications benefit from adhering to the Principle of Locality
Like the mighty common garden ant, TinyML models have an advantage over their larger counterparts. Larger models need to be generalized over more data to perform accurately (a speech recognition model needs to work all over the world!). TinyML models, however, can specialize to their local environment.
In production use cases, such as sound anomaly detection in a factory, several tinyML devices would need to be deployed across a large surface area: the factory floor. The challenge is how to collect data to build a single model that predicts potential failures from vibrational/acoustic signals. In a factory, the meaningful acoustic signal is occluded by background noise and vibrations from the various machinery. The factory’s layout, sensor placement, day of the week, time of year... ad nauseam, would all impact the performance of the trained model. Training a single model to learn anomalies would need a large dataset, and building a robust, accurate model on that wildly noisy data would potentially require more layers, more weights, and more DSP processing blocks.
With calibration, however, deploying a family of models, one per device, can be a solution. Each model instance is then calibrated further to specialize to the local environment of the factory floor where it is installed. Using hmrd and TinyML Ops, you can rapidly deploy Runes versioned to their location in the factory, similar to docker tags.
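As an illustration, location-versioned Rune names could follow a docker-style tag scheme. The `org/app:location` convention below is an assumption for this sketch, not a confirmed hmrd naming rule:

```python
# Hypothetical sketch: docker-style tags for per-location Runes.
# The org/app:location scheme is an assumption for illustration only.
LOCATIONS = ["factory_AL_1", "factory_AL_2", "factory_BL_1"]

def rune_tag(org, app, location):
    """Build a location-versioned Rune name, like a docker tag."""
    return f"{org}/{app}:{location}"

for loc in LOCATIONS:
    print(rune_tag("hotg", "anomaly", loc))
```

Tagging by location means each device pulls exactly the model calibrated for its spot on the floor, while the org/app prefix keeps the whole family grouped together.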
Sneak peek at the fluent hmrd python library
# Build a rune instance from a Runefile
rune = hmrd.Rune("hotg", "anomaly", runefile, "factory_AL_1")
# Get data
rune.get_data()
# Add new model(s) - for example, a tflite model
rune.set_model(model_file_path)
# Rebuild the rune
# ...
# and deploy the rune to the device!
rune.deploy()
We are building hmrd to help you orchestrate and perform TinyML Ops right from your computer, keeping all the sensitive data you might collect private and safe, and allowing you to compile and live-test your models iteratively.
Speed of iteration at your fingertips
Using hmrd, our TinyML Ops tool, you can rapidly build, deploy, and continuously collect labeled data from all sensors, specific to their actual location (e.g. hotg/noise/factory_AL_). Once collected, each model is retrained on the original data plus its sensor’s specific dataset. Over time, each individual sensor becomes more specialized while your overall application’s performance improves!
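That retraining step can be sketched as combining the shared base dataset with each sensor’s locally collected samples. Everything here, the toy data, labels, and location names, is made up for illustration:

```python
# Hypothetical sketch: each sensor's model retrains on the original
# dataset plus the labeled samples collected at its own location.
base_data = [("hum", 0), ("grind", 1)]  # shared (signal, label) toy data

local_data = {
    "factory_AL_1": [("conveyor_rattle", 1)],
    "factory_AL_2": [("hvac_drone", 0)],
}

def training_set(location):
    """Combine the original data with one sensor's local samples."""
    return base_data + local_data[location]

for loc in local_data:
    samples = training_set(loc)
    print(f"{loc}: retrain on {len(samples)} samples")
```

Every sensor keeps the general knowledge from the base dataset while its local samples pull the model toward the noise profile of its own corner of the factory.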
If you love what we are building and/or writing please share!
Subscribe & follow us to check out more upcoming posts!