Over the last few weeks, the HOTG.ai team has been hard at work expanding the Rune runtime and Rune spec to support more workloads. Thanks to our declarative interfaces, we are able to iterate on our runtimes rapidly.
WIT bindgen is a way to describe the interfaces of WebAssembly modules, including the functions they export and import, and to generate stubs for those interfaces in multiple languages. The first piece of work the team tackled was moving our initial hand-rolled interfaces over to WIT bindgen.
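To make that concrete, here is a minimal sketch of the Rust guest side using a recent wit-bindgen release. The package, world, and function names are invented for illustration and are not Rune's actual interfaces, and the exact macro options vary between wit-bindgen versions.

```rust
// Hypothetical example: describing a tiny processing-block interface in WIT
// and implementing it from Rust. Built as a cdylib for a wasm32 target.
// All names here are placeholders, not Rune's real interface.
wit_bindgen::generate!({
    inline: r#"
        package example:blocks;

        world proc-block {
            // The host calls this with a buffer of samples and gets one back.
            export transform: func(input: list<f32>) -> list<f32>;
        }
    "#,
});

struct MyBlock;

// `generate!` produces a `Guest` trait for the world's exports; implementing it
// is all a processing block has to do, regardless of the source language.
impl Guest for MyBlock {
    fn transform(input: Vec<f32>) -> Vec<f32> {
        // Toy transformation: scale every sample.
        input.into_iter().map(|x| x * 2.0).collect()
    }
}

// Wires `MyBlock` up as this module's exported implementation.
export!(MyBlock);
```

Because the interface lives in a WIT description rather than hand-written glue, the same description can be used to generate stubs for other guest languages and for the host side.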
Pure Rust Runtime Implementations
New Rust Runtime on Devices
Our initial runtime for mobile devices was implemented in C++. Updating that implementation required a lot of work and hampered our ability to iterate. This past week, we released a mobile app update where the runtime is defined entirely through WIT interfaces. We are already seeing slight improvements in memory usage! Most importantly, we can now use a single code base for all of our runtimes (aside from the web runtime).
Here’s a sneak peek from our Mobile Engineering Wizard! Our team has been playing with visualizations that render ML pipelines directly in the app :) Neat, huh?
Bringing composability to Rune
From the beginning, Rune was built to provide a way to compose and reuse building blocks of ML pipelines (which we call processing blocks). Our initial version of Rune hadn’t quite hit this goal as we still needed to write all our processing blocks in Rust and everything was compiled into a single WebAssembly binary. Our team is excited to share that we are finally changing this!
We can now write processing blocks in multiple languages, as long as they conform to our interfaces.
This means you could have processing blocks, models, and data ingestion written in languages from Go to Java.
We’ve accomplished this by compiling each processing block to WebAssembly individually and embedding both the compiled binaries and the pipeline DAG in the final Rune. The runtime then loads each processing block and decides how to evaluate it based on how data flows through the pipeline.
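To illustrate the idea (this is not the actual runtime code), here is a simplified sketch of evaluating a pipeline DAG where each node stands in for one compiled processing block. In the real runtime each node would be an instantiated WebAssembly module; the names, types, and closures below are invented to keep the example self-contained.

```rust
use std::collections::HashMap;

/// One node in the pipeline DAG: a processing block plus the nodes it reads from.
struct Node {
    name: &'static str,
    inputs: Vec<&'static str>,
    // Stand-in for an instantiated WebAssembly processing block.
    block: Box<dyn Fn(Vec<Vec<f32>>) -> Vec<f32>>,
}

fn run_pipeline(nodes: Vec<Node>) -> HashMap<&'static str, Vec<f32>> {
    let mut outputs: HashMap<&'static str, Vec<f32>> = HashMap::new();
    // Assumes `nodes` is already in topological order, so every input is ready
    // by the time a node runs.
    for node in nodes {
        let inputs: Vec<Vec<f32>> =
            node.inputs.iter().map(|n| outputs[n].clone()).collect();
        let result = (node.block)(inputs);
        outputs.insert(node.name, result);
    }
    outputs
}

fn main() {
    let pipeline = vec![
        Node {
            name: "source",
            inputs: vec![],
            block: Box::new(|_| vec![1.0, 2.0, 3.0]),
        },
        Node {
            name: "normalize",
            inputs: vec!["source"],
            block: Box::new(|ins: Vec<Vec<f32>>| {
                ins[0].iter().map(|x| x / 3.0).collect()
            }),
        },
    ];
    println!("{:?}", run_pipeline(pipeline)["normalize"]);
}
```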
Expect a more detailed breakdown of our runtime in the coming weeks.
Edge in-memory OLAP Analytics
Finally, our team is working hard on providing more value through portable computing. One direction we are exploring is addressing the frustration of not being able to run ML and ad-hoc queries quickly on your own data. To that end, we are building on top of great technologies like Apache Arrow and DuckDB to put together a way to run Runes directly on the data on your laptop. No ETL, no infrastructure; just start finding insights.
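As a taste of the kind of workflow this enables, here is a small, hypothetical sketch using the duckdb Rust crate to run an ad-hoc query over a CSV file sitting on your laptop, entirely in memory. The file name, column names, and query are made up for illustration; this is not our actual integration.

```rust
// Hypothetical example: an in-process, in-memory DuckDB query over a local
// CSV file. No server, no ETL step.
use duckdb::{Connection, Result};

fn main() -> Result<()> {
    let conn = Connection::open_in_memory()?;

    // DuckDB can query a CSV file on disk directly.
    let mut stmt = conn.prepare(
        "SELECT device_id, AVG(temperature) AS avg_temp
         FROM read_csv_auto('sensor_readings.csv')
         GROUP BY device_id
         ORDER BY avg_temp DESC",
    )?;

    let rows = stmt.query_map([], |row| {
        Ok((row.get::<_, String>(0)?, row.get::<_, f64>(1)?))
    })?;

    for row in rows {
        let (device, avg_temp) = row?;
        println!("{device}: {avg_temp:.2}");
    }
    Ok(())
}
```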
We want to work with you to make an amazing experience for Edge Analytics. Feel free to reach out to kartik@hotg.ai if this interests you!
Have a sneak peek at the video below!
Until then, keep building the future!
HOT-G Socials:
Twitter @hotg_ai and @hammer_otg | LinkedIn | Discord