4 Comments

Great progress! What would be the benefit of using Rune rather than running TensorFlow Lite directly in the web browser with WASM or on a smartphone? Would it be obfuscation and containerization?


Rune isn't trying to replace TensorFlow Lite; instead, it is trying to remove a lot of the friction you normally encounter when incorporating machine learning into your app.

The key thing to remember is that most ML applications do a lot more than just pass data to a model. There is often a lot of pre-processing and post-processing involved (e.g. rescaling an image, cropping it to be square, then applying greyscale and edge detection), and you may be taking the output from one model, transforming it, then using it as the input to another model. Normally this is done in procedural code and needs to be rewritten for each platform (Python on the server, JavaScript in the browser, Swift on iOS, and so on).
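
To make that concrete, here is a minimal sketch in plain Rust of the kind of pre-processing code that would otherwise need to be rewritten for each platform. It uses no external crates, and the buffer layout (interleaved 8-bit RGB) and helper names are assumptions for the example, not anything Rune-specific:

```rust
/// Illustrative pre-processing helpers of the kind an ML app typically
/// runs before data ever reaches a model. The RGB, row-major buffer
/// layout and the function names are assumptions for this sketch.

/// Convert an interleaved RGB buffer to single-channel greyscale
/// using the common luma weights.
fn to_greyscale(rgb: &[u8]) -> Vec<u8> {
    rgb.chunks_exact(3)
        .map(|px| {
            let (r, g, b) = (px[0] as f32, px[1] as f32, px[2] as f32);
            (0.299 * r + 0.587 * g + 0.114 * b) as u8
        })
        .collect()
}

/// Nearest-neighbour rescale of a greyscale image to `side` x `side`,
/// i.e. the "rescale / make it square" step mentioned above.
fn resize_square(grey: &[u8], width: usize, height: usize, side: usize) -> Vec<u8> {
    let mut out = vec![0u8; side * side];
    for y in 0..side {
        for x in 0..side {
            let src_x = x * width / side;
            let src_y = y * height / side;
            out[y * side + x] = grey[src_y * width + src_x];
        }
    }
    out
}

fn main() {
    // A dummy 4x2 RGB image standing in for real camera input.
    let rgb = vec![128u8; 4 * 2 * 3];
    let grey = to_greyscale(&rgb);
    let square = resize_square(&grey, 4, 2, 2);
    println!("model input: {:?}", square);
}
```

Keeping this logic identical across a Python, JavaScript, and Swift port is exactly the kind of maintenance burden Rune is meant to remove.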

With Rune, you can write a single declarative Runefile which specifies how the ML pipeline is set up and reuses existing "Proc Blocks" to do various transformations. This then gets compiled into a single Rune that can be loaded by the Rune runtime library, and all you need to care about is that raw data (e.g. images, audio clips, bytes) goes in one end and useful information comes out the other.
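
From the host application's point of view, the intended shape of the interaction is roughly the following. This is a hedged sketch only: the `Runtime` struct and its `load`/`predict` methods are hypothetical stand-ins defined inside the example, not the actual Rune runtime API, and `pipeline.rune` is a made-up file name.

```rust
/// Hypothetical stand-in for the Rune runtime; the real library's API may
/// differ. The point is the shape of the interaction: raw bytes go in,
/// useful information comes out, and the pipeline lives inside the Rune.
struct Runtime {
    rune: Vec<u8>, // the compiled WebAssembly Rune
}

impl Runtime {
    /// Load a compiled Rune (a WebAssembly blob produced from a Runefile).
    fn load(rune: Vec<u8>) -> Self {
        Runtime { rune }
    }

    /// Feed raw data in one end and get useful information out the other.
    /// The returned string here is just a placeholder for the example.
    fn predict(&self, _raw_input: &[u8]) -> String {
        format!("<prediction from a {}-byte Rune>", self.rune.len())
    }
}

fn main() {
    // Both the file name and the buffer contents are made up for the sketch.
    let rune_bytes = std::fs::read("pipeline.rune").unwrap_or_default();
    let runtime = Runtime::load(rune_bytes);

    let raw_audio: Vec<u8> = vec![0; 16_000]; // e.g. one second of audio samples
    println!("{}", runtime.predict(&raw_audio));
}
```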

You can also trust that different platforms will give the same results, because they are all running the same WebAssembly, whereas for anything more than the simplest operations it is hard to keep separate Python, JavaScript, and Swift implementations in sync.

Obfuscation is a happy by-product, but we have plans to build more robust and sound security mechanisms into the system.


Sounds great! Are there any caveats or drawbacks in terms of, say, performance? Rust is a very efficient systems language, but can my code in Rune keep up with, let's say, built-in (optimized) Swift functions on iOS?


Absolutely! Benchmarking is part of the ongoing work we plan to explore. Right now we are focused on developing more use cases so we can explore various ML pipelines.
