ARIN UPADHYAY
Embedded Systems & Low Level C Developer




PROJECTS

These projects (and more) are available here: www.github.com/shinymonitor


25-05-2025

EFPIX

In a world of growing surveillance, censorship and infrastructure failure, we need protocols that ensure unrestricted communication in the most disconnected and totalitarian environments. EFPIX (Encrypted Flood Protocol for Information eXchange) is a topology-agnostic flood-type relay protocol that also achieves end-to-end encryption, plausible deniability for users, untraceability of messages, spam resistance and other optional features.

Why use EFPIX

Modern messaging platforms, even encrypted ones, depend on centralized infrastructure, which makes them vulnerable to:

  • Government surveillance or takedown
  • Infrastructure loss
  • Metadata leaks
EFPIX counters this by never requiring a central server and by hiding not only the message content but also the sender's and receiver's identities.

Where to use EFPIX

  • Disaster zones where network infrastructure has been lost
  • Authoritarian regimes, for whistleblowing, journalism, and activism where communication is surveilled and censored
  • Space, research, military, or other remote networks where hosting and maintaining a server is not feasible
  • General-purpose privacy-focused applications
  • Broadcasts like emergency rescue calls, disaster warnings, and news distribution

What is it designed for

EFPIX is not optimized for high-bandwidth or ultra-low-latency applications. It is best suited for asynchronous, privacy-critical communication such as secure email, file drops, offline messaging, and distributed alerts.
"0% performance, 100% security": EFPIX prioritizes anonymity and resilience over speed. Choose optional features to balance your needs.

For those interested in the full technical details, check out the full whitepaper (PDF) here or on my GitHub.


07-09-2025

QMTIK

When most people think of neural networks, they imagine massive models trained on racks of GPUs, running in the cloud. But what if you could train a neural network entirely on a CPU and run it on a microcontroller? That's exactly what QMTIK (Quantized Model Training and Inference Kit) sets out to do. It is a minimal, dependency-free implementation of a quantized neural network designed for embedded systems and resource-constrained environments. By using 8-bit integers for both weights and activations and avoiding heap allocation entirely, it delivers the efficiency needed to deploy machine learning on devices with just a few kilobytes of memory.

Why Quantization

Normally, neural networks are trained and run with 32-bit floating-point weights and activations, but this makes running the models on embedded hardware extremely difficult due to limited memory and slower hardware. By quantizing everything to 8 bits, we get:

  • 4x smaller models which are easier to store
  • 2-4x faster inference
  • Minimal accuracy loss (often <1%) if training is quantization-aware
For example, on the MNIST digit recognition task, this kit achieves ~95% test accuracy with just 327 KB model size (vs ~1.2 MB for float32) and <1 ms inference time on a modern CPU, which is a ~14x speedup. That's small and fast enough to run in real-time on many microcontrollers.

What is it good for

This kit is not meant to compete with cloud-scale AI. Instead, it shines in environments where:

  • Memory is limited and/or hardware is slow (IoT devices, wearables, industrial sensors)
  • Power is constrained (edge devices running on batteries or solar)
  • Fast inference is needed (real-time applications)
  • Determinism is required (real-time systems where malloc/free isn't allowed)
  • Learning is the goal (understanding how quantized neural nets really work)
  • Rapid prototyping (quickly test a small neural network idea)

Features

  • INT8 weights and activations for low memory usage, small model and fast inference
  • Adam optimization with batching
  • Quantization-Aware Training to minimize accuracy loss
  • Extremely configurable: custom network architectures; multiple activation, output, cost, and learning-decay functions; and adjustable scaling factors
  • No dynamic memory allocation
  • No dependencies

17-11-2025

NCT

The C ecosystem is a little hypocritical. We write our applications in a language that gives us precise control, but then surrender control of our build process to inefficient DSLs and shell scripts. A build script is a program like any other, so why not write it in C? NCT (NiCeTy or Nice C Tea) is a lightweight command-line project manager designed to bring a standardized and convenient workflow to C development and C-based build recipes. NCT addresses the common need for a simple, cross-platform way to initialize, build, and test C projects.

Why use NCT

It is important to clarify that NCT is a project manager, not a build system. Its purpose is to standardize the workflow around the build process. The core reasoning for NCT is based on four principles:

  • Standardization: Provides a consistent project structure and command set, eliminating the need to decipher custom build instructions for each new C project.
  • Convenience: Offers a complete workflow from initialization to compilation, with built-in utilities to simplify common tasks.
  • Cross-Platform: Delivers a uniform command-line experience on both Windows and Unix, addressing the fact that tools like make are not standard on all platforms.
  • Flexibility: NCT provides a powerful utility header build.h to handle common tasks like incremental compilation and dependency fetching. However, you are never locked in. The C-based recipes allow for the use of any library or toolchain. You can use standard libraries, write helper functions, and manage complexity with the same tools you use for your main application. The system can be extended with more advanced build libraries like nob.h or configured to use any compiler.
NCT aims to bring a simple, "one command" experience to the C ecosystem without sacrificing the power and flexibility that C developers expect.