Agent harness for hot-reloadable function evolution in Rust.
You specify the function signature; the LLM writes the implementation, which is compiled natively and hot-swapped into your running binary for evaluation — bare-metal execution, zero interpreter overhead.

How it works

USER (specify signature) → LLM (generate body) → VALIDATE (parse + signature) → COMPILE (native .so) → HOT-SWAP (in-place) → EVALUATE (against metric), with the HARNESS catching panics and errors and feeding them back to the LLM as backpressure / prompt steering — constrained generation inside an evolution loop.

You define the function signature and the harness enters a tight evolution loop: the LLM writes the body, the harness validates and compiles it natively, hot-swaps the dylib in-place, then evaluates against your metric. Diagnostics and failures are fed back as prompt steering — constrained generation, with bare-metal execution.
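The steering loop above can be sketched in plain Rust. This is an illustrative mock, not the crate's actual internals: `evolve_once`, `generate`, and `validate` are hypothetical names, and the "agent" and "validator" are stand-in closures that simulate a failed attempt being corrected after the diagnostic is appended to the prompt.

```rust
// Hypothetical sketch of the evolution loop's steering: each failed
// validation feeds its diagnostic back into the next prompt, so
// generation is constrained toward code that actually compiles.
fn evolve_once(
    generate: impl Fn(&str) -> String,
    validate: impl Fn(&str) -> Result<(), String>,
    base_prompt: &str,
    max_attempts: usize,
) -> Option<String> {
    let mut prompt = base_prompt.to_string();
    for _ in 0..max_attempts {
        let body = generate(&prompt);
        match validate(&body) {
            Ok(()) => return Some(body),
            // Backpressure: append the diagnostic as prompt steering.
            Err(diag) => {
                prompt = format!("{base_prompt}\nPrevious attempt failed with: {diag}");
            }
        }
    }
    None
}

fn main() {
    use std::cell::Cell;
    // Mock agent: the first attempt is invalid; the second "fixes" the
    // body after seeing the diagnostic in the prompt.
    let attempts = Cell::new(0);
    let generate = |prompt: &str| {
        attempts.set(attempts.get() + 1);
        if prompt.contains("failed") {
            "*counter += 7;".to_string()
        } else {
            "counter += 7".to_string()
        }
    };
    // Mock validator standing in for the real parse + signature check.
    let validate = |body: &str| {
        if body.starts_with('*') && body.ends_with(';') {
            Ok(())
        } else {
            Err("expected a `*counter` deref and a trailing `;`".to_string())
        }
    };
    let body = evolve_once(generate, validate, "increment the counter", 3).unwrap();
    assert_eq!(body, "*counter += 7;");
    assert_eq!(attempts.get(), 2); // converged on the second attempt
}
```

In the real harness the validator is the Rust compiler itself, so the diagnostics fed back are actual rustc errors rather than hand-written strings.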

Quick Start

symbiont::evolvable! {
    fn step(counter: &mut usize) {
        // Default body will be entirely evolved by the Agent
        *counter += 1;
        println!("doing stuff in iteration {}", counter);
    }
}

#[tokio::main]
async fn main() -> symbiont::Result<()> {
    let runtime = symbiont::Runtime::init(SYMBIONT_DECLS).await?;
    let agent = symbiont::inference::init_agent()?;
    let fn_sigs = runtime.fn_sigs();
    let prompt = format!(
        "Give a concise implementation for this function signature: ```{}```, \
        that increments the counter by a constant in the range (5..20). \
        Give Rust Code Only.",
        fn_sigs[0]
    );

    let mut counter = 0;
    loop {
        step(&mut counter);  // bare-metal: native dylib call
        println!("counter: {counter}");

        // LLM rewrites the fn, harness validates + compiles + hot-swaps
        runtime.evolve(&agent, &prompt).await?;
        // `step` function was updated and new Agent code runs in the next loop iteration
    }
}

Similar code can be run with cargo run --bin counter-example in the repo to showcase function evolution.
There, the function is evolved every 5 seconds.

Now that you grok the core concept, a whole new world of opportunity opens up...

Core Highlights

Type-safe agentic code

LLMs express intent as Rust functions with enforced signatures. The compiler is the guardrail.

Constrained generation

Parse errors, signature mismatches, and compiler diagnostics steer the LLM until it produces valid code.

Hot-swap dylibs

Functions compile to native shared libraries and swap in-place via libloading — no process restart.

Bare-metal performance

~1 ns dispatch overhead. Lock-free hot path via AtomicPtr. Multi-thread safe.
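The lock-free hot path can be sketched with std alone. This is a simplified stand-in for the crate's mechanism (the names `CURRENT`, `hot_swap`, `step_v1`, and `step_v2` are hypothetical, and the real harness loads the new function from a freshly compiled dylib rather than from a static function): the live implementation sits behind an AtomicPtr, so dispatch is a single atomic load and a swap installs new code without locking or restarting.

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

type StepFn = fn(&mut usize);

// The live implementation sits behind an AtomicPtr; callers pay one
// atomic load per dispatch.
static CURRENT: AtomicPtr<()> = AtomicPtr::new(std::ptr::null_mut());

fn hot_swap(new_fn: StepFn) {
    // Release pairs with the Acquire load in `step`.
    CURRENT.store(new_fn as *mut (), Ordering::Release);
}

fn step(counter: &mut usize) {
    let raw = CURRENT.load(Ordering::Acquire);
    // Safety: `raw` only ever holds a valid `StepFn` installed by `hot_swap`.
    let f: StepFn = unsafe { std::mem::transmute(raw) };
    f(counter);
}

// Two "generations" of the evolved function.
fn step_v1(counter: &mut usize) { *counter += 1; }
fn step_v2(counter: &mut usize) { *counter += 7; }

fn main() {
    hot_swap(step_v1);
    let mut counter = 0;
    step(&mut counter);   // dispatches to v1
    hot_swap(step_v2);    // "evolution": swap in-place, no restart
    step(&mut counter);   // dispatches to v2
    assert_eq!(counter, 8);
}
```

Because callers only read the pointer, concurrent threads can keep dispatching while an evolution step publishes the next generation.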

Panic catching

Runtime panics in LLM code are caught inside the dylib and fed back as prompt context automatically.
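A minimal sketch of this mechanism, assuming the standard catch_unwind approach (the name `call_evolved` is hypothetical, not the crate's API): the evolved function is wrapped in std::panic::catch_unwind, and the panic payload is downcast back to a string the harness can append to the next prompt.

```rust
use std::panic::{self, UnwindSafe};

// Wrap the evolved function so an LLM-written panic becomes an Err
// carrying the panic message instead of unwinding across the dylib
// boundary.
fn call_evolved(f: impl FnOnce() + UnwindSafe) -> Result<(), String> {
    panic::catch_unwind(f).map_err(|payload| {
        // Panic payloads are usually &str or String.
        if let Some(s) = payload.downcast_ref::<&str>() {
            (*s).to_string()
        } else if let Some(s) = payload.downcast_ref::<String>() {
            s.clone()
        } else {
            "non-string panic payload".to_string()
        }
    })
}

fn main() {
    // Silence the default panic hook's stderr output for the demo.
    panic::set_hook(Box::new(|_| {}));
    let err = call_evolved(|| panic!("index out of bounds")).unwrap_err();
    // The harness would feed `err` back as prompt context here.
    assert_eq!(err, "index out of bounds");
    assert!(call_evolved(|| ()).is_ok());
}
```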

Plug-in inference

Any OpenAI-compatible provider via rig. Local or cloud.

Performance

Dispatch overhead

~1 ns
per function call

Compilation

~120ms
per evolution, depending on the Agent's code

Inference latency

Seconds
depends on your choice of model and hardware

Examples