Agent harness for hot-reloadable function evolution in Rust.
You specify the function signature; the LLM writes the implementation, which is compiled natively and hot-swapped into your running binary — bare-metal execution, zero interpreter overhead.
Concretely, the harness runs a tight evolution loop: the LLM writes the body, the harness validates and compiles it natively, hot-swaps the dylib in place, then evaluates it against your metric. Compiler diagnostics and runtime failures are fed back as prompt steering — constrained generation with bare-metal execution.
```rust
symbiont::evolvable! {
    fn step(counter: &mut usize) {
        // Default body; will be entirely evolved by the agent
        *counter += 1;
        println!("doing stuff in iteration {}", counter);
    }
}
```
```rust
#[tokio::main]
async fn main() -> symbiont::Result<()> {
    let runtime = symbiont::Runtime::init(SYMBIONT_DECLS).await?;
    let agent = symbiont::inference::init_agent()?;
    let fn_sigs = runtime.fn_sigs();
    let prompt = format!(
        "Give a concise implementation for this function signature: ```{}```, \
         that increments the counter by a constant in the range (5..20). \
         Give Rust Code Only.",
        fn_sigs[0]
    );

    let mut counter = 0;
    loop {
        step(&mut counter); // bare-metal: native dylib call
        println!("counter: {counter}");

        // LLM rewrites the fn; harness validates + compiles + hot-swaps
        runtime.evolve(&agent, &prompt).await?;
        // `step` was updated; the new agent code runs in the next loop iteration
    }
}
```

Similar code can be run with `cargo run --bin counter-example` in the repo to showcase function evolution.
Here the function is evolved every 5 seconds.
Now that you grok the core concept, a whole new world of opportunity opens up...
LLMs express intent as Rust functions with enforced signatures. The compiler is the guardrail.
Parse errors, signature mismatches, and compiler diagnostics steer the LLM until it produces valid code.
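A minimal sketch of this steering loop, not the harness's actual internals: the real harness uses the Rust compiler itself as the validator, while here a simple string check on the required signature stands in for it, and `validate`, `steer_prompt`, and the example snippets are all illustrative names.

```rust
// Illustrative stand-in for compiler-driven validation: reject a candidate
// body that does not carry the enforced signature, and fold the diagnostic
// into the next prompt so the LLM can self-correct.
fn validate(candidate: &str, required_sig: &str) -> Result<(), String> {
    if candidate.contains(required_sig) {
        Ok(())
    } else {
        Err(format!("signature mismatch: expected `{required_sig}`"))
    }
}

fn steer_prompt(base: &str, diagnostic: &str) -> String {
    format!("{base}\nPrevious attempt failed:\n{diagnostic}\nFix and resend Rust code only.")
}

fn main() {
    let sig = "fn step(counter: &mut usize)";
    let bad = "fn step(counter: usize) { counter + 7 }";
    let good = "fn step(counter: &mut usize) { *counter += 7; }";

    // A bad candidate produces a diagnostic that becomes prompt context.
    let err = validate(bad, sig).unwrap_err();
    let retry = steer_prompt("Implement step.", &err);
    assert!(retry.contains("signature mismatch"));

    // A conforming candidate passes validation and proceeds to compilation.
    assert!(validate(good, sig).is_ok());
}
```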
Functions compile to native shared libraries and swap in-place via libloading — no process restart.
~1 ns dispatch overhead. Lock-free hot path via AtomicPtr. Thread-safe.
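A minimal sketch of how an `AtomicPtr`-based lock-free dispatch can work, assuming a single swappable function slot and omitting the dylib loading itself; `StepFn`, `step_v1`, `step_v2`, and `demo` are illustrative names, not the crate's API.

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

type StepFn = fn(&mut usize);

fn step_v1(c: &mut usize) { *c += 1; }
fn step_v2(c: &mut usize) { *c += 10; }

// The current implementation, stored as a type-erased pointer.
static CURRENT: AtomicPtr<()> = AtomicPtr::new(std::ptr::null_mut());

fn swap_impl(new_fn: StepFn) {
    // Publishing a new implementation is a single atomic store.
    CURRENT.store(new_fn as *mut (), Ordering::Release);
}

fn call_step(c: &mut usize) {
    // Hot path: one atomic load, no locks, callers never block on a swap.
    let p = CURRENT.load(Ordering::Acquire);
    assert!(!p.is_null());
    let f: StepFn = unsafe { std::mem::transmute(p) };
    f(c);
}

fn demo() -> usize {
    swap_impl(step_v1);
    let mut counter = 0;
    call_step(&mut counter); // v1: +1
    swap_impl(step_v2);      // hot-swap in place, no restart
    call_step(&mut counter); // v2: +10
    counter
}

fn main() {
    assert_eq!(demo(), 11);
    println!("counter = {}", demo());
}
```

In the real harness the new function pointer would come from a freshly loaded dylib rather than a statically linked `step_v2`, but the swap mechanism is the same.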
Runtime panics in LLM code are caught inside the dylib and fed back as prompt context automatically.
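A hedged sketch of how panic capture can look, using `std::panic::catch_unwind` to turn a panic in evolved code into an error string suitable for prompt context; `run_evolved` is an illustrative name, not the crate's API.

```rust
use std::panic;

// Run an evolved function behind catch_unwind so a panic becomes an
// Err(String) instead of tearing down the host process.
fn run_evolved(f: fn(&mut usize), counter: &mut usize) -> Result<(), String> {
    panic::catch_unwind(panic::AssertUnwindSafe(|| f(counter))).map_err(|payload| {
        // Recover the panic message from the payload, if it carried one.
        if let Some(s) = payload.downcast_ref::<&str>() {
            s.to_string()
        } else if let Some(s) = payload.downcast_ref::<String>() {
            s.clone()
        } else {
            "unknown panic".to_string()
        }
    })
}

fn main() {
    // Silence the default panic message on stderr for this demo.
    panic::set_hook(Box::new(|_| {}));

    let good: fn(&mut usize) = |c| *c += 1;
    let bad: fn(&mut usize) = |_| panic!("index out of bounds");

    let mut c = 0;
    assert!(run_evolved(good, &mut c).is_ok());

    let err = run_evolved(bad, &mut c).unwrap_err();
    assert!(err.contains("index out of bounds"));
    // `err` is exactly the kind of text that gets appended to the next prompt.
}
```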
Any OpenAI-compatible provider via rig. Local or cloud.
The LLM evolves a function to satisfy a test suite. Solved in 1 evolution round.
Reverse-engineer the 2-D Rastrigin function from sample data. The LLM sees only sample points and deltas. Exact formula found in 2 rounds.
Evolve a sorting algorithm from scratch. Bubble sort to a custom, optimized implementation — 1000x improvement.
The LLM evolves a game-playing function tested against random, smart, and minimax opponents. Discovers center control, fork creation, and blocking strategies.
The simplest example: a counter function hot-swapped every 5 seconds with a new LLM implementation.