Automate your analysis pipeline.
Can I make it fit the way I already work?
Headless, against the models you choose.
A Python API that runs without a Binary Ninja GUI. Point it at a local Ollama or vLLM endpoint, an internal cluster, or your own provider keys — Sidekick uses what you give it. No per-call ceiling, no usage to budget, no shared capacity to fight for. Run an overnight indexing pass, fire an agent over a corpus, or drop a step into a pipeline you already have.
The boring parts are already built.
Model pools with failover across Anthropic, OpenAI, Ollama, and vLLM. Binary objects rendered to model-ready text at any verbosity. Structural queries and natural-language semantic search in one expression. Structured output as Python dicts. Tool loops for agents. You write the logic specific to your problem; the infrastructure is done.
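The failover pattern those pools implement can be sketched in a few lines of plain Python. This is the shape of the behavior, not Sidekick's API: the provider names match the ones listed above, but `call_with_failover` and its signature are hypothetical.

```python
# Illustrative only: a minimal failover loop over an ordered provider list.
# Sidekick's model pools handle this for you; this shows the pattern.
def call_with_failover(providers, prompt):
    """Try each (name, callable) pair in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # a real pool would narrow this
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")
```

If the first provider raises, the next one in the list gets the same prompt; only when every entry fails does the caller see an error.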
The Library is yours to extend.
Anything you'd reach for at the shell can become a named thing inside Sidekick. Skills you define, Python scripts that wrap a CLI or hit an API, agents that drive your own models — all show up in the chat sidebar, behind a slash command, and as callables from the API. Ask Sidekick's automation specialist to write one for you and you get a draft back to review.
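A Library script that wraps a CLI can be as small as a subprocess call. The registration mechanism that surfaces it in the sidebar and behind a slash command is Sidekick's and is not shown; this sketch only shows the wrapper shape, and uses the local Python interpreter as a stand-in command so it runs anywhere.

```python
import subprocess
import sys

def run_tool(argv: list[str]) -> str:
    """Run a command-line tool and return its trimmed stdout.

    In practice argv would be your own CLI (a hash lookup, a fuzzer
    driver, a custom unpacker); the interpreter is used here only so
    the example is self-contained.
    """
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return (result.stdout or result.stderr).strip()

# Example: wrap the local interpreter's version banner.
banner = run_tool([sys.executable, "--version"])
```

The same function shape works for anything that reads arguments and writes text, which is why shell one-liners port over so directly.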
Scripts build the artifact.
Findings, indexes, code maps, and notebook entries are first-class API resources. A scripted run doesn't just dump JSON — it leaves behind the same notebook and chats your team opens interactively, pre-filled and ready to pick up from.
from sidekick.api import sidekick, execute_bnql, init_runtime

with sidekick("./scratch") as sk:
    with sk.session("firmware.bndb") as sess:
        # Structural + semantic query in one line
        candidates = execute_bnql(
            sess.bv,
            '/view/function[concept("parses untrusted network input") in .]',
            limit=20,
        )
        with init_runtime(bv=sess.bv) as runtime:
            triage = runtime.load_agent("triage-parser")
            for fn in candidates:
                triage(TARGET_FUNCTION=fn)  # files notebook findings