Reasoning Control Protocol / built on PM4

Govern the reasoning,
not just the answer.

LLMs generate probabilities. Institutions require verifiable truth. Blazar is a protocol runtime that controls an agent's reasoning trajectory before it becomes an answer or action — purpose-built for law, compliance, audit, and the public sector.

5 outcomes
Allow · Downgrade · Request support · Escalate · Stop
τ-traj
trajectory-level admissibility
100%
verifiable reasoning logs
01 / Science
The science behind Blazar

Reliability is a property of the trajectory,
not of the final text.

→ THESIS · 01

Probabilities cannot be trusted as truth.

Frontier LLMs generate fluent language under uncertainty. In high-stakes work, fluency is not evidence. Blazar treats every claim as a force that must be matched by accumulated support.

→ THESIS · 02

PM4 is a reasoning-control protocol.

PM4 is built on years of applied research into autoregressive reasoning trajectories and premature semantic commitment. It governs when a model may strengthen, downgrade, or refuse a conclusion.

→ THESIS · 03

Control the path before it becomes the answer.

Standard approaches validate output after generation. Blazar intervenes earlier — at the moment a system commits to a locally coherent but inadmissible line of reasoning.

02 / Protocol
How it works

A runtime above your model.
Not a wrapper. Not a guardrail.

01

Connect any LLM agent or workflow.

Blazar sits above GPT-class models via API, SDK, or private deployment. No retraining required.
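As a rough illustration of "a runtime above your model", the sketch below consumes a model's reasoning steps one at a time and halts the trajectory before an inadmissible step is committed. Everything here — the step stream, the `govern` function, the admissibility rule — is a toy stand-in for exposition, not the Blazar SDK, whose API is not described in this document.

```python
from typing import Callable, Iterator

def govern(steps: Iterator[str],
           admissible: Callable[[list[str]], bool]) -> list[str]:
    """Consume reasoning steps one at a time and halt the trajectory
    the moment the accumulated prefix would become inadmissible,
    i.e. before the offending step is ever committed."""
    trajectory: list[str] = []
    for step in steps:
        if not admissible(trajectory + [step]):
            break  # intervene mid-trajectory, not after the final answer
        trajectory.append(step)
    return trajectory

# Stand-ins for an LLM's step stream and an admissibility rule:
steps = iter(["premise A", "premise B", "unsupported leap", "conclusion"])
rule = lambda traj: "unsupported leap" not in traj
```

Because the check runs per step, the offending step and everything downstream of it never reach the output: `govern(steps, rule)` returns only the two premises.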

02

The protocol observes the reasoning trajectory.

PM4 measures dispersion, alternative continuations, and the force of an emerging claim against its accumulated support.

03

One of five outcomes is enforced.

Allow, downgrade, request additional support, escalate, or stop. The agent never commits to an inadmissible answer.
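The five outcomes can be pictured as a decision rule comparing a claim's force to its accumulated support, as the protocol description above suggests. The thresholds and the `decide` function below are purely illustrative — PM4's actual admissibility condition is not specified here.

```python
from enum import Enum

class Outcome(Enum):
    ALLOW = "allow"
    DOWNGRADE = "downgrade"
    REQUEST_SUPPORT = "request_support"
    ESCALATE = "escalate"
    STOP = "stop"

def decide(claim_force: float, support: float,
           escalation_threshold: float = 0.9) -> Outcome:
    """Toy decision rule: weigh an emerging claim's force against
    its accumulated support. Thresholds are illustrative, not PM4's."""
    if support >= claim_force:
        return Outcome.ALLOW            # force fully licensed by support
    if claim_force >= escalation_threshold:
        # a strong claim with some backing goes to a human;
        # a strong claim with none is a premature commit
        return Outcome.ESCALATE if support > 0 else Outcome.STOP
    if support >= 0.5 * claim_force:
        return Outcome.DOWNGRADE        # permit at weaker assertional force
    return Outcome.REQUEST_SUPPORT      # retrieve more grounds first
```

Each branch maps onto one of the five enforced outcomes: a fully supported claim passes, a strong under-supported one is escalated or stopped, and the middle ground is weakened or sent back for more evidence.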

04

Every result ships with a verifiable reasoning log.

An auditable trail from input to outcome — reviewable by counsel, regulators, or oversight bodies.
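One standard way to make such a log verifiable is hash chaining, as used in append-only audit logs: each entry's hash covers the previous entry, so any later edit breaks the chain. The sketch below assumes that technique; Blazar's actual log format is not documented here.

```python
import hashlib
import json

def append_entry(log: list[dict], step: str, outcome: str) -> list[dict]:
    """Append a log entry whose hash covers the previous entry's hash,
    so tampering with any earlier entry becomes detectable."""
    prev = log[-1]["hash"] if log else "genesis"
    body = {"step": step, "outcome": outcome, "prev": prev}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify(log: list[dict]) -> bool:
    """Recompute every hash in order; one altered entry fails the check."""
    prev = "genesis"
    for entry in log:
        body = {"step": entry["step"], "outcome": entry["outcome"],
                "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

A reviewer who holds the log can rerun `verify` independently: if it passes, the recorded trail from input to outcome is exactly what the runtime emitted.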

blazar · runtime · trace LIVE
Allow

Claim force is licensed by the trajectory's accumulated support.

Downgrade

Output is permitted at a weaker assertional force.

Request support

Agent must retrieve additional grounds before continuing.

Escalate

Routed to a human reviewer or higher authority.

Stop

Premature commit detected. No output is released.

03 / Topology
Governance over generation

Every reasoning node, observed.
Every commit, licensed.

Not a black box. A governed graph.

Blazar treats agent cognition as a graph of micro-decisions. Each node is checked against the protocol's admissibility condition. Inadmissible activations are halted, weakened, or rerouted to humans.
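A governed graph walk might look like the toy below: each candidate node receives a verdict before it joins the path, mirroring the halt / weaken / reroute behaviour described above. The `walk` function, node names, and verdict labels are all hypothetical.

```python
def walk(nodes, verdict):
    """Toy governed walk over a graph of reasoning micro-decisions.
    verdict(node) returns 'admit', 'weaken', 'halt', or 'reroute'."""
    path = []
    for node in nodes:
        v = verdict(node)
        if v == "halt":
            break                                # inadmissible: nothing past here
        if v == "reroute":
            path.append(("human-review", node))  # hand this node to a person
        else:
            path.append((v, node))               # 'admit' or 'weaken'
    return path

# Illustrative per-node rulings:
rulings = {"n1": "admit", "n2": "weaken", "n3": "reroute", "n4": "halt"}
```

With these rulings, `walk(["n1", "n2", "n3", "n4"], rulings.get)` admits the first node, weakens the second, reroutes the third to review, and releases nothing from the halt onward.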

The result: agents that move with the speed of an LLM and the discipline of an institution.

Justice · Compliance · Audit · Risk · Public administration · Education · Sovereign AI
04 / Deployment
Government & Enterprise

For workflows where black-box
answers are not an option.

Built for high-stakes institutions — not for chat.

Law firms, regulators, ministries and audit bodies cannot rely on persuasive output without a verifiable reasoning path. Blazar gives them the protocol layer to use AI agents inside real decisions.

Available as API integrations, private deployments, and controlled workflow runtimes — including sovereign on-premise installations.

See it on a real workflow
Justice & Law: High-stakes · reviewable
Compliance: Regulated · audited
Risk & Audit: Adversarial · evidentiary
Public administration: Sovereign · accountable
Education: Reasoned · pedagogical
05 / Engage
Two ways to work with us

Deploy the protocol,
or build it with us.

→ Request Demo

Blazar applied to your
highest-stakes workflow.

  • Live demonstration of agent + reasoning log on a workflow you select
  • Walk through allow / downgrade / request support / escalate / stop outcomes
  • Discuss API, private cloud, or sovereign on-prem deployment options
Book a demo
→ Join Our Research

Collaborate on AI reasoning
control & sovereign infra.

  • Co-develop new PM4-based protocols with our research team
  • Open to researchers, labs, technical partners, strategic institutions
  • Topics: agent safety, trajectory governance, sovereign AI infrastructure
Reach the lab
06 / Team
Who is building this

A private research lab,
not a product startup.

Built by Metatesk.

Backgrounds across machine learning, applied mathematics, AI systems engineering, legal-tech and high-stakes analysis — supported by scientific partners with long-term ML and data-science experience.

We treat AI agency as a problem of protocol, not product. Blazar is the operational expression of that thesis.

Metatesk · Private AI Research Lab