
Open-source agent runtime for building distributed* automations

* spanning network boundaries and tolerant to auto-scaling,
with a highly available orchestration layer.

Get Started with Managed Cloud

Generous free tier • No credit card required

You design the Runs and bring your own functions

Inferable orchestrates the rest

Runs

Define the goal and constraints for your automation with declarative configurations, versioned and backed by your own source control.

import { z } from "zod";

// `inferable` is an initialized Inferable SDK client.
inferable.run({
  initialPrompt: `
    Check our project's GitHub dependencies for any 
    critical security updates or breaking changes in 
    the last week. Create a summary report and notify 
    the dev team if action is needed.
    ---
    Repository: https://github.com/inferablehq/inferable
  `,
  resultSchema: z.object({
    needsAction: z.boolean(),
    criticalUpdates: z.array(z.object({
      name: z.string(),
      currentVersion: z.string(),
      recommendedVersion: z.string(),
      securityImpact: z.string()
    })),
  })
})

Functions

Modular, reusable capabilities that can be executed anywhere in your infrastructure. From your existing codebases or new services.

import { chromium } from "playwright";

// getReadableContent is a helper (defined elsewhere) that extracts
// the readable text content from the page.
async function readWebPage({ url }: { url: string }) {
  const browser = await chromium.launch();
  try {
    const page = await browser.newPage();
    await page.goto(url);
    return await getReadableContent(page);
  } finally {
    await browser.close();
  }
}

inferable.default.register({
  name: "readWebPage",
  description: "Read the contents of a web page",
  func: readWebPage,
  input: z.object({
    url: z.string(),
  }),
});

Reasoning and Action Loop

Iteratively plans and executes towards the goal using a distributed durable execution engine.

trigger:   schedule.weekly

reasoning: I'll start by reading the project's package.json and lock files
function:  readWebPage()

reasoning: Now I'll check GitHub's security advisories for each major dependency
function:  readWebPage(), readWebPage(), readWebPage()

reasoning: Found critical updates for three dependencies. Generating detailed analysis.
function:  summarizeContent()

reasoning: Critical updates found. Notifying dev team with detailed report.
function:  editTicket()

result:
{
  "needsAction": true,
  "criticalUpdates": [
    {
      "name": "express",
      "currentVersion": "4.17.1",
      "recommendedVersion": "4.17.3",
      "securityImpact": "High: Remote Code Execution vulnerability"
    }
  ]
}

Enabled by

Inferable Control Plane

The Inferable control plane reliably orchestrates functions running in your infrastructure.

Semantic Tool Search

Semantic search over all your registered functions, based on the agent's next action and reasoning.
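As an illustration of the idea (not Inferable's actual implementation), tool selection by embedding similarity can be sketched like this; the embedding vectors below are made-up stand-ins for what an embedding model would produce from each function's name and description:

```typescript
// Illustrative sketch: rank registered functions by cosine similarity
// between their embeddings and an embedding of the agent's reasoning.

type Tool = { name: string; description: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical pre-computed embeddings for two registered functions.
const registry: Tool[] = [
  { name: "readWebPage", description: "Read the contents of a web page", embedding: [0.9, 0.1, 0.2] },
  { name: "editTicket", description: "Update a ticket in the issue tracker", embedding: [0.1, 0.9, 0.3] },
];

// Pick the tool closest to the agent's next-step reasoning.
function selectTool(reasoningEmbedding: number[]): string {
  return registry
    .map((t) => ({ name: t.name, score: cosine(t.embedding, reasoningEmbedding) }))
    .sort((a, b) => b.score - a.score)[0].name;
}

// "I'll check the advisories page for each dependency" → closest to readWebPage here.
console.log(selectTool([0.8, 0.2, 0.1])); // prints "readWebPage"
```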

Model routing

Routes each step of your Run to the most appropriate model, based on context and heuristics.

Durable Job Engine

Reliable, persistent execution with fault tolerance, automatic retries, and caching.
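A minimal sketch of the retry half of this (illustrative only, not Inferable's internals): re-run a failing function with exponential backoff before surfacing the failure.

```typescript
// Illustrative sketch: retry a failing async function with exponential
// backoff, as a durable job engine would before failing the job.

async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Back off: 100ms, 200ms, 400ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError; // out of attempts: surface the failure to the orchestrator
}

// A flaky function that succeeds on its third call.
let calls = 0;
async function flaky(): Promise<string> {
  calls += 1;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
}

withRetries(flaky).then((result) => console.log(result));
```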

ReAct Agent

Out-of-the-box ReAct agent for reasoning and action planning in your Runs.

Distributed Orchestrator

Distributed orchestration of function execution across your on-prem infrastructure, the LLMs, and your Runs.

Build production-ready AI automations in your language of choice

Inferable comes out of the box with delightful DX to kickstart your AI automation journey.

  • Node.js (GA)
  • Golang (Beta)
  • .NET (Beta)
  • Java (Coming Soon)
  • PHP (Coming Soon)

Building reliable software is hard.
Doing that with LLMs is even harder.

If you've written software for a living, you know that building reliable software is hard. Building AI agents that are reliable, scalable, and secure is even harder.

Problem
Building an agent that can reason and act (ReAct) is non-trivial. You need to handle recursive logic and ensure the LLM doesn't fall into an infinite loop. And if the reasoning chain grows too long, you need safeguards so you don't exhaust the context window.
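To make those safeguards concrete, here is a minimal, self-contained sketch of a ReAct loop with a hard iteration cap and context truncation. The llmStep() function is a stand-in stub, not a real model call:

```typescript
// Minimal ReAct loop sketch with two safeguards: a hard iteration budget
// and context truncation. llmStep() is a stub standing in for an LLM call.

type Step = { thought: string; action?: string; final?: string };

function llmStep(context: string[]): Step {
  // Stub: a real implementation would send the context to an LLM.
  return context.length > 3
    ? { thought: "Enough information gathered.", final: "done" }
    : { thought: "Need more data.", action: "readWebPage" };
}

function runAgent(goal: string, maxIterations = 10, maxContextItems = 20): string {
  let context: string[] = [goal];
  for (let i = 0; i < maxIterations; i++) {
    const step = llmStep(context);
    if (step.final !== undefined) return step.final;       // goal reached
    context.push(`${step.thought} -> ${step.action}()`);   // record the step
    // Safeguard: drop the oldest entries (keeping the goal) instead of
    // letting the context grow until it exhausts the model's window.
    if (context.length > maxContextItems) {
      context = [context[0], ...context.slice(-maxContextItems + 1)];
    }
  }
  throw new Error("Agent exceeded iteration budget"); // safeguard against infinite loops
}

console.log(runAgent("Check dependencies for security updates")); // prints "done"
```

The iteration cap turns a potential infinite loop into an explicit failure the caller can handle, and truncation bounds the prompt size at each step.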

The managed agent runtime for reliable automation

We bring best-in-class, vertically integrated LLM orchestration. You bring your product and domain expertise.

Distributed Function Orchestration

At the core of Inferable is a distributed message queue with at-least-once delivery guarantees. It ensures your AI automations are scalable and reliable.
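At-least-once delivery means the same job can occasionally be delivered twice, so consumers deduplicate on a stable job id. A dependency-free sketch of that consumer-side pattern (the names here are illustrative, not Inferable's API):

```typescript
// Sketch: with at-least-once delivery, a job may be redelivered after a
// missed acknowledgement; the consumer deduplicates on the job id so the
// underlying function runs exactly once per job.

type Job = { id: string; functionName: string; input: unknown };

const processed = new Set<string>();
let executions = 0;

function handleDelivery(job: Job): void {
  if (processed.has(job.id)) return; // duplicate redelivery: acknowledge and skip
  processed.add(job.id);
  executions += 1; // the actual function call would happen here
}

// The queue redelivers job-1 after a missed acknowledgement.
handleDelivery({ id: "job-1", functionName: "readWebPage", input: { url: "https://example.com" } });
handleDelivery({ id: "job-1", functionName: "readWebPage", input: { url: "https://example.com" } });
handleDelivery({ id: "job-2", functionName: "readWebPage", input: { url: "https://example.org" } });

console.log(executions); // prints 2 — the duplicate ran only once
```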

Works with your codebase

Decorate your existing functions, REST APIs and GraphQL endpoints. No new frameworks to learn, no inversion of control.

Language Support

Inferable has first-class support for Node.js, Golang, and C#, with more on the way.

On-premise Execution

Your functions run on your own infrastructure; the LLM can't do anything your functions don't allow.

Observability

Get end-to-end observability into your AI workflows and function calls. No configuration required.

Structured Outputs

Enforce structured outputs, then compose, pipe, and chain them using language primitives.
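In the SDK this enforcement comes from the zod resultSchema shown in the run example above. This dependency-free sketch shows the underlying idea with a hand-written guard: validate the raw model output before downstream code touches it, then compose with ordinary language primitives:

```typescript
// Sketch: validate a raw model output against the expected top-level shape,
// then pipe the typed result through ordinary array/string primitives.

type CriticalUpdate = {
  name: string;
  currentVersion: string;
  recommendedVersion: string;
  securityImpact: string;
};
type RunResult = { needsAction: boolean; criticalUpdates: CriticalUpdate[] };

// Checks the top-level shape; a schema library would validate every field.
function parseRunResult(raw: unknown): RunResult {
  const r = raw as RunResult;
  if (typeof r?.needsAction !== "boolean" || !Array.isArray(r?.criticalUpdates)) {
    throw new Error("model output does not match result schema");
  }
  return r;
}

const result = parseRunResult(JSON.parse(
  '{"needsAction": true, "criticalUpdates": [{"name": "express", "currentVersion": "4.17.1", "recommendedVersion": "4.17.3", "securityImpact": "High"}]}'
));

// Once validated, outputs chain like any other typed value.
const summary = result.criticalUpdates.map(
  (u) => `${u.name}: ${u.currentVersion} -> ${u.recommendedVersion}`
);
console.log(summary); // prints [ 'express: 4.17.1 -> 4.17.3' ]
```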

Managed Agent Runtime

Inferable comes with a built-in ReAct agent that can be used to solve complex problems by reasoning step-by-step, and calling your functions to solve sub-problems.

Enterprise-ready
from the ground up

  • Adapts to your existing architecture
  • Bring your own models for complete control over AI
  • Managed cloud with auto-scaling and high availability

Inferable is proudly open source and can be self-hosted on your own infrastructure for complete control over your data and compute.

Frequently Asked Questions

Everything you need to know about Inferable

Data Privacy & Security

Model Usage