
AI Effect: Calling LLMs from AILANG

AILANG v0.5.10 provides a simple, high-level AI effect for calling external AI/ML systems directly from your code. Perfect for game NPCs, agents, CLI tools, and data pipelines.


Overview

The AI effect (std/ai) is a general-purpose AI oracle - an opaque string-to-string interface for calling LLMs:

import std/ai (call)

func ask_ai(question: string) -> string ! {AI} =
  call(question)

Key Features:

  • Simple string → string interface (JSON by convention)
  • Multi-provider support (Anthropic, OpenAI, Google)
  • Vertex AI ADC support (no API key required with gcloud auth)
  • Deterministic stub for testing
  • Effect-typed for capability tracking

Quick Start

1. Basic Usage

examples/runnable/ai_effect.ail
-- AI Effect Example
-- Demonstrates calling AI from AILANG
--
-- Run with stub handler (testing):
-- ailang run --caps IO,AI --ai-stub --entry main examples/runnable/ai_effect.ail
--
-- Run with Claude Haiku (requires ANTHROPIC_API_KEY):
-- ailang run --caps IO,AI --ai claude-haiku-4-5 --entry main examples/runnable/ai_effect.ail
--
-- Run with GPT-5 Mini (requires OPENAI_API_KEY):
-- ailang run --caps IO,AI --ai gpt5-mini --entry main examples/runnable/ai_effect.ail
--
-- Run with Gemini Flash (requires GOOGLE_API_KEY):
-- ailang run --caps IO,AI --ai gemini-2-5-flash --entry main examples/runnable/ai_effect.ail
--
-- The AI effect is a general-purpose AI oracle:
-- - String→string interface (JSON by convention)
-- - Pluggable handlers (stub, Anthropic, OpenAI, Google)
-- - Model lookup from models.yml (or guessed from prefix)
-- - No silent fallbacks (nil handler = error)

module examples/runnable/ai_effect

import std/ai (call)
import std/io (println)

-- Ask the AI a question and print the response
export func main() -> () ! {IO, AI} {
  println("Asking AI: What is the capital of France?");
  let response = call("What is the capital of France? Reply in one word.");
  println("AI says: ${response}")
}

2. Run with Different Providers

# Claude (Anthropic) - requires ANTHROPIC_API_KEY
ailang run --caps IO,AI --ai claude-haiku-4-5 --entry main my_app.ail

# GPT (OpenAI) - requires OPENAI_API_KEY
ailang run --caps IO,AI --ai gpt5-mini --entry main my_app.ail

# Gemini (Google) - uses Vertex AI ADC if no GOOGLE_API_KEY
ailang run --caps IO,AI --ai gemini-2-5-flash --entry main my_app.ail

# Stub (deterministic testing)
ailang run --caps IO,AI --ai-stub --entry main my_app.ail

Supported Providers

Provider  | Models                                    | Auth Method    | Environment Variable
----------|-------------------------------------------|----------------|---------------------------
Anthropic | claude-sonnet-4-5, claude-haiku-4-5, etc. | API key        | ANTHROPIC_API_KEY
OpenAI    | gpt-5, gpt-5-mini, etc.                   | API key        | OPENAI_API_KEY
Google    | gemini-2-5-pro, gemini-2-5-flash, etc.    | API key or ADC | GOOGLE_API_KEY (optional)

Google Vertex AI (ADC)

For Google models, if no GOOGLE_API_KEY is set, AILANG automatically falls back to Application Default Credentials (ADC):

# Configure ADC once
gcloud auth application-default login

# Run without API key - uses ADC automatically
unset GOOGLE_API_KEY
ailang run --caps IO,AI --ai gemini-2-5-flash --entry main my_app.ail

Game Development Example

Here's a complete example of using AI for NPC dialogue generation:

examples/docs/npc_dialogue.ail
-- npc_dialogue.ail - Game NPC Dialogue using AI Effect
-- Used in: docs/docs/guides/ai-effect.md
module examples/docs/npc_dialogue

import std/ai (call)
import std/io (println)

-- Build dialogue prompt for an NPC
pure func makePrompt(npcName: string, role: string, personality: string, topic: string) -> string =
  "You are " ++ npcName ++ ", a " ++ role ++ " in a fantasy RPG. " ++
  "Personality: " ++ personality ++ ". " ++
  "Generate ONE short line of dialogue (1-2 sentences) about: " ++ topic

-- Generate NPC dialogue using AI
func askNpc(npcName: string, role: string, personality: string, topic: string) -> string ! {AI} = {
  let prompt = makePrompt(npcName, role, personality, topic);
  call(prompt)
}

-- Main: demonstrate NPC dialogue generation
export func main() -> () ! {IO, AI} = {
  println("=== NPC Dialogue Demo ===");
  println("");

  println("[At the Forge]");
  println("Player: Can you forge me an enchanted sword?");
  let response = askNpc("Grimnar", "blacksmith", "gruff but kind", "forging an enchanted sword");
  println("Grimnar: " ++ response)
}

Run it:

# With Claude
ailang run --caps IO,AI --ai claude-haiku-4-5 --entry main examples/docs/npc_dialogue.ail

Output:

=== NPC Dialogue Demo ===

[At the Forge]
Player: Can you forge me an enchanted sword?
Grimnar: *pounds hammer on anvil*
The metal's got a stubborn spirit—takes patience and a steady hand to coax out the magic.

Testing with Stub Handler

For deterministic testing, use --ai-stub:

ailang run --caps IO,AI --ai-stub --entry main my_app.ail

The stub handler returns {"kind":"Wait"} for all inputs, making tests predictable and fast.
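
Since the stub's output is fixed, a smoke test can assert on it directly. A minimal sketch, relying only on the {"kind":"Wait"} contract above:

import std/ai (call)
import std/io (println)

-- Under --ai-stub, call returns {"kind":"Wait"} for every input,
-- so this check is deterministic across runs.
export func main() -> () ! {IO, AI} {
  let response = call("any prompt at all");
  if response == "{\"kind\":\"Wait\"}"
  then println("stub contract holds")
  else println("unexpected stub output: ${response}")
}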

JSON Input/Output Pattern

By convention, use JSON for structured input/output:

import std/ai (call)
import std/json (encode, decode)

type GameContext = {
  player_health: int,
  enemy_count: int,
  has_weapon: bool
}

type Action = Wait | Attack | Retreat | Heal

func decide_action(ctx: GameContext) -> Action ! {AI} = {
  let input = encode(ctx);
  let output = call(input);
  match decode[Action](output) {
    Ok(action) => action,
    Err(_) => Wait -- Safe fallback
  }
}
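
The decode step only succeeds if the model actually replies with parseable JSON, so the prompt should pin the output format. A hedged sketch; the instruction wording and helper name are illustrative, not part of std/ai:

-- Embed the expected JSON shape in the prompt so decode[Action] has a
-- fighting chance. actionPrompt is a local helper, not a stdlib function.
pure func actionPrompt(encodedCtx: string) -> string =
  "Given this game state as JSON: " ++ encodedCtx ++
  ". Reply with ONLY one of: {\"kind\":\"Wait\"}, {\"kind\":\"Attack\"}, " ++
  "{\"kind\":\"Retreat\"}, {\"kind\":\"Heal\"}. No prose."

decide_action then becomes call(actionPrompt(encode(ctx))) with the same match on decode[Action].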

Effect System Integration

The AI effect integrates with AILANG's capability system:

-- Effect declared in signature
func my_ai_func() -> string ! {AI} = ...

-- Requires --caps AI at runtime
ailang run --caps IO,AI --ai <model> --entry main file.ail

Effects are tracked at the type level, ensuring:

  • AI calls are explicit in function signatures
  • Capability requirements are validated at compile time
  • Effect boundaries are clear and auditable
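
For example, the {AI} effect propagates through callers, so a wrapper that never touches call directly still declares it. A minimal sketch:

import std/ai (call)

-- Performs the AI call, so it declares ! {AI}.
func summarize(text: string) -> string ! {AI} =
  call("Summarize in one sentence: " ++ text)

-- Never calls the AI directly, but {AI} flows in from summarize;
-- omitting it from the signature would be rejected by the effect checker.
func report(text: string) -> string ! {AI} =
  "Summary: " ++ summarize(text)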

Configuration

CLI Flags

Flag         | Description
-------------|-----------------------------------
--ai <model> | Set the AI model to use
--ai-stub    | Use the deterministic stub handler
--caps AI    | Enable the AI capability

Model Lookup

Models are looked up in models.yml or guessed from name prefixes:

  • claude-* → Anthropic
  • gpt-* → OpenAI
  • gemini-* → Google

Typed Errors with callResult / callJsonResult (v0.17.0+)

The legacy call and callJson crash the host on provider failure, which is fine for one-shot scripts but wrong for agent loops, retries, or pipelines. callResult / callJsonResult are drop-in replacements that return Result[string, AIError] instead — the same wire path, but typed errors flow back as Err(AIError {code, message, retryable}).

AIError.code is drawn from a fixed vocabulary, letting agents route on failure type without parsing message strings:

Code              | Retryable? | Typical cause
------------------|------------|-------------------------------------------------
AuthFailed        | no         | 401/403 — bad API key
RateLimit         | yes        | 429 — slow down and retry
Timeout           | yes        | network / context deadline
ConnectionFailed  | yes        | TCP / TLS / DNS issues
ContextLength     | no         | prompt exceeds model window
SchemaValidation  | no         | response doesn't match callJson schema
ToolsNotSupported | no         | provider can't do tools (Ollama, config-driven)
ModelNotFound     | no         | 404 — model name typo
BudgetExhausted   | no         | AI cap budget overflow
Internal          | yes        | 5xx or unclassified — conservative default

Same shape as std/ai/streaming.AIError (shipped v0.15.0) — one error record across the whole std/ai surface, single source of truth for retry-decision logic.
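
Because the retry signal rides on the error itself, a retry loop needs no provider-specific knowledge. A minimal sketch, assuming the usual && operator; callWithRetry is not part of std/ai, and a production version would also back off between attempts:

import std/ai (callResult, AIError)
import std/result (Result, Ok, Err)

-- Retry transient failures up to `attempts` times; surface fatal
-- errors immediately. Sketch only: no delay between attempts.
export func callWithRetry(prompt: string, attempts: int) -> Result[string, AIError] ! {AI} {
  match callResult(prompt) {
    Ok(response) => Ok(response),
    Err(e) =>
      if e.retryable && attempts > 1
      then callWithRetry(prompt, attempts - 1)
      else Err(e)
  }
}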

ailang run --caps AI,IO --ai gemini-3-flash-preview \
examples/runnable/ai_call_result.ail
-- examples/runnable/ai_call_result.ail
-- Demonstrates std/ai.callResult — the Result-returning variant of call,
-- introduced by M-AI-TOOL-LOOP (v0.17.0).
--
-- callResult mirrors call's wire path but returns Result[string, AIError]
-- instead of crashing the host on failure. The typed AIError carries
-- structured retry-vs-surface signal (retryable bool + code string) so
-- agents can route on it without parsing string error messages.
--
-- Use this in code where you want to handle provider failures explicitly
-- (rate limits, auth errors, transient timeouts) — particularly inside
-- agent loops, retries, or pipelines where panic-on-error is unacceptable.
--
-- Setup:
-- ailang run --caps AI,IO --ai gemini-3-flash-preview \
-- examples/runnable/ai_call_result.ail
--
-- For deterministic offline behaviour:
-- ailang run --caps AI,IO --ai-stub examples/runnable/ai_call_result.ail

module examples/runnable/ai_call_result

import std/ai (callResult, AIError)
import std/io (println)
import std/result (Result, Ok, Err)

-- Render a typed AIError with the retry-decision branch explicit.
-- AIError shape: { code: string, message: string, retryable: bool }.
export pure func renderError(e: AIError) -> string {
  let retryStr = if e.retryable then "retryable" else "fatal";
  "[${e.code}/${retryStr}] ${e.message}"
}

export func main() -> () ! {AI, IO} {
  match callResult("Say hi in five words.") {
    Ok(response) =>
      println("AI response: ${response}"),
    Err(e) =>
      -- Typed error gives us the retry hint directly — no message parsing.
      if e.retryable
      then println("Transient error (would retry): ${renderError(e)}")
      else println("Fatal error: ${renderError(e)}")
  }
}

Multi-Turn Tool Dispatch with step and runTools (v0.17.0+)

For agentic workflows where the model emits tool calls and the host dispatches them, step (one model turn) and runTools (loop driver) close the gap. The full agent loop — provider HTTP, tool-call parsing, message threading, typed errors — fits in a few lines of pure AILANG.

Shapes:

type Message = { role: string, content: string,
                 tool_calls: [ToolCall], tool_call_id: string }
type ToolSchema = { name: string, description: string, parameters: string }
type ToolCall = { id: string, name: string, arguments: string }
type StepResult = { message: Message, tool_calls: [ToolCall],
                    finish_reason: string,
                    input_tokens: int, output_tokens: int }

step is one model turn. Pass conversation + tool catalog, get the assistant's response back. When finish_reason == "tool_calls", the model is asking the host to dispatch tools and continue.
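
A single-turn sketch of that handshake. Hedged: the exact step signature is not shown above; this assumes step(model, messages, tools) -> Result[StepResult, AIError] ! {AI} by analogy with runTools, so verify with ailang builtins show _ai_step:

import std/ai (step, StepResult, AIError, Message, ToolSchema)
import std/io (println)
import std/list (length)
import std/result (Result, Ok, Err)

-- ASSUMED signature: step(model, messages, tools) -> Result[StepResult, AIError] ! {AI}.
func oneTurn(messages: [Message], catalog: [ToolSchema]) -> () ! {AI, IO} {
  match step("", messages, catalog) {
    Ok(r) =>
      if r.finish_reason == "tool_calls"
      then println("model wants ${show(length(r.tool_calls))} tool call(s) dispatched")
      else println("final answer: ${r.message.content}"),
    Err(e) => println("step failed: [${e.code}] ${e.message}")
  }
}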

runTools is the convenience loop driver. It calls step in a loop, runs your dispatch callback on each tool call, threads results back as tool-role messages, and terminates when the model is done OR the step budget hits zero.

runTools(
  model: string,
  messages: [Message],
  tools: [ToolSchema],
  dispatch: (ToolCall) -> string,  -- effect-polymorphic; can be ! {FS, Process, ...}
  step_budget: int
) -> Result[[Message], AIError] ! {AI}

The dispatch callback's effects propagate via row polymorphism — pass a dispatch with ! {FS, Process} (e.g. real file I/O + shell exec) and runTools infers ! {AI, FS, Process} automatically. No signature change needed.
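
For illustration, an effectful dispatch might look like the sketch below. std/fs and readFile are hypothetical stand-ins for whatever FS primitive your build exposes; the point is that only the effect annotation changes:

-- HYPOTHETICAL: std/fs (readFile) stands in for a real FS primitive.
import std/fs (readFile)
import std/ai (ToolCall)

func fsDispatch(call: ToolCall) -> string ! {FS} {
  -- A real dispatch would decode call.arguments (a JSON string) first;
  -- passing it raw keeps the sketch short.
  if call.name == "read_doc"
  then readFile(call.arguments)
  else "unknown tool: ${call.name}"
}

-- runTools(model, msgs, tools, fsDispatch, 8) now infers ! {AI, FS};
-- run with --caps AI,FS,IO as appropriate.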

Provider parity:

Provider      | Tool support      | Notes
--------------|-------------------|-----------------------------------------------------------
Anthropic     | yes               | Messages API tool_use content blocks
Gemini        | yes               | functionCall parts; adapter generates stable IDs
OpenAI        | yes               | Chat Completions tool_calls
OpenRouter    | yes               | Passthrough (composes with Routing policies)
Ollama        | no — typed reject | AIError{ToolsNotSupported}; falls through to chat for no-tools
Config-driven | no in v1          | Same typed reject pattern

Worked example: a 2-tool catalog with offline-deterministic dispatch.

ailang run --caps AI,IO --ai gemini-3-flash-preview \
examples/runnable/ai_tool_loop.ail
-- examples/runnable/ai_tool_loop.ail
-- Demonstrates std/ai.runTools — the multi-turn tool dispatch loop driver
-- introduced by M-AI-TOOL-LOOP (v0.17.0).
--
-- The example builds a 2-tool catalog (read_doc, list_docs), starts a
-- conversation with one user turn ("Summarize the contents of nda.docx"),
-- and lets runTools drive the agent loop:
-- 1. step asks the model what to do
-- 2. model emits a tool_call (e.g. read_doc with name=nda.docx)
-- 3. runTools calls dispatch(call) → "<doc body>"
-- 4. step is called again with the tool result appended
-- 5. model emits a final assistant message → loop terminates
--
-- The dispatch callback in this example is pure — it returns hard-coded
-- strings keyed off the tool name. Real consumers (e.g. motoko_agent)
-- run a callback with effects {FS, Process} that actually reads files /
-- runs commands; AILANG's row polymorphism lets such a callback compose
-- with runTools without changing runTools' signature.
--
-- Setup:
-- ailang run --caps AI,IO --ai gemini-3-flash-preview \
-- examples/runnable/ai_tool_loop.ail
--
-- For deterministic behaviour against the stub handler:
-- ailang run --caps AI,IO --ai-stub examples/runnable/ai_tool_loop.ail

module examples/runnable/ai_tool_loop

import std/ai (
  runTools, callResult, AIError, Message, ToolCall, ToolSchema, StepResult
)
import std/io (println)
import std/result (Result, Ok, Err)
import std/list (length)

-- Build a tool catalog the model may call.
export func tools() -> [ToolSchema] {
  [
    {
      name: "read_doc",
      description: "Read the named document and return its contents",
      parameters: "{\"type\":\"object\",\"properties\":{\"name\":{\"type\":\"string\"}},\"required\":[\"name\"]}"
    },
    {
      name: "list_docs",
      description: "List the names of all available documents",
      parameters: "{\"type\":\"object\",\"properties\":{}}"
    }
  ]
}

-- Initial conversation: one user message.
export func initialMessages() -> [Message] {
  [
    {
      role: "user",
      content: "Summarize the contents of nda.docx",
      tool_calls: [],
      tool_call_id: ""
    }
  ]
}

-- Dispatch callback. In a real agent this would do file I/O / shell exec /
-- HTTP requests / etc. with appropriate effects. Here we return canned
-- strings so the example is deterministic and runs offline against any
-- AI handler that emits tool_calls.
export pure func dispatch(call: ToolCall) -> string {
  if call.name == "read_doc"
  then "[NDA between Acme Corp and Beta LLC, dated 2026-01-15. Confidential information defined as ... 2-year term ... governed by Delaware law.]"
  else if call.name == "list_docs"
  then "nda.docx, employment_contract.docx, lease.pdf"
  else "unknown tool: ${call.name}"
}

-- Render the typed-error path with retry-decision branch.
export pure func renderError(e: AIError) -> string {
  let retryStr = if e.retryable then "true" else "false";
  "[${e.code}] retryable=${retryStr}: ${e.message}"
}

-- Pull the last message's content for display. If somehow empty (shouldn't
-- happen — runTools always appends at least one message on Ok), return a
-- sentinel so the example output is still informative.
export pure func lastMessage(messages: [Message]) -> string {
  match messages {
    [] => "<no messages>",
    [msg] => msg.content,
    [_, ...rest] => lastMessage(rest)
  }
}

export func main() -> () ! {AI, IO} {
  -- Cap the loop at 8 steps. Reasonable for read_doc → summarize flows;
  -- raise for deeper agent traces.
  match runTools("", initialMessages(), tools(), dispatch, 8) {
    Ok(messages) => {
      let n = length(messages);
      println("=== runTools succeeded: ${show(n)} messages in transcript ===");
      println("Final assistant text:");
      println(lastMessage(messages))
    },
    Err(e) =>
      -- Typed-error retry-decision branch — the whole point of returning
      -- AIError instead of a plain string.
      if e.retryable
      then println("Transient error (would retry): ${renderError(e)}")
      else println("Fatal error: ${renderError(e)}")
  }
}

For the broader integration with motoko_agent (the first AILANG-native coding agent retiring its hand-rolled tool dispatch), see the Motoko integration sequence.

Builtin Documentation

View full builtin documentation:

ailang builtins show _ai_call
ailang builtins show _ai_call_result # v0.17.0+
ailang builtins show _ai_step # v0.17.0+

Comparison: AI Effect vs HTTP API

Feature        | AI Effect (std/ai)   | HTTP API (std/net)
---------------|----------------------|------------------------
Complexity     | Simple string→string | Full HTTP control
Provider setup | CLI flag             | Manual headers/auth
JSON handling  | By convention        | Required
Best for       | Quick LLM calls      | Custom API integration

Use the AI effect for simple LLM calls. Use std/net when you need full control over HTTP requests (custom endpoints, streaming, etc.).

OpenRouter for Multi-Vendor Routing

The AI effect can also be backed by OpenRouter, a single API that fronts ~100 models across many vendors. Pass a vendor/model string (e.g., anthropic/claude-sonnet-4.5, openai/gpt-5-mini, or openrouter/auto) and AILANG dispatches the call through OpenRouter automatically.

export OPENROUTER_API_KEY=sk-or-...
ailang run --caps IO,AI --ai openrouter/auto --entry main file.ail

For dynamic provider selection (fallback chains, capability requirements, cost preferences) see the AI Provider Routing guide — this complements the --ai flag with --routing-fallback, --routing-require, --routing-prefer, and the --allow-routing safety gate.