Serve API Guide

This guide explains how to expose AILANG functions as REST API endpoints and optionally pair them with a React frontend.

Version: Available since v0.7.1

Overview

AILANG provides two commands for web integration:

| Command | Purpose |
|---|---|
| `ailang serve-api` | Serve AILANG module exports as auto-generated REST endpoints |
| `ailang init web-app` | Scaffold a full-stack project (AILANG API + React frontend) |

Both build on the Go Interop embed API, wrapping it with HTTP routing so you don't need to write any Go code.


Quick Start

Option 1: Scaffold a New Project

ailang init web-app myproject
cd myproject
cd ui && npm install && cd ..
make dev

This starts:

  • AILANG API server on http://localhost:8080
  • React dev server on http://localhost:5173 (proxies /api to AILANG)

Open http://localhost:5173 in your browser.

Option 2: Serve Existing Modules

# Serve a single module
ailang serve-api api/handlers.ail --port 8080

# Serve all .ail files in a directory
ailang serve-api ./api/ --port 8080

# With React frontend proxy
ailang serve-api ./api/ --port 8080 --frontend ./ui

How It Works

Given two AILANG modules (from examples/web_api_demo/):

-- api/math.ail
module api/math

export pure func add(x: int, y: int) -> int =
  x + y

export pure func multiply(x: int, y: int) -> int =
  x * y

export pure func factorial(n: int) -> int =
  if n <= 1 then 1
  else n * factorial(n - 1)

export pure func fibonacci(n: int) -> int =
  if n <= 0 then 0
  else if n == 1 then 1
  else fibonacci(n - 1) + fibonacci(n - 2)

-- api/greet.ail
module api/greet

import std/json (encode, jo, kv, js)

export pure func hello(name: string) -> string =
  "Hello, " ++ name ++ "!"

export pure func farewell(name: string) -> string =
  "Goodbye, " ++ name ++ ". Until next time!"

export pure func welcome(name: string) -> string =
  encode(jo([
    kv("message", js("Welcome, " ++ name ++ "!")),
    kv("name", js(name))
  ]))

Running ailang serve-api ./api/ auto-generates these endpoints:

| Method | Endpoint | Description |
|---|---|---|
| POST | /api/api/math/add | Call add() |
| POST | /api/api/math/multiply | Call multiply() |
| POST | /api/api/math/factorial | Call factorial() |
| POST | /api/api/math/fibonacci | Call fibonacci() |
| POST | /api/api/greet/hello | Call hello() |
| POST | /api/api/greet/farewell | Call farewell() |
| POST | /api/api/greet/welcome | Call welcome() |
| GET | /api/_meta/modules | List all modules and exports |
| GET | /api/_meta/modules/api/math | Module detail |
| GET | /api/_health | Health check |

URL Convention

The URL path follows the pattern:

POST /api/{module-path}/{function-name}

Where {module-path} matches the module declaration in the .ail file exactly. Note the doubled api in the demo URLs: the first /api segment is the server's fixed prefix, while the second comes from the module path itself (e.g. api/math).
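To make the convention concrete, here is a small hypothetical helper (not part of the scaffold) that builds an endpoint URL from a module path and function name:

```typescript
// Hypothetical helper: builds the endpoint URL for a module export,
// following the POST /api/{module-path}/{function-name} convention.
function endpointUrl(modulePath: string, funcName: string, base = ""): string {
  return `${base}/api/${modulePath}/${funcName}`;
}

// A module declared as `module api/math` maps under /api/api/math/...
console.log(endpointUrl("api/math", "add"));
// → /api/api/math/add
```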


Calling Functions

JSON Request Format

Positional arguments (recommended):

curl -X POST http://localhost:8080/api/api/math/add \
  -H "Content-Type: application/json" \
  -d '{"args": [3, 4]}'
# {"result":7,"module":"api/math","func":"add","elapsed_ms":12}

Single value (for single-argument functions):

curl -X POST http://localhost:8080/api/api/greet/hello \
  -H "Content-Type: application/json" \
  -d '"Bob"'
# {"result":"Hello, Bob!","module":"api/greet","func":"hello","elapsed_ms":0}

No arguments (for nullary functions):

curl -X POST http://localhost:8080/api/api/handlers/getStatus

JSON Response Format

All function calls return:

{
  "result": "Hello, World!",
  "module": "api/greet",
  "func": "hello",
  "elapsed_ms": 2
}

On error:

{
  "error": "function \"nope\" not found in module \"api/math\" (available: [add multiply factorial fibonacci])",
  "module": "api/math",
  "func": "nope",
  "elapsed_ms": 0
}
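Since success and error share the same envelope, client code only needs to check for the error field. A hypothetical TypeScript helper matching the shapes above:

```typescript
// Response envelope used by serve-api, per the success and error formats above.
interface ApiResponse {
  result?: unknown;
  error?: string;
  module: string;
  func: string;
  elapsed_ms: number;
}

// Hypothetical helper: return the result, or throw on the error shape.
function unwrap(res: ApiResponse): unknown {
  if (res.error !== undefined) {
    throw new Error(`${res.module}/${res.func}: ${res.error}`);
  }
  return res.result;
}

// unwrap({ result: 7, module: "api/math", func: "add", elapsed_ms: 12 }) → 7
```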

Tested Examples

These examples are verified by the automated test script at examples/web_api_demo/test.sh:

# Math functions
curl -X POST http://localhost:8080/api/api/math/add \
  -H "Content-Type: application/json" -d '{"args": [3, 4]}'
# {"result":7, ...}

curl -X POST http://localhost:8080/api/api/math/multiply \
  -H "Content-Type: application/json" -d '{"args": [5, 6]}'
# {"result":30, ...}

curl -X POST http://localhost:8080/api/api/math/factorial \
  -H "Content-Type: application/json" -d '{"args": [5]}'
# {"result":120, ...}

curl -X POST http://localhost:8080/api/api/math/fibonacci \
  -H "Content-Type: application/json" -d '{"args": [10]}'
# {"result":55, ...}

# Greet functions
curl -X POST http://localhost:8080/api/api/greet/hello \
  -H "Content-Type: application/json" -d '{"args": ["World"]}'
# {"result":"Hello, World!", ...}

curl -X POST http://localhost:8080/api/api/greet/farewell \
  -H "Content-Type: application/json" -d '{"args": ["Alice"]}'
# {"result":"Goodbye, Alice. Until next time!", ...}

# JSON-returning function
curl -X POST http://localhost:8080/api/api/greet/welcome \
  -H "Content-Type: application/json" -d '{"args": ["Charlie"]}'
# {"result":"{\"message\":\"Welcome, Charlie!\",\"name\":\"Charlie\"}", ...}

Introspection Endpoints

List All Modules

curl http://localhost:8080/api/_meta/modules

Response:

{
  "count": 2,
  "modules": [
    {
      "path": "api/math",
      "exports": [
        { "name": "add", "type": "int -> int -> int", "pure": true, "arity": 2 },
        { "name": "multiply", "type": "int -> int -> int", "pure": true, "arity": 2 },
        { "name": "factorial", "type": "int -> int", "pure": true, "arity": 1 },
        { "name": "fibonacci", "type": "int -> int", "pure": true, "arity": 1 }
      ]
    },
    {
      "path": "api/greet",
      "exports": [
        { "name": "hello", "type": "string -> string", "pure": true, "arity": 1 },
        { "name": "farewell", "type": "string -> string", "pure": true, "arity": 1 },
        { "name": "welcome", "type": "string -> string", "pure": true, "arity": 1 }
      ]
    }
  ]
}

Module Detail

curl http://localhost:8080/api/_meta/modules/api/math

Health Check

curl http://localhost:8080/api/_health

Response:

{
  "status": "ok",
  "modules_count": 2,
  "exports_count": 7
}

CLI Reference

ailang serve-api

Usage: ailang serve-api [flags] <path...>

Serve AILANG module exports as REST API endpoints.

Flags:
  --port PORT       HTTP port (default: 8080)
  --cors            Enable CORS for all origins (default: true)
  --frontend PATH   Proxy to Vite dev server at PATH
  --static PATH     Serve static files from PATH
  --watch           Watch .ail files for changes and hot-reload
  --caps CAPS       Capabilities to grant (comma-separated: IO,FS,Net,AI,Clock,Env)
  --ai MODEL        AI model for AI effect (e.g., gemini-2-5-flash)
  --ai-stub         Use stub AI handler (for testing)

Arguments:
  <path...>         One or more .ail files or directories

Important: Flags must come before path arguments.

Examples:

# Serve a single file
ailang serve-api api/handlers.ail

# Serve a directory (finds all .ail files)
ailang serve-api ./api/

# Custom port (flags before paths)
ailang serve-api --port 3000 ./api/

# With Vite frontend proxy (development)
ailang serve-api --frontend ./ui ./api/

# With built frontend (production)
ailang serve-api --static ./ui/dist ./api/

ailang init web-app

Usage: ailang init web-app [name]

Scaffold a new AILANG web app project.

Arguments:
[name] Project directory name (default: my-ailang-app)

Project Structure

After ailang init web-app myproject:

myproject/
├── api/
│   └── handlers.ail      # AILANG API module
├── ui/
│   ├── package.json      # React 18 + Vite 5 + TypeScript
│   ├── vite.config.ts    # Proxies /api → localhost:8080
│   ├── tsconfig.json
│   ├── index.html
│   └── src/
│       ├── main.tsx      # React entry point
│       └── App.tsx       # Demo UI calling AILANG API
├── Makefile              # Development commands
└── README.md             # Getting started guide

Makefile Targets

make dev        # Start AILANG API + Vite dev server
make api        # Start only the AILANG API server
make ui         # Start only the Vite dev server
make build      # Build React frontend for production

React Integration

Calling AILANG from React

The scaffold includes a working example in ui/src/App.tsx:

const callApi = async () => {
  const res = await fetch('/api/api/handlers/hello', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ args: [name || 'World'] }),
  })
  const data = await res.json()
  // data.result = "Hello, World!"
}

TypeScript Types

You can type the API response:

interface ApiResponse {
  result: unknown
  module: string
  func: string
  elapsed_ms: number
  error?: string
}

Fetching Module Metadata

const res = await fetch('/api/_meta/modules')
const data = await res.json()
// data.modules[0].exports[0].name = "hello"
// data.modules[0].exports[0].type = "string -> string"
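Because the metadata payload lists every module and its exports, a client can discover all callable endpoints at runtime. A hypothetical helper, assuming the metadata shape shown in the introspection section:

```typescript
// Metadata shapes as returned by GET /api/_meta/modules (see above).
interface ExportMeta { name: string; type: string; pure: boolean; arity: number }
interface ModuleMeta { path: string; exports: ExportMeta[] }

// Hypothetical helper: derive every callable endpoint from the metadata.
function listEndpoints(modules: ModuleMeta[]): string[] {
  return modules.flatMap(m => m.exports.map(e => `/api/${m.path}/${e.name}`));
}

const endpoints = listEndpoints([
  { path: "api/math", exports: [{ name: "add", type: "int -> int -> int", pure: true, arity: 2 }] },
]);
// endpoints → ["/api/api/math/add"]
```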

Development Workflow

Adding New API Functions

  1. Edit your .ail file to add new exported functions
  2. If using --watch, changes are picked up automatically (hot reload)
  3. Without --watch, restart ailang serve-api to pick up changes
  4. New endpoints are automatically available

Hot Reload

Use --watch to automatically recompile modules when .ail files change:

ailang serve-api --watch ./api/

How it works:

  1. The server watches directories containing loaded .ail files using fsnotify
  2. On file save, all caches are invalidated (loader, runtime, engine)
  3. The module is recompiled through the pipeline
  4. Next API request uses the fresh module

Graceful degradation: If a save introduces a compile error, the error is logged but the server continues serving the previous working version. Fix the error and save again.

Debouncing: Rapid saves within 200ms are batched into a single reload.

Limitation: Dependency-aware reload is not yet supported. If module A imports module B and B changes, only B is reloaded. Save A (or any file) to trigger its reload too.
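The debounce behavior described above can be sketched in TypeScript (an illustration of the idea only, not the server's actual Go implementation):

```typescript
// Illustrative trailing-edge debounce: rapid change events within
// `delayMs` collapse into a single call to `reload`. The scheduler is
// injectable so the batching logic can be exercised without real timers.
type Schedule = (fn: () => void, ms: number) => () => void;

function makeDebouncer(reload: () => void, delayMs: number, schedule: Schedule): () => void {
  let cancel: (() => void) | undefined;
  return () => {
    cancel?.();               // drop the previously queued reload, if any
    cancel = schedule(() => { // queue a fresh one
      cancel = undefined;
      reload();
    }, delayMs);
  };
}

// With real timers: schedule via setTimeout, cancel via clearTimeout.
const onFileChange = makeDebouncer(
  () => console.log("reloading modules"),
  200,
  (fn, ms) => { const id = setTimeout(fn, ms); return () => clearTimeout(id); },
);
```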

Effect Capabilities

By default, serve-api only supports pure functions (no side effects). AILANG's effect system requires capabilities to be explicitly granted before effectful functions can execute.

How It Works

AILANG functions declare their effects in their type signatures:

-- Pure function: no capabilities needed
export pure func add(x: int, y: int) -> int = x + y

-- IO effect: requires --caps IO
export func greet(name: string) -> string ! {IO} =
  "Hello, " ++ name ++ "!"

-- AI effect: requires --caps AI plus --ai MODEL
export func summarize(text: string) -> string ! {AI} =
  ai_call("Summarize this: " ++ text)

-- Multiple effects: requires --caps IO,Net
export func fetchAndLog(url: string) -> string ! {IO, Net} {
  let body = http_get(url);
  println("Fetched: " ++ url);
  body
}

When serving these modules, grant the matching capabilities:

# Pure functions only (default, no flags needed)
ailang serve-api ./api/

# Grant IO capability
ailang serve-api --caps IO ./api/

# Grant IO and FS capabilities
ailang serve-api --caps IO,FS ./api/

# Grant AI capability with a specific model
ailang serve-api --caps IO,AI --ai gemini-2-5-flash ./api/

# Use stub AI handler for testing (returns fixed responses)
ailang serve-api --caps IO,AI --ai-stub ./api/

Capability Reference

| Capability | Effect | Enables | Example Builtins |
|---|---|---|---|
| IO | {IO} | Console I/O | println, readLine |
| FS | {FS} | File system access | readFile, writeFile |
| Net | {Net} | HTTP requests | http_get, http_post |
| AI | {AI} | LLM API calls | ai_call |
| Clock | {Clock} | Time operations | now, sleep |
| Env | {Env} | Environment variables | env_get |
| SharedMem | {SharedMem} | In-memory key-value cache | cache_get, cache_set |
| SharedIndex | {SharedIndex} | Semantic similarity search | index_add, index_search |

AI Model Configuration

The --ai flag specifies which AI model to use for the AI effect:

# OpenAI models (requires OPENAI_API_KEY env var)
ailang serve-api --caps AI --ai gpt-4o ./api/

# Anthropic models (requires ANTHROPIC_API_KEY env var)
ailang serve-api --caps AI --ai claude-sonnet-4-5 ./api/

# Google models (requires GOOGLE_API_KEY or ADC)
ailang serve-api --caps AI --ai gemini-2-5-flash ./api/

# Local Ollama models (requires running Ollama server)
ailang serve-api --caps AI --ai ollama:llama3 ./api/

# Stub handler for testing (no API key needed)
ailang serve-api --caps AI --ai-stub ./api/

Model names are resolved via models.yml configuration. If not found, the provider is guessed from the model name prefix (claude- → Anthropic, gpt- → OpenAI, gemini- → Google, ollama: → Ollama).
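The prefix fallback described above can be sketched as follows (the real resolver consults models.yml first; this hypothetical function only mirrors the documented prefixes):

```typescript
// Sketch of the documented prefix fallback for --ai model names.
function guessProvider(model: string): string | undefined {
  if (model.startsWith("claude-")) return "anthropic";
  if (model.startsWith("gpt-")) return "openai";
  if (model.startsWith("gemini-")) return "google";
  if (model.startsWith("ollama:")) return "ollama";
  return undefined; // unknown prefix: must be resolved via models.yml
}

console.log(guessProvider("gemini-2-5-flash"));
// → google
```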

What Happens Without Capabilities

If an AILANG function uses an effect but the corresponding capability is not granted, the API returns an error:

# Server started without --caps
ailang serve-api ./api/

# Calling a function that needs IO
curl -X POST http://localhost:8080/api/api/handlers/greet \
  -H "Content-Type: application/json" -d '{"args": ["World"]}'
# {"error":"IO: capability not granted","module":"api/handlers","func":"greet","elapsed_ms":0}

To fix: restart with --caps IO (or whatever capabilities the function requires).

Security note: Capabilities are granted server-wide. All API endpoints share the same capabilities. Only grant capabilities that your AILANG modules actually need.

Frontend Proxy

When using --frontend ./ui, the server:

  1. Checks for vite.config.ts in the frontend directory
  2. Starts npm run dev as a background process
  3. Proxies all non-/api/ requests to Vite (default port 5173)
  4. Provides hot module replacement for React code
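On the Vite side, the scaffold's vite.config.ts forwards /api requests back to the AILANG server. A minimal sketch, assuming the API runs on localhost:8080 (the generated config may differ in detail):

```typescript
// Minimal vite.config.ts sketch: forward /api requests from the Vite
// dev server (port 5173) to the AILANG serve-api process (port 8080).
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      "/api": "http://localhost:8080",
    },
  },
});
```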

Static Serving

For production, build the frontend and serve statically:

cd ui && npm run build && cd ..
ailang serve-api ./api/ --static ./ui/dist

Relationship to Go Interop

serve-api builds on the Go Interop embed API:

| Feature | Go Interop | serve-api |
|---|---|---|
| Setup effort | Write Go code | Zero (CLI command) |
| Customization | Full control | Convention-based |
| Performance | Best | Good (HTTP overhead) |
| Error handling | Custom Go logic | Generic JSON errors |
| Effects | Can provide handlers | Granted server-wide via --caps |
| Use case | Production apps | Dev tools, prototyping, demos |

For production applications requiring custom error handling, effect handlers, or Go-level integration, use the Go Interop embed API directly.


Working Example

A complete working example with automated tests is available at:

examples/web_api_demo/
├── api/
│ ├── math.ail # add, multiply, factorial, fibonacci
│ └── greet.ail # hello, farewell, welcome (with JSON)
├── test.sh # Automated test (17 checks, all passing)
└── README.md

Run the automated tests:

./examples/web_api_demo/test.sh

This starts the server, exercises all endpoints (function calls, introspection, error handling, CORS), and reports pass/fail.