AI Prompt Guide: Teaching AILANG to Language Models
Purpose: This document explains how to use AILANG teaching prompts with AI models.
KPI: One of AILANG's key success metrics is "teachability to AI" - how easily can an LLM learn to write correct AILANG code from a single prompt?
Canonical Prompt
The official AILANG teaching prompt is always available via CLI:
```shell
ailang prompt          # Get current/active prompt
ailang prompt --list   # List all versions
```
Current Teaching Prompt
This prompt is:
- Validated through eval benchmarks - Tested across GPT-5, Gemini 2.5 Pro, Claude Sonnet 4.5
- Up-to-date with latest features - Record updates, auto-import prelude, syntactic sugar
- Versioned with SHA-256 hashing - Reproducible eval results
- Auto-synced - Website always shows the active prompt version
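The SHA-256 versioning mentioned above can be illustrated with a minimal sketch. This is not AILANG's actual implementation, only the general idea: hash the exact prompt text so eval results can be tied to a reproducible prompt version.

```python
import hashlib

def prompt_hash(prompt_text: str) -> str:
    """Return the SHA-256 hex digest of a teaching prompt.

    Any edit to the prompt text, however small, changes the digest,
    so a recorded hash pins eval results to one exact prompt version.
    """
    return hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()

# Identical prompt text always produces the same digest.
v1 = prompt_hash("AILANG teaching prompt: example text")
v2 = prompt_hash("AILANG teaching prompt: example text")
assert v1 == v2
assert v1 != prompt_hash("AILANG teaching prompt: edited text")
```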
Quick Reference
Core Features:
- Module execution with effects
- Recursion (self-recursive and mutually-recursive)
- Block expressions: `{ stmt1; stmt2; result }`
- Records (literals + field access + updates)
- Multi-line ADTs: `type Tree = | Leaf | Node`
- Record update syntax: `{base | field: value}`
- Auto-import std/prelude (no imports needed for comparisons)
- Syntactic sugar: `::` cons, `->` function types, `f()` zero-arg calls
- Effects: IO, FS, Clock, Net, Env, AI, SharedMem, SharedIndex
- Type classes, ADTs, pattern matching
- REPL with full type checking
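Pieced together from the feature list above, here is a tiny AILANG sketch. The ADT, record-update, and block-expression forms are taken verbatim from this document; the `let` bindings and comment syntax are assumptions and may not match the real grammar, so treat this as illustrative rather than canonical:

```
-- Multi-line ADT (comment syntax assumed)
type Tree =
  | Leaf
  | Node

-- Record literal, record update {base | field: value}, and a block expression
let base = {label: "root", depth: 0};
{ let updated = {base | depth: 1}; updated.depth }
```

For verified, runnable syntax, always defer to the output of `ailang prompt`.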
Known Limitations:
- See Implementation Status for current limitations
- See LIMITATIONS.md for workarounds
For the complete syntax guide, see the Current Teaching Prompt or run `ailang prompt`.
Using the Prompt
For AI Code Generation
When asking an AI model (Claude, GPT, Gemini) to write AILANG code, provide it with the full prompt from `ailang prompt`.
Example usage:
```text
I need you to write AILANG code to solve this problem: [problem description]

First, read this AILANG syntax guide:

[paste output from: ailang prompt]

Now write the code.
```
Via CLI (Recommended)
```shell
# Get current/active prompt
ailang prompt

# Get specific version
ailang prompt --version v0.5.11

# List all available versions
ailang prompt --list

# Save to file
ailang prompt > syntax.md
```
For Eval Benchmarks
The eval harness automatically loads the correct prompt version from prompts/versions.json:
```yaml
# benchmarks/example.yml
id: example_task
languages: ["ailang", "python"]
# Prompt version is auto-resolved from prompts/versions.json
task_prompt: |
  Write a program that [task description]
```
See benchmarks/README.md for details.
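The auto-resolution step can be sketched as follows. The JSON layout shown here is hypothetical; the real schema of `prompts/versions.json` may differ, and this only illustrates how a harness could look up the active prompt version:

```python
import json

# Hypothetical versions.json layout (field names are assumptions).
VERSIONS_JSON = """
{
  "active": "v0.5.11",
  "versions": {
    "v0.5.10": {"sha256": "aaa...", "notes": "initial"},
    "v0.5.11": {"sha256": "bbb...", "notes": "record updates"}
  }
}
"""

def resolve_active_prompt(versions_doc: str) -> str:
    """Return the version id an eval harness would load by default."""
    doc = json.loads(versions_doc)
    active = doc["active"]
    if active not in doc["versions"]:
        raise KeyError(f"active version {active} has no entry")
    return active

print(resolve_active_prompt(VERSIONS_JSON))  # → v0.5.11
```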
Current Prompt
View: Current Teaching Prompt, or run `ailang prompt`
Core Features Documented:
- Multi-line ADTs: `type Tree = | Leaf | Node`
- Record updates: `{base | field: value}`
- Auto-import std/prelude
- Syntactic sugar: `::` cons, `->` types, `f()` zero-arg calls
- Full module system with effects (IO, FS, Clock, Net, Env, AI)
- Pattern matching, recursion, type classes
- Semantic caching with SimHash and neural embeddings
- SharedMem/SharedIndex effects for AI agent working memory
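The SimHash technique behind the semantic caching listed above can be sketched generically. This is not AILANG's actual cache code, just the standard algorithm: every token votes on each bit position, so texts that share most tokens end up with fingerprints differing in only a few bits.

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    """Generic SimHash over whitespace tokens (illustrative sketch)."""
    counts = [0] * bits
    for token in text.lower().split():
        # Derive a stable 64-bit hash per token.
        h = int.from_bytes(hashlib.md5(token.encode("utf-8")).digest()[:8], "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    fingerprint = 0
    for i, c in enumerate(counts):
        if c > 0:
            fingerprint |= 1 << i
    return fingerprint

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Identical queries always collide (distance 0), giving exact cache hits;
# near-duplicate queries typically land within a few bits of each other.
```

A semantic cache built on this can reuse a stored answer whenever the Hamming distance between two query fingerprints falls below a chosen threshold.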
Why prompt quality matters:
- Better AI code generation
- Reproducible eval results
- Consistent teaching across models
Eval Results
Current success rates:
- See Benchmark Dashboard for latest metrics
- Best model: Claude Sonnet 4.5 (consistently highest success rates)
- Results updated after each release
Key Insights:
- Teaching prompt quality directly impacts AI success rates
- Multi-model testing reveals universal vs model-specific patterns
- Iterative prompt improvements correlate with better code generation
Contributing Improvements
If you find ways to improve the AILANG teaching prompt:
1. Test your changes with the eval harness: `ailang eval-suite --models gpt5-mini,claude-haiku-4-5`
2. Measure impact by comparing success rates
3. Create a new prompt version in `prompts/`, following the versioning system
4. Document changes with a SHA-256 hash and notes in `prompts/versions.json`
5. Update the active version in `prompts/versions.json` when ready
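The "measure impact" step amounts to comparing per-benchmark pass rates between prompt versions. The result shape below is hypothetical (the real eval harness output format may differ); it only shows the comparison itself:

```python
# Hypothetical per-benchmark pass/fail results for two prompt versions.
baseline = {"fib": True, "records": False, "adt_match": True, "effects_io": False}
candidate = {"fib": True, "records": True, "adt_match": True, "effects_io": False}

def success_rate(results: dict) -> float:
    """Fraction of benchmarks that passed."""
    return sum(results.values()) / len(results)

def impact(before: dict, after: dict) -> float:
    """Percentage-point change in success rate between two prompt versions."""
    return (success_rate(after) - success_rate(before)) * 100

print(f"{impact(baseline, candidate):+.1f} pp")  # → +25.0 pp
```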
See Also
- CLAUDE.md - Instructions for AI assistants working on AILANG development
- examples/ - Working AILANG code examples
- Language Reference - Complete AILANG syntax guide
- benchmarks/ - Eval harness benchmark suite