Multi-provider AI library with intelligent model selection, type-safe context management, and comprehensive provider support.
@aeye (AI TypeScript) is a modern, type-safe AI library for Node.js and TypeScript applications. It provides a unified interface for working with multiple AI providers (OpenAI, OpenRouter, Replicate, AWS Bedrock, and more) with automatic model selection, cost tracking, streaming support, and extensible architecture.
To see a complete example of a CLI agent built with aeye, install Cletus (`npm i -g @aeye/cletus`) and run `cletus`!
```ts
import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';

const openai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
const ai = AI.with<MyContext>().providers({ openai }).create({ /* config */ });

const myTool = ai.tool({ /* name, description, instructions, schema, call, + */ });
const myPrompt = ai.prompt({ /* name, description, content, schema?, config, tools, metadata, + */ });
const myAgent = ai.agent({ /* name, description, refs, call, + */ });

myTool.run(input, ctx?);              // run a tool
myPrompt.run(input, ctx?);            // run a prompt (streaming generator)
myPrompt.get('result', input, ctx?);  // get structured result
myPrompt.get('stream', input, ctx?);  // stream all events
myAgent.run(input, ctx?);             // run an agent

ai.chat.get(request, ctx?);           // or .stream(request, ctx?)
ai.image.generate.get(request, ctx?); // or .stream(request, ctx?)
ai.image.edit.get(request, ctx?);     // or .stream(request, ctx?)
ai.image.analyze.get(request, ctx?);  // or .stream(request, ctx?)
ai.speech.get(request, ctx?);         // or .stream(request, ctx?)
ai.transcribe.get(request, ctx?);     // or .stream(request, ctx?)
ai.embed.get(request, ctx?);

ai.models.list();                     // .get(id), .search(criteria), .select(criteria), .refresh()
```

- 🎯 Multi-Provider Support - Single interface for OpenAI, OpenRouter, Replicate, AWS Bedrock, and custom providers
- 🤖 Intelligent Model Selection - Automatic model selection based on capabilities, cost, speed, and quality
- 💰 Cost Tracking - Built-in token usage and cost calculation with provider-reported costs
- 🔄 Streaming Support - Full streaming support across all compatible capabilities
- 🛡️ Type-Safe - Strongly-typed context and metadata with compiler validation
- 🎨 Comprehensive APIs - Chat, Image Generation, Speech Synthesis, Transcription, Embeddings
- 🔌 Extensible - Custom providers, model handlers, and transformers
- 📊 Model Registry - Centralized model management with external sources
- 🤖 Tools, Prompts & Agents - Composable components for building sophisticated AI workflows
- 🎣 Lifecycle Hooks - Intercept and modify operations at every stage
- 🔧 Model Overrides - Customize model properties without modifying providers
- 📦 Model Sources - External model sources (OpenRouter, custom APIs)
- 🌊 Context Management - Thread context through your entire AI operation
- 🎛️ Fine-Grained Control - Temperature, tokens, stop sequences, tool calling, and more
```sh
# Install core packages
npm install @aeye/ai @aeye/core

# Install provider packages as needed
npm install @aeye/openai openai        # OpenAI
npm install @aeye/openrouter           # OpenRouter (multi-provider)
npm install @aeye/replicate replicate  # Replicate
npm install @aeye/aws                  # AWS
```

```ts
import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';

// Create providers
const openai = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!
});

// Create AI instance
const ai = AI.with()
  .providers({ openai })
  .create();

// Chat completion
const response = await ai.chat.get({
  messages: [{ role: 'user', content: 'What is TypeScript?' }]
});
console.log(response.content);

// Streaming
for await (const chunk of ai.chat.stream({
  messages: [{ role: 'user', content: 'Write a poem about AI' }]
})) {
  if (chunk.content) {
    process.stdout.write(chunk.content);
  }
}
```

```ts
import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';
import { OpenRouterProvider } from '@aeye/openrouter';
import { ReplicateProvider } from '@aeye/replicate';
import { AWSBedrockProvider } from '@aeye/aws';

const openai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
const openrouter = new OpenRouterProvider({ apiKey: process.env.OPENROUTER_API_KEY! });
const replicate = new ReplicateProvider({ apiKey: process.env.REPLICATE_API_KEY! });
const aws = new AWSBedrockProvider({ region: 'us-east-1' });

const ai = AI.with()
  .providers({ openai, openrouter, replicate, aws })
  .create({
    // Default scoring weights for automatic model selection
    defaultWeights: {
      cost: 0.4,
      speed: 0.3,
      accuracy: 0.3,
    }
  });

// AI automatically selects the best provider/model
const response = await ai.chat.get({
  messages: [{ role: 'user', content: 'Explain quantum computing' }]
});
```

```mermaid
graph TD
    AI["<b>AI Class</b><br/>Context Management<br/>Model Registry<br/>Lifecycle Hooks"]
    APIs["<b>APIs</b><br/>Chat · Image<br/>Speech · Embed"]
    Registry["<b>Registry</b><br/>Models · Search · Select"]
    Providers["<b>Providers</b><br/>OpenAI · OpenRouter<br/>Replicate · AWS · Custom"]

    AI --> APIs
    AI --> Registry
    Registry --> Providers
    APIs --> Providers
```
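The `defaultWeights` above steer automatic model selection. Conceptually, each candidate model is scored on normalized attributes and the highest-scoring one wins. The following is a generic illustration of that idea with invented model profiles; it is not the library's actual selection algorithm:

```typescript
// Hypothetical model profile: higher speed/accuracy is better, higher cost is worse.
interface ModelProfile {
  id: string;
  costPerMTok: number;  // dollars per million tokens
  tokensPerSec: number; // throughput
  accuracy: number;     // benchmark score in [0, 1]
}

interface Weights { cost: number; speed: number; accuracy: number }

// Score each model on normalized attributes and pick the best.
function selectModel(models: ModelProfile[], w: Weights): ModelProfile {
  const maxCost = Math.max(...models.map(m => m.costPerMTok));
  const maxSpeed = Math.max(...models.map(m => m.tokensPerSec));
  const score = (m: ModelProfile) =>
    w.cost * (1 - m.costPerMTok / maxCost) + // cheaper  -> higher score
    w.speed * (m.tokensPerSec / maxSpeed) +  // faster   -> higher score
    w.accuracy * m.accuracy;                 // stronger -> higher score
  return models.reduce((best, m) => (score(m) > score(best) ? m : best));
}

const candidates: ModelProfile[] = [
  { id: 'small', costPerMTok: 0.5, tokensPerSec: 150, accuracy: 0.7 },
  { id: 'large', costPerMTok: 30, tokensPerSec: 40, accuracy: 0.95 },
];

// Cost-heavy weights favor the cheap model; accuracy-heavy weights favor the strong one.
const cheap = selectModel(candidates, { cost: 0.8, speed: 0.1, accuracy: 0.1 });
const smart = selectModel(candidates, { cost: 0.0, speed: 0.1, accuracy: 0.9 });
// cheap.id === 'small', smart.id === 'large'
```

Shifting weight between `cost` and `accuracy` flips the winner, which is exactly the trade-off the `defaultWeights` config expresses.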
Core primitives for building AI agents, tools, and prompts with TypeScript. Provides the foundational Tool, Prompt, and Agent classes along with all shared types.
```sh
npm install @aeye/core
```

Main AI library with intelligent model selection, context management, and comprehensive APIs. Built on top of @aeye/core.
```sh
npm install @aeye/ai @aeye/core
```

OpenAI provider supporting chat completions, image generation, speech synthesis, transcription, and embeddings. Also serves as a base class for OpenAI-compatible providers.
```sh
npm install @aeye/openai openai
```

Features:
- Chat completions with vision support
- Reasoning models
- Image generation and editing
- Speech-to-text (transcription)
- Text-to-speech
- Embeddings
- Tool/function calling
- Structured outputs
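Structured outputs constrain the model to reply with JSON matching a schema, which the client then parses and validates (in aeye this is driven by the zod `schema` shown in the prompt examples below). A minimal hand-rolled sketch of the client-side validation step, with an invented `WeatherReport` shape:

```typescript
// Hypothetical raw model reply that should follow a { city, tempC } schema.
const raw = '{"city": "Paris", "tempC": 18}';

interface WeatherReport { city: string; tempC: number }

// Narrow unknown JSON to the expected shape (a simplified stand-in for a zod schema).
function parseWeather(json: string): WeatherReport {
  const value: unknown = JSON.parse(json);
  if (
    typeof value === 'object' && value !== null &&
    typeof (value as any).city === 'string' &&
    typeof (value as any).tempC === 'number'
  ) {
    return value as WeatherReport;
  }
  throw new Error('Model output did not match the expected schema');
}

const report = parseWeather(raw);
// report.city === 'Paris', report.tempC === 18
```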
OpenRouter provider for unified access to hundreds of AI models from multiple providers with competitive pricing.
```sh
npm install @aeye/openrouter
```

Features:
- Access to models from OpenAI, Anthropic, Google, Meta, and more
- Automatic fallbacks
- Built-in cost tracking
- Zero Data Retention (ZDR) support
- Provider routing preferences
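Automatic fallback means a request is retried against the next candidate model when one fails. Conceptually it looks like the following generic sketch (stubbed calls and names are invented; this is not OpenRouter's actual routing logic):

```typescript
type ChatCall = (prompt: string) => Promise<string>;

// Try each candidate in order, returning the first successful response.
async function withFallback(
  candidates: { id: string; call: ChatCall }[],
  prompt: string
): Promise<{ model: string; content: string }> {
  const errors: string[] = [];
  for (const c of candidates) {
    try {
      return { model: c.id, content: await c.call(prompt) };
    } catch (err) {
      errors.push(`${c.id}: ${(err as Error).message}`);
    }
  }
  throw new Error(`All models failed:\n${errors.join('\n')}`);
}

// Demo with stubbed calls: the first model is "down", the second answers.
const flaky: ChatCall = async () => { throw new Error('rate limited'); };
const stable: ChatCall = async (p) => `echo: ${p}`;

const result = await withFallback(
  [{ id: 'primary', call: flaky }, { id: 'backup', call: stable }],
  'hello'
);
// result.model === 'backup'
```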
Replicate provider with flexible adapter system for running open-source AI models.
```sh
npm install @aeye/replicate replicate
```

Features:
- Thousands of open-source models
- Model adapters for handling diverse schemas
- Image generation, transcription, embeddings
- Custom model support
AWS Bedrock provider supporting a wide range of foundation models via the Converse API.
```sh
npm install @aeye/aws
```

Features:
- Chat completions with models from Anthropic, Meta, Mistral, Amazon, and more
- Image generation (Stability AI)
- Text embeddings (Amazon Titan)
- Automatic AWS credential discovery
```ts
import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';

const openai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
const ai = AI.with().providers({ openai }).create();

const response = await ai.chat.get({
  messages: [{ role: 'user', content: 'What is TypeScript?' }]
});
console.log(response.content);
```

```ts
const imageResponse = await ai.image.generate.get({
  prompt: 'A serene mountain landscape at sunset',
  size: '1024x1024',
  quality: 'high'
});
console.log('Image URL:', imageResponse.images[0].url);
```

The recommended way to use tools is through the `ai.tool()` and `ai.prompt()` factories, which bind your components to the AI instance:
```ts
import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';
import z from 'zod';

const openai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
const ai = AI.with().providers({ openai }).create();

// Define a tool
const getWeather = ai.tool({
  name: 'getWeather',
  description: 'Get current weather for a city',
  instructions: 'Use this tool to fetch current weather data for {{location}}.',
  schema: z.object({
    location: z.string().describe('City name, e.g. "San Francisco"'),
    units: z.enum(['celsius', 'fahrenheit']).default('celsius'),
  }),
  call: async ({ location, units }) => {
    // In a real app, call a weather API here
    return { temperature: 18, condition: 'sunny', units };
  },
});

// Create a prompt that uses the tool
const weatherAdvisor = ai.prompt({
  name: 'weatherAdvisor',
  description: 'Gives travel clothing advice based on weather',
  content: 'You are a helpful travel advisor. The user is visiting {{destination}}. Check the weather and suggest what to wear.',
  input: (input: { destination: string }) => ({ destination: input.destination }),
  tools: [getWeather],
  schema: z.object({
    suggestion: z.string().describe('Clothing and packing suggestion'),
    temperature: z.number().describe('Current temperature'),
    condition: z.string().describe('Weather condition'),
  }),
});

// The prompt automatically calls the weather tool and returns structured output
const advice = await weatherAdvisor.get('result', { destination: 'Paris' });
console.log(advice?.suggestion); // "Bring a light jacket, it's 18°C and sunny."
```

Agents coordinate multiple tools and prompts to accomplish complex goals:
```ts
import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';
import z from 'zod';

const openai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
const ai = AI.with().providers({ openai }).create();

// Define individual tools
const searchFiles = ai.tool({
  name: 'searchFiles',
  description: 'Search for files matching a glob pattern',
  schema: z.object({
    pattern: z.string().describe('Glob pattern, e.g. "**/*.ts"'),
  }),
  call: async ({ pattern }) => {
    // Return matching file paths
    return { files: [`src/index.ts`, `src/app.ts`] };
  },
});

const readFile = ai.tool({
  name: 'readFile',
  description: 'Read the contents of a file',
  schema: z.object({
    path: z.string().describe('File path to read'),
  }),
  call: async ({ path }) => {
    // Return file contents
    return { content: `// Contents of ${path}` };
  },
});

const summarizeCode = ai.prompt({
  name: 'summarizeCode',
  description: 'Summarizes TypeScript code',
  content: 'Summarize the following TypeScript code:\n\n{{code}}',
  input: (input: { code: string }) => ({ code: input.code }),
  schema: z.object({
    summary: z.string(),
    exports: z.array(z.string()),
  }),
});

// Agent that finds, reads, and summarizes code files
const codeReviewer = ai.agent({
  name: 'codeReviewer',
  description: 'Reviews TypeScript files and produces summaries',
  refs: [searchFiles, readFile, summarizeCode] as const,
  call: async ({ pattern }: { pattern: string }, [search, read, summarize], ctx) => {
    const { files } = await search.run({ pattern }, ctx);
    const summaries: Array<{ file: string; summary: string; exports: string[] }> = [];
    for (const file of files) {
      const { content } = await read.run({ path: file }, ctx);
      const result = await summarize.get('result', { code: content }, ctx);
      summaries.push({ file, summary: result?.summary ?? '', exports: result?.exports ?? [] });
    }
    return summaries;
  },
});

const results = await codeReviewer.run({ pattern: 'src/**/*.ts' });
results.forEach(({ file, summary }) => console.log(`${file}: ${summary}`));
```

Here are some examples inspired by the Cletus CLI agent to show how Tools, Agents, and Prompts work together:
```ts
import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';
import z from 'zod';

interface AppContext {
  userId: string;
  db: { todos: Map<string, { id: string; name: string; done: boolean }> };
}

const openai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
const ai = AI.with<AppContext>().providers({ openai }).create();

const addTodo = ai.tool({
  name: 'addTodo',
  description: 'Add a new to-do item',
  schema: z.object({
    name: z.string().describe('The to-do item description'),
  }),
  call: async ({ name }, _refs, ctx) => {
    const id = crypto.randomUUID();
    ctx.db.todos.set(id, { id, name, done: false });
    return { id, name, done: false };
  },
});

const listTodos = ai.tool({
  name: 'listTodos',
  description: 'List all to-do items',
  schema: z.object({}),
  call: async (_input, _refs, ctx) => {
    return { todos: Array.from(ctx.db.todos.values()) };
  },
});

const markDone = ai.tool({
  name: 'markDone',
  description: 'Mark a to-do item as complete',
  schema: z.object({
    id: z.string().describe('The to-do item ID'),
  }),
  call: async ({ id }, _refs, ctx) => {
    const todo = ctx.db.todos.get(id);
    if (!todo) throw new Error(`Todo ${id} not found`);
    todo.done = true;
    return { success: true, todo };
  },
});

// A prompt that uses all three tools
const taskManager = ai.prompt({
  name: 'taskManager',
  description: 'Manages to-do items via natural language',
  content: `You are a helpful task manager assistant. Help the user manage their to-do list.

User request: {{request}}`,
  input: (input: { request: string }) => ({ request: input.request }),
  tools: [addTodo, listTodos, markDone],
});

// Usage
const db = { todos: new Map() };
await taskManager.get('result', { request: 'Add a todo to finish the report' }, { userId: 'user1', db });
```

```ts
import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';
import z from 'zod';

const openai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
const ai = AI.with().providers({ openai }).create();

const searchKnowledge = ai.tool({
  name: 'searchKnowledge',
  description: 'Semantically search the knowledge base',
  instructions: 'Use this to find relevant information in the knowledge base for the query: "{{query}}"',
  schema: z.object({
    query: z.string().describe('Search query'),
    limit: z.number().optional().describe('Max results (default 5)'),
  }),
  input: (ctx) => ({ query: '' }), // template variable for instructions
  call: async ({ query, limit = 5 }) => {
    // In a real app, use vector embeddings and similarity search
    return { results: [{ source: 'docs/api.md', text: 'API documentation...' }] };
  },
});

const knowledgeAssistant = ai.prompt({
  name: 'knowledgeAssistant',
  description: 'Answers questions using the knowledge base',
  content: `You are a helpful assistant. Use the searchKnowledge tool to find relevant information, then answer the user's question.

Question: {{question}}`,
  input: (input: { question: string }) => ({ question: input.question }),
  tools: [searchKnowledge],
});

const answer = await knowledgeAssistant.get('result', {
  question: 'How do I configure the API timeout?'
});
console.log(answer);
```

Hooks let you intercept every AI call. Here's how to check estimated cost before running and record actual cost after, using a user budget from context:
```ts
import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';

interface User {
  id: string;
  budgetRemaining: number; // in dollars
  totalSpent: number;
  save: () => Promise<void>;
}

interface AppContext {
  user: User;
}

const openai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
const ai = AI.with<AppContext>()
  .providers({ openai })
  .create({
    hooks: {
      beforeRequest: async (ctx, request, selected, estimatedUsage, estimatedCost) => {
        // Throw to cancel the request if the estimated cost exceeds the user's budget
        if (estimatedCost > ctx.user.budgetRemaining) {
          throw new Error(
            `Request cancelled: estimated cost $${estimatedCost.toFixed(4)} exceeds ` +
            `remaining budget $${ctx.user.budgetRemaining.toFixed(4)}`
          );
        }
        console.log(
          `[${ctx.user.id}] Using ${selected.model.id}, ` +
          `estimated cost: $${estimatedCost.toFixed(4)}`
        );
      },
      afterRequest: async (ctx, request, response, responseComplete, selected, usage, cost) => {
        // Deduct actual cost from user's budget and record spending
        ctx.user.budgetRemaining -= cost;
        ctx.user.totalSpent += cost;
        await ctx.user.save();
        console.log(
          `[${ctx.user.id}] Used ${usage.text?.input ?? 0} in / ${usage.text?.output ?? 0} out tokens, ` +
          `cost: $${cost.toFixed(4)}, budget remaining: $${ctx.user.budgetRemaining.toFixed(4)}`
        );
      },
      onError: (errorType, message, error, ctx) => {
        console.error(`[AI Error] ${errorType}: ${message}`, error?.message);
      },
    }
  });

// Chat with budget enforcement
const user: User = {
  id: 'user123',
  budgetRemaining: 0.05,
  totalSpent: 0,
  save: async () => { /* persist to database */ },
};

const response = await ai.chat.get(
  { messages: [{ role: 'user', content: 'Explain monads in simple terms' }] },
  { user }
);
console.log(response.content);
```

```ts
// Explicit model selection via metadata
const response = await ai.chat.get(
  { messages: [{ role: 'user', content: 'Hello' }] },
  { metadata: { model: 'openai/gpt-4o' } }
);

// Automatic selection with scoring weights
const precise = await ai.chat.get(
  { messages: [{ role: 'user', content: 'Analyze this code' }] },
  {
    metadata: {
      weights: { cost: 0.2, speed: 0.3, accuracy: 0.5 },
      contextWindow: { min: 32000 },
    }
  }
);

// Provider filtering
const costEfficient = await ai.chat.get(
  { messages: [{ role: 'user', content: 'Summarize this' }] },
  {
    metadata: {
      providers: {
        allow: ['openai', 'openrouter'],
        deny: ['replicate'],
      }
    }
  }
);
```

```ts
import fs from 'fs';
import { Readable } from 'stream';

const response = await ai.speech.get({
  text: 'Hello! This is a text-to-speech example.',
  voice: 'alloy',
});

// Pipe the audio stream to a file
const fileStream = fs.createWriteStream('output.mp3');
Readable.fromWeb(response.audio).pipe(fileStream);
```

```ts
import fs from 'fs';

const audioBuffer = fs.readFileSync('audio.mp3');
const transcription = await ai.transcribe.get({
  audio: audioBuffer,
  language: 'en',
});
console.log('Transcription:', transcription.text);
```

```ts
const embeddingResponse = await ai.embed.get({
  texts: [
    'The quick brown fox jumps over the lazy dog',
    'Machine learning is a subset of artificial intelligence',
  ],
});

embeddingResponse.embeddings.forEach((item, i) => {
  console.log(`Embedding ${i}:`, item.embedding.length, 'dimensions');
});
```

```ts
interface AppContext {
  userId: string;
  sessionId: string;
}

const ai = AI.with<AppContext>()
  .providers({ openai })
  .create({
    providedContext: async (ctx) => ({
      // Automatically enrich context from the database
      // (user and session data are fetched here and available in hooks/tools)
    }),
  });

const response = await ai.chat.get(
  { messages: [{ role: 'user', content: 'Hello!' }] },
  { userId: 'user123', sessionId: 'session456' }
);
```

Create custom providers by extending existing providers:
```ts
import { OpenAIProvider, OpenAIConfig } from '@aeye/openai';
import OpenAI from 'openai';

class CustomProvider extends OpenAIProvider {
  readonly name = 'custom';

  protected createClient(config: OpenAIConfig) {
    return new OpenAI({
      apiKey: config.apiKey,
      baseURL: 'https://custom-api.example.com/v1',
    });
  }
}
```

Fetch models from external sources:

```ts
import { OpenRouterModelSource } from '@aeye/openrouter';

const source = new OpenRouterModelSource({
  apiKey: process.env.OPENROUTER_API_KEY,
});

const ai = AI.with()
  .providers({ openrouter })
  .create({
    modelSources: [source],
  });
```

Customize model properties:

```ts
const ai = AI.with()
  .providers({ openai })
  .create({
    modelOverrides: [
      {
        modelPattern: /gpt-4/,
        overrides: {
          pricing: {
            text: { input: 30, output: 60 },
          },
        },
      },
    ],
  });
```

@aeye provides comprehensive cost tracking:
```ts
const response = await ai.chat.get({
  messages: [{ role: 'user', content: 'Hello' }]
});

// Token usage
console.log('Input tokens:', response.usage?.text?.input);
console.log('Output tokens:', response.usage?.text?.output);

// Cost (calculated from model pricing, or provider-reported when available)
console.log('Cost: $', response.usage?.cost);
```

```sh
# Install dependencies
npm install

# Build all packages
npm run build

# Run tests
npm run test

# Clean build artifacts
npm run clean
```

```
aeye/
├── packages/
│   ├── core/        # Core types, Tool, Prompt, Agent
│   ├── ai/          # Main AI library
│   ├── openai/      # OpenAI provider
│   ├── openrouter/  # OpenRouter provider
│   ├── replicate/   # Replicate provider
│   ├── aws/         # AWS Bedrock provider
│   └── cletus/      # Example CLI agent
├── package.json     # Root package configuration
└── tsconfig.json    # TypeScript configuration
```
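The cost figures reported throughout (`response.usage?.cost`, the `cost` arguments in hooks) reduce to token counts multiplied by per-token pricing. A generic illustration, using the $30 input / $60 output per-million-token prices from the model-override example (the token counts are invented):

```typescript
interface TextPricing { input: number; output: number } // dollars per million tokens

// Compute request cost from token usage and a model's pricing.
function calculateCost(
  usage: { input: number; output: number },
  pricing: TextPricing
): number {
  return (usage.input / 1_000_000) * pricing.input +
         (usage.output / 1_000_000) * pricing.output;
}

// 1,200 input + 350 output tokens at $30 / $60 per million tokens.
const cost = calculateCost({ input: 1200, output: 350 }, { input: 30, output: 60 });
// cost ≈ 0.057  ($0.036 in + $0.021 out)
```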
- **API Key Security** - Never hardcode API keys; use environment variables
- **Streaming** - Use streaming for better UX with lengthy responses
- **Cost Monitoring** - Use `afterRequest` hooks to track expenses per user
- **Budget Enforcement** - Throw from `beforeRequest` to cancel over-budget requests
- **Context Management** - Use `providedContext` to enrich context from databases
- **Provider Selection** - Choose providers based on:
  - Cost efficiency
  - Feature availability
  - Reliability/uptime
  - Privacy requirements (ZDR)
- Built-in retry logic with exponential backoff
- Rate limiting utilities
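The retry behavior mentioned above works conceptually like exponential backoff: wait, retry, and double the delay after each failure. A generic sketch (parameter names and delay schedule are invented, not the library's actual API):

```typescript
// Retry an async operation with exponentially growing delays: base, 2x, 4x, ...
async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 100 } = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break; // out of attempts
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Demo: fail twice, then succeed on the third attempt.
let attempts = 0;
const value = await withRetry(async () => {
  attempts++;
  if (attempts < 3) throw new Error('transient failure');
  return 'ok';
}, { retries: 3, baseDelayMs: 1 });
// value === 'ok', attempts === 3
```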
Contributions are welcome! Areas where we'd especially appreciate help:
- New Providers - Google, Cohere, etc.
- Model Adapters - For Replicate and other platforms
- Documentation - Examples, tutorials, guides
- Testing - Unit tests, integration tests
- Bug Fixes - Issue reports and fixes
Please see the main @aeye repository for contribution guidelines.
GPL-3.0 © ClickerMonkey
See LICENSE for details.
- GitHub Issues: https://github.com/ClickerMonkey/aeye/issues
- Documentation: https://github.com/ClickerMonkey/aeye
Made with TypeScript | GPL-3.0 Licensed | Production Ready