# API Reference - Core

Core types and utilities for Mojentic.
## Error Module

### Result Type

```typescript
type Result<T, E extends Error> = Ok<T> | Err<E>;
```

A discriminated union representing either success (`Ok`) or failure (`Err`).
#### Ok

```typescript
interface Ok<T> {
  ok: true;
  value: T;
}
```

Represents a successful result.

Creating:

```typescript
import { Ok } from 'mojentic';

const result = Ok(42);
const result2 = Ok({ name: "Alice", age: 30 });
```

#### Err
```typescript
interface Err<E extends Error> {
  ok: false;
  error: E;
}
```

Represents a failed result.

Creating:

```typescript
import { Err } from 'mojentic';

const result = Err(new Error("Something went wrong"));
const result2 = Err(new ValidationError("Invalid input"));
```

### Type Guards
#### isOk

```typescript
function isOk<T, E extends Error>(result: Result<T, E>): result is Ok<T>
```

Type guard to check if a result is `Ok`.

Example:

```typescript
const result = await broker.generate(messages);
if (isOk(result)) {
  console.log(result.value); // Type: string
}
```

#### isErr
```typescript
function isErr<T, E extends Error>(result: Result<T, E>): result is Err<E>
```

Type guard to check if a result is `Err`.

Example:

```typescript
const result = await broker.generate(messages);
if (isErr(result)) {
  console.error(result.error); // Type: Error
}
```

### Utility Functions
#### unwrap

```typescript
function unwrap<T, E extends Error>(result: Result<T, E>): T
```

Extracts the value from an `Ok` result, or throws the error from an `Err` result.

Example:

```typescript
const result = Ok(42);
const value = unwrap(result); // 42

const errorResult = Err(new Error("Failed"));
unwrap(errorResult); // Throws Error
```

#### unwrapOr
```typescript
function unwrapOr<T, E extends Error>(result: Result<T, E>, defaultValue: T): T
```

Returns the value from an `Ok` result, or a default value if `Err`.

Example:

```typescript
const result = Err(new Error("Failed"));
const value = unwrapOr(result, 0); // 0
```

#### mapResult
```typescript
function mapResult<T, U, E extends Error>(
  result: Result<T, E>,
  fn: (value: T) => U
): Result<U, E>
```

Transforms the `Ok` value while preserving `Err`.

Example:

```typescript
const result = Ok(5);
const doubled = mapResult(result, x => x * 2); // Ok(10)
```

#### mapError
```typescript
function mapError<T, E extends Error, F extends Error>(
  result: Result<T, E>,
  fn: (error: E) => F
): Result<T, F>
```

Transforms the `Err` value while preserving `Ok`.

Example:

```typescript
const result = Err(new Error("Failed"));
const mapped = mapError(result, e => new ValidationError(e.message));
```
### Error Classes

#### MojenticError

Base error class for all Mojentic errors.

```typescript
class MojenticError extends Error {
  constructor(message: string)
}
```

#### GatewayError
Errors from LLM gateway operations.

```typescript
class GatewayError extends MojenticError {
  constructor(
    message: string,
    public readonly gateway: string,
    public readonly statusCode?: number
  )
}
```

Properties:

- `gateway`: Name of the gateway that threw the error
- `statusCode`: Optional HTTP status code
#### ToolError

Errors from tool execution.

```typescript
class ToolError extends MojenticError {
  constructor(
    message: string,
    public readonly toolName: string
  )
}
```

Properties:

- `toolName`: Name of the tool that threw the error
#### ValidationError

Schema validation errors.

```typescript
class ValidationError extends MojenticError {
  constructor(message: string)
}
```

#### ParseError
JSON parsing errors.

```typescript
class ParseError extends MojenticError {
  constructor(message: string)
}
```

#### TimeoutError
Timeout errors.

```typescript
class TimeoutError extends MojenticError {
  constructor(
    message: string,
    public readonly timeoutMs: number
  )
}
```

Properties:

- `timeoutMs`: The timeout duration in milliseconds
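Because every class above extends `MojenticError`, callers can branch on `instanceof`, checking the most specific class first. A sketch — the class bodies here are illustrative re-declarations of the documented constructors, not the library source; import the real classes from 'mojentic':

```typescript
// Illustrative re-declarations of the documented error hierarchy.
class MojenticError extends Error {
  constructor(message: string) {
    super(message);
    this.name = new.target.name; // e.g. "GatewayError" for subclasses
  }
}

class GatewayError extends MojenticError {
  constructor(
    message: string,
    public readonly gateway: string,
    public readonly statusCode?: number
  ) {
    super(message);
  }
}

class TimeoutError extends MojenticError {
  constructor(message: string, public readonly timeoutMs: number) {
    super(message);
  }
}

// Branch from most specific to least specific: instanceof also matches
// superclasses, so checking MojenticError first would swallow the others.
function describe(e: Error): string {
  if (e instanceof GatewayError) return `gateway ${e.gateway} failed (status ${e.statusCode ?? "n/a"})`;
  if (e instanceof TimeoutError) return `timed out after ${e.timeoutMs}ms`;
  if (e instanceof MojenticError) return `mojentic error: ${e.message}`;
  return `unexpected: ${e.message}`;
}

const msg = describe(new GatewayError("rate limited", "openai", 429));
// msg === "gateway openai failed (status 429)"
```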
## Message Module

### MessageRole

```typescript
enum MessageRole {
  System = 'system',
  User = 'user',
  Assistant = 'assistant',
  Tool = 'tool'
}
```

Role identifier for messages in a conversation.
### LlmMessage

```typescript
interface LlmMessage {
  role: MessageRole;
  content: string;
  name?: string;
  toolCalls?: ToolCall[];
  toolCallId?: string;
}
```

A message in a conversation.

Properties:

- `role`: Who sent the message
- `content`: Message text
- `name`: Optional name/identifier
- `toolCalls`: Optional tool calls (for assistant messages)
- `toolCallId`: Optional tool call ID (for tool result messages)
### Message Helpers

```typescript
class Message {
  static system(content: string): LlmMessage
  static user(content: string, name?: string): LlmMessage
  static assistant(content: string, toolCalls?: ToolCall[]): LlmMessage
  static tool(content: string, toolCallId: string, name: string): LlmMessage
}
```

Helper class for creating messages.

Examples:

```typescript
const system = Message.system('You are a helpful assistant');
const user = Message.user('Hello!');
const assistant = Message.assistant('Hi there!');
const tool = Message.tool('{"result": 42}', 'call_123', 'calculator');
```
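The helpers above slot together into a conversation array, including the tool round-trip: the assistant message carries the tool call, and the tool message echoes back the same call id so the LLM can correlate them. A self-contained sketch — the declarations below are trimmed stand-ins mirroring the documented types (the `Message` helpers are modelled as a plain object here); in application code, import everything from 'mojentic':

```typescript
// Stand-in declarations mirroring the documented types.
enum MessageRole { System = 'system', User = 'user', Assistant = 'assistant', Tool = 'tool' }

interface ToolCall {
  id: string;
  type: 'function';
  function: { name: string; arguments: string };
}

interface LlmMessage {
  role: MessageRole;
  content: string;
  name?: string;
  toolCalls?: ToolCall[];
  toolCallId?: string;
}

// The documented Message helpers, modelled as a plain object for this sketch.
const Message = {
  system: (content: string): LlmMessage => ({ role: MessageRole.System, content }),
  user: (content: string, name?: string): LlmMessage => ({ role: MessageRole.User, content, name }),
  assistant: (content: string, toolCalls?: ToolCall[]): LlmMessage => ({ role: MessageRole.Assistant, content, toolCalls }),
  tool: (content: string, toolCallId: string, name: string): LlmMessage => ({ role: MessageRole.Tool, content, toolCallId, name }),
};

// A full tool round-trip: assistant requests a call, tool result echoes its id.
const call: ToolCall = {
  id: 'call_123',
  type: 'function',
  function: { name: 'calculator', arguments: '{"expr":"6*7"}' },
};

const conversation: LlmMessage[] = [
  Message.system('You are a helpful assistant'),
  Message.user('What is 6 * 7?'),
  Message.assistant('', [call]),
  Message.tool('{"result": 42}', call.id, 'calculator'),
];
```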
## Tool Module

### ToolCall

```typescript
interface ToolCall {
  id: string;
  type: 'function';
  function: {
    name: string;
    arguments: string; // JSON string
  };
}
```

Represents a tool call request from the LLM.
### ToolArgs

```typescript
type ToolArgs = Record<string, any>;
```

Arguments passed to a tool (parsed from JSON).

### ToolResult

```typescript
type ToolResult = Record<string, any>;
```

Result returned by a tool.
### ToolDescriptor

```typescript
interface ToolDescriptor {
  type: 'function';
  function: {
    name: string;
    description: string;
    parameters: Record<string, any>; // JSON Schema
  };
}
```

JSON Schema describing a tool for the LLM.
### LlmTool Interface

```typescript
interface LlmTool {
  run(args: ToolArgs): Promise<Result<ToolResult, Error>>;
  descriptor(): ToolDescriptor;
}
```

Interface for implementing tools.

Methods:

- `run`: Execute the tool with the given arguments
- `descriptor`: Return the tool's schema
### BaseTool Class

```typescript
abstract class BaseTool implements LlmTool {
  abstract run(args: ToolArgs): Promise<Result<ToolResult, Error>>;
  abstract descriptor(): ToolDescriptor;
}
```

Abstract base class for tools.
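A minimal end-to-end sketch of implementing a tool on this shape: validate the arguments, return `Err` on bad input rather than throwing, and describe the parameters with JSON Schema. The type declarations are trimmed stand-ins mirroring the shapes documented above (import the real ones from 'mojentic'), and `AddTool` is a hypothetical example:

```typescript
// Trimmed stand-ins for the documented types, so this sketch is self-contained.
type Result<T, E extends Error> = { ok: true; value: T } | { ok: false; error: E };
const Ok = <T>(value: T) => ({ ok: true as const, value });
const Err = <E extends Error>(error: E) => ({ ok: false as const, error });

type ToolArgs = Record<string, any>;
type ToolResult = Record<string, any>;
interface ToolDescriptor {
  type: 'function';
  function: { name: string; description: string; parameters: Record<string, any> };
}

abstract class BaseTool {
  abstract run(args: ToolArgs): Promise<Result<ToolResult, Error>>;
  abstract descriptor(): ToolDescriptor;
}

// A hypothetical tool that adds two numbers.
class AddTool extends BaseTool {
  async run(args: ToolArgs): Promise<Result<ToolResult, Error>> {
    const { a, b } = args;
    // Arguments arrive parsed from a JSON string, so validate the types.
    if (typeof a !== 'number' || typeof b !== 'number') {
      return Err(new Error('a and b must be numbers'));
    }
    return Ok({ sum: a + b });
  }

  descriptor(): ToolDescriptor {
    return {
      type: 'function',
      function: {
        name: 'add',
        description: 'Add two numbers',
        parameters: {
          type: 'object',
          properties: { a: { type: 'number' }, b: { type: 'number' } },
          required: ['a', 'b'],
        },
      },
    };
  }
}
```

Returning `Err` instead of throwing keeps tool failures inside the `Result` channel, so the caller can decide whether to surface them to the LLM or abort.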
## Configuration

### CompletionConfig

```typescript
interface CompletionConfig {
  temperature?: number;
  maxTokens?: number;
  numPredict?: number;
  topP?: number;
  topK?: number;
  numCtx?: number;
  frequencyPenalty?: number;
  presencePenalty?: number;
  stop?: string[];
  stream?: boolean;
  responseFormat?: {
    type: 'json_object' | 'text';
    schema?: Record<string, unknown>;
  };
  reasoningEffort?: 'low' | 'medium' | 'high';
}
```

Configuration for LLM generation.
Properties:

- `temperature`: Randomness (0.0-2.0, default varies by model)
- `maxTokens`: Maximum tokens to generate (cross-provider compatibility)
- `numPredict`: Ollama-specific maximum tokens (takes precedence over `maxTokens`)
- `topP`: Nucleus sampling threshold (0.0-1.0)
- `topK`: Top-K sampling parameter (limits token selection to the top K choices)
- `numCtx`: Context window size (number of tokens of context)
- `frequencyPenalty`: Penalty for token frequency (reduces repetition)
- `presencePenalty`: Penalty for token presence (encourages diversity)
- `stop`: Array of strings that will stop generation
- `stream`: Whether to stream responses
- `responseFormat`: Structured output configuration for JSON responses
- `reasoningEffort`: Extended thinking effort level ('low', 'medium', 'high')
  - Ollama: Maps to the `think: true` parameter for extended thinking
  - OpenAI: Maps to the `reasoning_effort` API parameter for reasoning models (o1, o3, etc.)
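For instance, a request aiming for deterministic, schema-constrained JSON output with extended thinking might look like this (all values and the schema are arbitrary illustrations, not recommended defaults):

```typescript
// Illustrative CompletionConfig values: low temperature for near-deterministic
// output, a JSON response constrained by a schema, extended thinking enabled.
const config = {
  temperature: 0.2,
  maxTokens: 1024,
  stop: ['\n\n'],
  responseFormat: {
    type: 'json_object' as const,
    schema: {
      type: 'object',
      properties: { answer: { type: 'string' } },
      required: ['answer'],
    },
  },
  reasoningEffort: 'medium' as const,
};
```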
## Gateway Types

### GatewayResponse

```typescript
interface GatewayResponse {
  content: string;
  toolCalls?: ToolCall[];
  finishReason?: string;
  usage?: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
  thinking?: string;
}
```

Response from the LLM gateway.

Properties:

- `content`: Generated text
- `toolCalls`: Tool calls requested by the LLM
- `finishReason`: Why generation stopped
- `usage`: Token usage statistics
- `thinking`: Model's reasoning trace (populated by Ollama when `reasoningEffort` is set)
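A typical consumer first checks `toolCalls`: when present, each call's `arguments` field is a JSON string that must be parsed before dispatching to a tool; otherwise `content` holds the answer. A sketch with trimmed stand-in interfaces mirroring the documented shapes (import the real ones from 'mojentic'), and a hypothetical `handle` helper:

```typescript
// Trimmed stand-ins for the documented shapes.
interface ToolCall {
  id: string;
  type: 'function';
  function: { name: string; arguments: string };
}
interface GatewayResponse {
  content: string;
  toolCalls?: ToolCall[];
  finishReason?: string;
}

// Hypothetical dispatcher: summarize requested tool calls, or return the text.
function handle(res: GatewayResponse): string {
  if (res.toolCalls?.length) {
    return res.toolCalls
      // `arguments` is a JSON string; parse it before use.
      .map(c => `${c.function.name}(${JSON.stringify(JSON.parse(c.function.arguments))})`)
      .join(', ');
  }
  return res.content;
}

const withTools = handle({
  content: '',
  toolCalls: [{ id: 'call_1', type: 'function', function: { name: 'add', arguments: '{"a":1,"b":2}' } }],
  finishReason: 'tool_calls',
});
// withTools === 'add({"a":1,"b":2})'
```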
### StreamChunk

```typescript
interface StreamChunk {
  content: string;
  isComplete: boolean;
  toolCalls?: ToolCall[];
  finishReason?: string;
}
```

A chunk in a streaming response.

Properties:

- `content`: Partial or complete text
- `isComplete`: Whether this is the final chunk
- `toolCalls`: Tool calls (only in the final chunk)
- `finishReason`: Why generation stopped (only in the final chunk)
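Consuming a stream means accumulating `content` from each chunk until `isComplete` is true. A sketch — the `StreamChunk` stand-in is trimmed to the fields used, and `fakeStream` is a fabricated async generator standing in for whatever iterable the gateway actually returns:

```typescript
// Trimmed stand-in for the documented StreamChunk shape.
interface StreamChunk {
  content: string;
  isComplete: boolean;
  finishReason?: string;
}

// Accumulate partial content until the final chunk arrives.
async function collect(stream: AsyncIterable<StreamChunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.content;
    if (chunk.isComplete) break;
  }
  return text;
}

// Fabricated two-chunk stream used only for this example.
async function* fakeStream(): AsyncGenerator<StreamChunk> {
  yield { content: 'Hel', isComplete: false };
  yield { content: 'lo!', isComplete: true, finishReason: 'stop' };
}

collect(fakeStream()).then(text => {
  // text === 'Hello!'
});
```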