Abso

Drop-in replacement for OpenAI


Abso provides a unified interface for calling various LLMs while maintaining full type safety.

Features

  • OpenAI-compatible API 🔄 (drop-in replacement)
  • Call any LLM provider (OpenAI, Anthropic, Groq, Ollama, etc.)
  • Lightweight & fast ⚡
  • Embeddings support 🧮
  • Unified tool calling 🛠️ (see the sketch after this list)
  • Tokenizer and cost calculation (soon) 🔢
  • Smart routing (soon)
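
Since Abso mirrors the OpenAI chat completions API, tool calling presumably follows the standard OpenAI tools format. The sketch below is an assumption based on that compatibility rather than a documented example; the getWeather tool is hypothetical.

import { abso } from "abso-ai"

// Hypothetical tool in the OpenAI function-calling format, which the
// OpenAI-compatible API is assumed to accept unchanged.
const result = await abso.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "getWeather", // hypothetical tool name
        description: "Get the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
})

// If the model chose to call the tool, the calls appear on the message.
console.log(result.choices[0].message.tool_calls)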

Providers

| Provider   | Chat | Streaming | Tool Calling | Embeddings | Tokenizer | Cost Calculation |
| ---------- | ---- | --------- | ------------ | ---------- | --------- | ---------------- |
| OpenAI     | ✅   | ✅        | ✅           | ✅         | 🚧        | 🚧               |
| Anthropic  | ✅   | ✅        | ✅           | ❌         | 🚧        | 🚧               |
| xAI Grok   | ✅   | ✅        | ✅           | ❌         | 🚧        | 🚧               |
| Mistral    | ✅   | ✅        | ✅           | ❌         | 🚧        | 🚧               |
| Groq       | ✅   | ✅        | ✅           | ❌         | ❌        | 🚧               |
| Ollama     | ✅   | ✅        | ✅           | ❌         | ❌        | 🚧               |
| OpenRouter | ✅   | ✅        | ✅           | ❌         | ❌        | 🚧               |
| Voyage     | ❌   | ❌        | ❌           | ✅         | ❌        | ❌               |
| Azure      | 🚧   | 🚧        | 🚧           | 🚧         | ❌        | 🚧               |
| Bedrock    | 🚧   | 🚧        | 🚧           | 🚧         | ❌        | 🚧               |
| Gemini     | ✅   | ✅        | ✅           | ❌         | 🚧        | ❌               |
| DeepSeek   | ✅   | ✅        | ✅           | ❌         | 🚧        | ❌               |
| Perplexity | ✅   | ✅        | ❌           | ❌         | 🚧        | ❌               |

Installation

npm install abso-ai

Usage

import { abso } from "abso-ai"

const result = await abso.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-4o",
})

console.log(result.choices[0].message.content)

Manually selecting a provider

Abso tries to infer the correct provider for a given model, but you can also manually select a provider.

const result = await abso.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "openai/gpt-4o",
  provider: "openrouter",
})

console.log(result.choices[0].message.content)

Streaming

const stream = await abso.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-4o",
  stream: true,
})

for await (const chunk of stream) {
  console.log(chunk)
}

// Helper to get the final result
const fullResult = await stream.finalChatCompletion()

console.log(fullResult)

Embeddings

const embeddings = await abso.embeddings.create({
  model: "text-embedding-3-small",
  input: ["A cat was playing with a ball on the floor"],
})

console.log(embeddings.data[0].embedding)

Tokenizers (soon)

const tokens = await abso.chat.tokenize({
  messages: [{ role: "user", content: "Hello, world!" }],
  model: "gpt-4o",
})

console.log(`${tokens.count} tokens`)

Custom Providers

You can configure built-in providers directly by passing a configuration object, keyed by provider name, when instantiating Abso:

import { Abso } from "abso-ai"

const abso = new Abso({
  openai: { apiKey: "your-openai-key" },
  anthropic: { apiKey: "your-anthropic-key" },
  // add other providers as needed
})

const result = await abso.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
})

console.log(result.choices[0].message.content)

Alternatively, you can change which providers are loaded by passing a custom providers array to the constructor.
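
The exact shape of that array isn't documented here; as a rough sketch, assuming the constructor accepts a providers option and the package exports provider classes such as OpenAIProvider (both assumptions), it might look like:

import { Abso, OpenAIProvider } from "abso-ai"

// Assumed API: a `providers` array restricting which providers are loaded.
// `OpenAIProvider` and its constructor options are assumptions, not
// confirmed exports of the package.
const abso = new Abso({
  providers: [new OpenAIProvider({ apiKey: "your-openai-key" })],
})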

Observability

You can use Abso with Lunary to get instant observability into your LLM usage.

First, sign up to Lunary and get your public key.

Then set the LUNARY_PUBLIC_KEY environment variable to your public key to enable observability.
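
For example, in a shell (the value is a placeholder for your own key):

export LUNARY_PUBLIC_KEY="your-lunary-public-key"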

Contributing

See our Contributing Guide.

Roadmap

  • More providers
  • Built-in caching
  • Tokenizers
  • Cost calculation
  • Smart routing