Context Module

Token counting and context management for LLMs

Available Tools

  • ctx_context - Track token usage (total/used/left)
  • ctx_compact - Compress text using zlib/gzip algorithms
  • ctx_remove - Clear context and reset usage
  • ctx_token_count - Count tokens for various LLM providers
  • ctx_memory_store - Store data in memory (persists for the process lifetime)
  • ctx_memory_recall - Retrieve stored data
  • ctx_estimate_cost - Estimate API costs for various providers

Supported LLM Providers

  • GPT-4, GPT-3.5 (OpenAI)
  • Claude 3 (Anthropic)
  • Gemini (Google)
  • LLaMA (Meta)
  • Ollama (Local models)
  • GLM (ChatGLM)

Example: Count Tokens

{
  "name": "ctx_token_count",
  "arguments": {
    "text": "Your text here",
    "model": "gpt-4"
  }
}
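
How the module tokenizes is not spelled out here; as a point of reference, GPT-4 token counts can be reproduced with OpenAI's tiktoken library. A minimal Python sketch (the tool may use a different tokenizer backend):

import tiktoken

def count_tokens(text: str, model: str = "gpt-4") -> int:
    # Look up the byte-pair encoding associated with the model name.
    encoding = tiktoken.encoding_for_model(model)
    # The token count is the length of the encoded sequence.
    return len(encoding.encode(text))

print(count_tokens("Your text here"))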

Example: Estimate API Cost

{
  "name": "ctx_estimate_cost",
  "arguments": {
    "provider": "anthropic",
    "model": "claude-3-opus",
    "input_tokens": 1000,
    "output_tokens": 500
  }
}

Returns the estimated cost of the call, computed from the supplied input/output token counts and the provider's pricing model.
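
The arithmetic is simple: each token count is multiplied by the provider's per-token rate and the two products are summed. A sketch with illustrative prices (the rates below are assumptions, not the module's bundled pricing table):

# Illustrative USD rates per 1M tokens; real provider prices differ and change over time.
PRICING = {
    ("anthropic", "claude-3-opus"): {"input": 15.00, "output": 75.00},
}

def estimate_cost(provider: str, model: str, input_tokens: int, output_tokens: int) -> float:
    rates = PRICING[(provider, model)]
    return (input_tokens / 1_000_000) * rates["input"] \
         + (output_tokens / 1_000_000) * rates["output"]

# 1000 input + 500 output tokens at the rates above: 0.015 + 0.0375 = 0.0525 USD
print(estimate_cost("anthropic", "claude-3-opus", 1000, 500))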

Example: Compress Text

{
  "name": "ctx_compact",
  "arguments": {
    "text": "Long text to compress...",
    "algorithm": "gzip"
  }
}
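
For orientation, gzip compression of a string in Python looks roughly like the following; how ctx_compact encodes the compressed bytes in its response (base64 below) is an assumption:

import base64
import gzip

def compact(text: str) -> str:
    # Compress the UTF-8 bytes with gzip, then base64-encode so the result
    # remains plain text (the encoding of the returned payload is assumed).
    compressed = gzip.compress(text.encode("utf-8"))
    return base64.b64encode(compressed).decode("ascii")

print(compact("Long text to compress..."))

Note that gzip only pays off on longer inputs; very short strings can come out larger than they went in.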

Example: Store and Recall Data

Store data

{
  "name": "ctx_memory_store",
  "arguments": {
    "key": "user_preferences",
    "value": "{\"theme\": \"dark\"}"
  }
}

Recall data

{
  "name": "ctx_memory_recall",
  "arguments": {
    "key": "user_preferences"
  }
}
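
Conceptually, the memory tools behave like a process-local dictionary: values persist between calls but disappear when the process exits. A minimal sketch of that behavior (not the module's actual implementation):

# Process-local store: contents survive between calls, not across restarts.
_MEMORY: dict[str, str] = {}

def memory_store(key: str, value: str) -> None:
    _MEMORY[key] = value

def memory_recall(key: str) -> str | None:
    # Returns None for keys that were never stored (behavior assumed).
    return _MEMORY.get(key)

memory_store("user_preferences", '{"theme": "dark"}')
print(memory_recall("user_preferences"))  # {"theme": "dark"}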

Example: Track Context Usage

{
  "name": "ctx_context",
  "arguments": {
    "action": "status"
  }
}
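
The status report is essentially bookkeeping against a fixed token budget: a total, the tokens used so far, and what is left. A rough sketch of that accounting (the window size and exact field names are assumptions):

from dataclasses import dataclass

@dataclass
class ContextStatus:
    total: int  # context window budget, e.g. 128_000 tokens (illustrative)
    used: int   # tokens consumed so far

    @property
    def left(self) -> int:
        return self.total - self.used

status = ContextStatus(total=128_000, used=42_000)
print(status.total, status.used, status.left)  # 128000 42000 86000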

Example: Clear Context

{
  "name": "ctx_remove",
  "arguments": {
    "scope": "all"
  }
}