Custom Integration

Build Custom Chatbots with Legible's Chunk & Retrieval API

Leverage Legible's AI Export API to power your custom chatbots, internal assistants, or RAG applications. Get structured documents and chunks without building an ingestion pipeline.

7 min read · Updated 2026-03-21 · Chatbots & RAG

Why This Matters

Legible is a strong fit when you want to own the assistant experience but not the whole content-preparation stack. The API gives you structured documents and chunks ready for retrieval workflows.

The Simplest Mental Model

Think of Legible as the managed content layer that sits before your model. Your app handles the chat UI, orchestration, and model choice. Legible handles content syncing, cleaning, chunking, and the export surface you query.

Basic Architecture

Website content
-> Legible sync + cleanup
-> document/chunk export API
-> your retrieval or indexing layer
-> your chatbot backend
-> your UI or support channel
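The "retrieval or indexing layer" step above can be sketched with a toy inverted index. The document shape (`id`, `title`, `markdown`) is an illustrative assumption standing in for an export API response, not Legible's documented schema:

```python
# Minimal sketch of the "retrieval or indexing layer" step.
# The document fields below are assumptions for illustration,
# not Legible's documented schema.
from collections import defaultdict

def build_index(documents):
    """Map each lowercase token to the ids of documents containing it."""
    index = defaultdict(set)
    for doc in documents:
        for token in doc["markdown"].lower().split():
            index[token].add(doc["id"])
    return index

def search(index, query):
    """Return ids of documents matching every query token."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    results = index.get(tokens[0], set()).copy()
    for token in tokens[1:]:
        results &= index.get(token, set())
    return results

# Example payload standing in for an export response.
docs = [
    {"id": "doc-1", "title": "Billing", "markdown": "How invoices and refunds work"},
    {"id": "doc-2", "title": "Setup", "markdown": "How to install the widget"},
]
index = build_index(docs)
print(search(index, "refunds"))  # {'doc-1'}
```

In production you would swap the keyword index for a vector store, but the data flow (export, index, query) stays the same.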

Two Common Patterns

  • Pull full documents and index them in your own vector store.
  • Pull Legible chunks directly and use them as the retrieval-ready unit in your app.
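The second pattern can be sketched as scoring Legible chunks directly at query time, skipping your own chunking step entirely. The chunk fields (`chunk_id`, `document_id`, `text`) are illustrative assumptions, not a documented schema:

```python
# Sketch of pattern 2: exported chunks as the retrieval-ready unit.
# Chunk field names are assumptions for illustration only.
def top_chunks(chunks, query, k=2):
    """Rank chunks by how many distinct query terms they contain."""
    terms = set(query.lower().split())
    scored = []
    for chunk in chunks:
        overlap = len(terms & set(chunk["text"].lower().split()))
        if overlap:
            scored.append((overlap, chunk["chunk_id"]))
    scored.sort(reverse=True)
    return [chunk_id for _, chunk_id in scored[:k]]

chunks = [
    {"chunk_id": "c1", "document_id": "d1", "text": "reset your password from settings"},
    {"chunk_id": "c2", "document_id": "d1", "text": "contact support for billing help"},
    {"chunk_id": "c3", "document_id": "d2", "text": "password rules and security"},
]
print(top_chunks(chunks, "password reset"))  # ['c1', 'c3']
```

A real app would embed the chunk text and rank by vector similarity; the point is that the chunk arrives already sized and cleaned for retrieval.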

When To Use Which Pattern

  • Use full documents when you want to control chunking and indexing yourself.
  • Use Legible chunks when you want faster integration and a content format already prepared for RAG.
  • Use `updated_since` and hashes for incremental sync instead of rebuilding everything from scratch.
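The incremental-sync point above boils down to a hash diff: compare the content hashes from a fresh export against what you indexed last time, and re-index only what changed. The field names here are illustrative assumptions about the export payload:

```python
# Sketch of hash-based delta detection for incremental sync.
# Field names ("chunk_id", "content_hash") are assumptions.
def diff_chunks(local_hashes, remote_chunks):
    """Return (changed_or_new_chunks, deleted_ids) from a hash comparison."""
    remote_ids = {c["chunk_id"] for c in remote_chunks}
    changed = [
        c for c in remote_chunks
        if local_hashes.get(c["chunk_id"]) != c["content_hash"]
    ]
    deleted = [cid for cid in local_hashes if cid not in remote_ids]
    return changed, deleted

# local_hashes: what your index saw last sync; remote: a fresh export.
local_hashes = {"c1": "aaa", "c2": "bbb"}
remote = [
    {"chunk_id": "c1", "content_hash": "aaa"},  # unchanged
    {"chunk_id": "c2", "content_hash": "ccc"},  # updated
    {"chunk_id": "c3", "content_hash": "ddd"},  # new
]
changed, deleted = diff_chunks(local_hashes, remote)
print([c["chunk_id"] for c in changed], deleted)  # ['c2', 'c3'] []
```

Filtering the export with `updated_since` first keeps the remote list small; the hash diff then catches exactly which chunks need re-embedding.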

What Legible Saves You From Building

  • HTML cleanup and content extraction.
  • Markdown normalization.
  • Heading-aware chunking.
  • Content hashing and delta detection.
  • A stable API surface over changing website content.
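To make one item on the list above concrete: heading-aware chunking means splitting on structure rather than fixed character counts, so each chunk keeps the heading that gives it context. A toy version of the idea (not Legible's actual implementation):

```python
# Toy heading-aware splitter: every markdown heading starts a new
# chunk, and each chunk keeps its heading line for context.
# This only illustrates the idea; it is not Legible's algorithm.
def split_by_headings(markdown):
    chunks, current = [], []
    for line in markdown.splitlines():
        if line.startswith("#") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

doc = "# Billing\nInvoices monthly.\n# Refunds\nWithin 30 days."
print(split_by_headings(doc))
```

Even this toy version needs edge-case handling (code fences containing `#`, nested headings, oversized sections), which is the kind of work the managed pipeline absorbs.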