Legible's FAQ tool lets you add question-and-answer content directly in the product. That content becomes part of the AI knowledge layer even if the FAQ is not yet published on your public site.
This is useful when you want LLMs and chat assistants to understand key answers, policies, or support explanations that are not easy to surface in your CMS right away.
What The FAQ Tool Does
Each FAQ item is stored as a structured content item in Legible. The question becomes the title, the answer becomes the body content, and the item is prepared for retrieval like other AI-ready content in the system.
That means FAQ items are not just display text in a settings screen. They become part of the same content layer Legible uses for chat, retrieval, and AI-facing delivery.
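The mapping described above can be sketched as a small data structure. This is an illustrative model only, assuming a simple title/body schema; the field names and `faq_to_content_item` helper are hypothetical, not Legible's actual API.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    title: str   # the FAQ question becomes the title
    body: str    # the FAQ answer becomes the body content
    source: str  # origin marker, e.g. "faq_tool" (illustrative)

def faq_to_content_item(question: str, answer: str) -> ContentItem:
    """Store a Q&A pair as a structured content item."""
    return ContentItem(title=question, body=answer, source="faq_tool")

item = faq_to_content_item(
    "What is your refund policy?",
    "Refunds are available within 30 days of purchase.",
)
print(item.title)  # → What is your refund policy?
```

Because the FAQ lands in the same content layer as everything else, it is retrieved the same way a page or document would be.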
Why This Matters
- You can add high-priority answers without waiting for a full website update.
- Content Chat can use those answers immediately.
- Integrated assistants such as Intercom, Zendesk, or custom chatbots can retrieve them through the same chunked content layer.
- Legible can expose them as AI-readable content items, which means important answers can be available to LLM-oriented discovery even if the content is not yet on the public site.
How FAQs Flow Through Legible
- The question-and-answer pair is converted into body content suitable for chunking.
- The content becomes retrievable like the rest of your Legible knowledge layer.
- This makes FAQs useful for both chatbot quality and AI discoverability.
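The chunking step above can be sketched with a minimal word-boundary splitter. This is a simplified stand-in, assuming fixed-size character chunks; Legible's actual chunking strategy is not documented here and the `chunk_body` function and its 200-character limit are illustrative.

```python
def chunk_body(body: str, max_chars: int = 200) -> list[str]:
    """Split body text into chunks of at most max_chars, on word boundaries."""
    words, chunks, current = body.split(), [], ""
    for word in words:
        candidate = f"{current} {word}".strip()
        if len(candidate) > max_chars and current:
            chunks.append(current)  # flush the full chunk
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

# A long FAQ answer becomes several retrievable chunks
answer = "Refunds are available within 30 days of purchase. " * 10
chunks = chunk_body(answer)
print(all(len(c) <= 200 for c in chunks))  # → True
```

Short FAQ answers typically fit in a single chunk, which is part of why a direct FAQ answer retrieves more cleanly than a long page.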
Create FAQ in Legible
-> Legible stores it as a content item
-> Legible chunks it for retrieval
-> Content Chat and integrated assistants can use it
-> Legible can include it in AI-facing content indexes and llms.txt visibility
When To Use The FAQ Tool
- When support teams keep answering the same question and want it in the AI layer immediately.
- When the answer should be available to LLMs even before the CMS or website catches up.
- When you want to improve chatbot answer quality without editing multiple site templates.
- When a short, direct answer is more useful than asking the model to infer it from a long page.
What You Skip By Using Legible
- No CMS deployment required for every new FAQ.
- No separate chunking or indexing workflow to maintain.
- The same FAQ can strengthen Content Chat, external chatbots, and AI-facing discovery.
- Teams can manage FAQ order and content in one place rather than across multiple systems.
