# LLMs Integration
Generate AI-ready content files for LLM consumption using the Nuxt LLMs module.
## Overview
EletroDS uses the `nuxt-llms` module to generate optimized content files that can be consumed by Large Language Models (LLMs). These files provide structured documentation in formats that are easy for AI models to parse and understand.
## Available Endpoints
The following endpoints are automatically generated and available:
| Endpoint | Description |
|---|---|
| `/llms.txt` | A compact text representation of all documentation |
| `/llms-full.txt` | Complete documentation content in text format |
Production:

- https://eletro.design/llms.txt
- https://eletro.design/llms-full.txt

Local development:

- http://localhost:3000/llms.txt
- http://localhost:3000/llms-full.txt
## Configuration
The LLMs module is configured in `nuxt.config.ts`:
```ts [nuxt.config.ts]
export default defineNuxtConfig({
  modules: ['nuxt-llms'],
  llms: {
    domain: 'https://eletro.design/',
    title: 'EletroDS - Design System',
    description: 'A complete design system for creating cohesive, accessible, and scalable experiences within Mercado Eletrônico.',
    full: {
      title: 'EletroDS - Full Documentation',
      description: 'Complete documentation for EletroDS Vue 3, the modern design system for Mercado Eletrônico.'
    },
    sections: [
      {
        title: 'Getting Started',
        contentCollection: 'docs',
        contentFilters: [
          { field: 'path', operator: 'LIKE', value: '/getting-started%' }
        ]
      },
      {
        title: 'Components',
        contentCollection: 'docs',
        contentFilters: [
          { field: 'path', operator: 'LIKE', value: '/components%' }
        ]
      }
    ]
  }
})
```
## Use Cases

### Custom AI Integrations
You can use these endpoints to build custom AI integrations:
```ts
// Fetch documentation for AI processing
const response = await fetch('https://eletro.design/llms.txt')
const documentation = await response.text()

// Use with your AI model
const aiResponse = await yourAIModel.generate({
  context: documentation,
  prompt: 'How do I use the MeButton component?'
})
```
### RAG (Retrieval-Augmented Generation)
The LLMs files can be used as a knowledge base for RAG implementations:
- Download the `/llms-full.txt` content
- Split into chunks for vector embedding
- Store in a vector database
- Query relevant chunks based on user questions
- Provide context to your LLM for accurate responses
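The chunking step above can be sketched as follows. Note that `chunkText`, the 500-character chunk size, and the 50-character overlap are illustrative choices for this example, not part of EletroDS or the `nuxt-llms` module:

```typescript
// Split documentation text into fixed-size, overlapping chunks suitable
// for vector embedding. Overlap helps keep sentences that straddle a
// chunk boundary retrievable from both sides.
function chunkText(text: string, chunkSize = 500, overlap = 50): string[] {
  const chunks: string[] = []
  let start = 0
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize))
    if (start + chunkSize >= text.length) break
    start += chunkSize - overlap
  }
  return chunks
}

// Usage sketch: chunk the downloaded documentation before embedding.
const documentation = await (await fetch('https://eletro.design/llms-full.txt')).text()
const chunks = chunkText(documentation)
```

Each chunk would then be embedded and stored in your vector database of choice; the retrieval step queries that store with the embedded user question.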
## MCP vs LLMs
| Feature | MCP Server | LLMs Files |
|---|---|---|
| Real-time access | ✅ Yes | ❌ No (static files) |
| IDE integration | ✅ Native support | ❌ Requires custom setup |
| Programmatic access | ✅ Via MCP protocol | ✅ Via HTTP |
| Offline usage | ❌ Requires connection | ✅ Can be cached |
| Best for | IDE assistants | Custom AI integrations |
**Recommendation:** Use the MCP Server for IDE integrations (Cursor, VS Code, Claude) and LLMs files for custom AI applications or offline scenarios.
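As one example of the offline scenario, a consumer could cache `llms-full.txt` locally and refetch only when the copy goes stale. The `isCacheFresh` helper, cache path, and one-hour max age below are hypothetical choices for this sketch, not part of the module:

```typescript
import * as fs from 'node:fs'

// Decide whether a locally cached copy is still fresh.
// Returns false when the file is missing or older than maxAgeMs.
function isCacheFresh(path: string, maxAgeMs: number, now = Date.now()): boolean {
  if (!fs.existsSync(path)) return false
  const mtimeMs = fs.statSync(path).mtimeMs
  return now - mtimeMs < maxAgeMs
}

// Usage sketch: read from the cache when fresh, otherwise refetch.
const CACHE_PATH = './llms-full.txt'
const ONE_HOUR = 60 * 60 * 1000

async function getDocs(): Promise<string> {
  if (isCacheFresh(CACHE_PATH, ONE_HOUR)) {
    return fs.readFileSync(CACHE_PATH, 'utf8')
  }
  const text = await (await fetch('https://eletro.design/llms-full.txt')).text()
  fs.writeFileSync(CACHE_PATH, text)
  return text
}
```

This keeps the documentation usable without a connection once it has been fetched, which is the main advantage the table above attributes to the static LLMs files.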