LLMs Integration

Generate AI-ready content files for LLM consumption using the Nuxt LLMs module.

Overview

EletroDS uses the `nuxt-llms` module to generate optimized content files that can be consumed by Large Language Models (LLMs). These files provide structured documentation in formats that are easy for AI models to parse and understand.

Available Endpoints

The following endpoints are automatically generated and available:

| Endpoint | Description | Live URL |
| --- | --- | --- |
| `/llms.txt` | A compact text representation of all documentation | https://eletro.design/llms.txt |
| `/llms-full.txt` | Complete documentation content in text format | https://eletro.design/llms-full.txt |
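To confirm both files are being served, a quick check from Node 18+ (which ships a global `fetch`) might look like this sketch; run it as an ES module:

```ts
// Sanity-check that both generated files are reachable
for (const path of ['/llms.txt', '/llms-full.txt']) {
  const res = await fetch(`https://eletro.design${path}`)
  console.log(path, res.status, res.headers.get('content-type'))
}
```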

Configuration

The LLMs module is configured in `nuxt.config.ts`:

```ts [nuxt.config.ts]
export default defineNuxtConfig({
  modules: ['nuxt-llms'],

  llms: {
    domain: 'https://eletro.design/',
    title: 'EletroDS - Design System',
    description: 'A complete design system for creating cohesive, accessible, and scalable experiences within Mercado Eletrônico.',
    full: {
      title: 'EletroDS - Full Documentation',
      description: 'Complete documentation for EletroDS Vue 3, the modern design system for Mercado Eletrônico.'
    },
    sections: [
      {
        title: 'Getting Started',
        contentCollection: 'docs',
        contentFilters: [
          { field: 'path', operator: 'LIKE', value: '/getting-started%' }
        ]
      },
      {
        title: 'Components',
        contentCollection: 'docs',
        contentFilters: [
          { field: 'path', operator: 'LIKE', value: '/components%' }
        ]
      }
    ]
  }
})
```
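The `contentFilters` use SQL-style operators, so `LIKE` with a trailing `%` matches a path prefix. To expose another area of the docs, you would append one more entry to `sections`; the `/tokens%` path below is a hypothetical example, not a real collection path:

```ts
// Hypothetical extra section; '/tokens%' is an illustrative path
{
  title: 'Design Tokens',
  contentCollection: 'docs',
  contentFilters: [
    { field: 'path', operator: 'LIKE', value: '/tokens%' }
  ]
}
```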

Use Cases

Custom AI Integrations

You can use these endpoints to build custom AI integrations:

```ts
// Fetch the compact documentation file for AI processing
const response = await fetch('https://eletro.design/llms.txt')
if (!response.ok) {
  throw new Error(`Failed to fetch docs: ${response.status}`)
}
const documentation = await response.text()

// Pass it as context to your AI model (`yourAIModel` is a placeholder)
const aiResponse = await yourAIModel.generate({
  context: documentation,
  prompt: 'How do I use the MeButton component?'
})
```
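As one concrete stand-in for the `yourAIModel` placeholder, here is a sketch using the OpenAI Node SDK; the model name is only an example, and any chat-completion client would slot in the same way:

```ts
import OpenAI from 'openai'

const client = new OpenAI() // reads OPENAI_API_KEY from the environment

const docs = await (await fetch('https://eletro.design/llms.txt')).text()

const completion = await client.chat.completions.create({
  model: 'gpt-4o-mini', // example model name
  messages: [
    { role: 'system', content: `Answer questions about EletroDS using this documentation:\n\n${docs}` },
    { role: 'user', content: 'How do I use the MeButton component?' }
  ]
})

console.log(completion.choices[0].message.content)
```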

RAG (Retrieval-Augmented Generation)

The LLMs files can be used as a knowledge base for RAG implementations:

1. Download the `/llms-full.txt` content
2. Split it into chunks for vector embedding
3. Store the embeddings in a vector database
4. Retrieve the chunks most relevant to each user question
5. Provide that context to your LLM for accurate responses
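
A minimal in-memory sketch of that flow follows. The `embed` helper is a toy character-frequency vector standing in for a real embedding model, and a plain array stands in for the vector database:

```ts
// Toy embedding: replace with your provider's embedding API
async function embed(text: string): Promise<number[]> {
  const v = new Array(128).fill(0)
  for (const ch of text) v[ch.charCodeAt(0) % 128]++
  return v
}

// Cosine similarity between two vectors
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

// 1. Download the full documentation
const full = await (await fetch('https://eletro.design/llms-full.txt')).text()

// 2. Split into fixed-size chunks (a real pipeline would split on headings)
const chunks: string[] = []
for (let i = 0; i < full.length; i += 1500) {
  chunks.push(full.slice(i, i + 1500))
}

// 3. Embed and store in memory (stand-in for a vector database)
const index = await Promise.all(
  chunks.map(async text => ({ text, vector: await embed(text) }))
)

// 4. Retrieve the chunks most similar to the user's question
const question = 'How do I use the MeButton component?'
const qVector = await embed(question)
const topChunks = index
  .map(entry => ({ ...entry, score: cosine(qVector, entry.vector) }))
  .sort((a, b) => b.score - a.score)
  .slice(0, 3)

// 5. Join the top chunks into the context you pass to your LLM
const context = topChunks.map(c => c.text).join('\n---\n')
```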

MCP vs LLMs

| Feature | MCP Server | LLMs Files |
| --- | --- | --- |
| Real-time access | ✅ Yes | ❌ No (static files) |
| IDE integration | ✅ Native support | ❌ Requires custom setup |
| Programmatic access | ✅ Via MCP protocol | ✅ Via HTTP |
| Offline usage | ❌ Requires connection | ✅ Can be cached |
| Best for | IDE assistants | Custom AI integrations |

**Recommendation:** Use the MCP Server for IDE integrations (Cursor, VS Code, Claude) and the LLMs files for custom AI applications or offline scenarios.

Next Steps

- **MCP Server**: Set up the MCP server for real-time AI integration in your IDE.
- **Getting Started**: Learn how to get started with EletroDS.