Agent Skills: LangChain4j RAG Implementation Patterns

Provides Retrieval-Augmented Generation (RAG) implementation patterns with LangChain4j for Java. Generates document ingestion pipelines, embedding stores, vector search, and semantic search capabilities. Use when building chat-with-documents systems, document Q&A over PDFs or text files, AI assistants with knowledge bases, semantic search over document repositories, or knowledge-enhanced AI applications with source attribution.

ID: giuseppe-trisciuoglio/developer-kit/langchain4j-rag-implementation-patterns

Install this agent skill locally:

pnpm dlx add-skill https://github.com/giuseppe-trisciuoglio/developer-kit/tree/HEAD/plugins/developer-kit-java/skills/langchain4j-rag-implementation-patterns

Skill Files

Browse the full folder contents for langchain4j-rag-implementation-patterns.


plugins/developer-kit-java/skills/langchain4j-rag-implementation-patterns/SKILL.md

Skill Metadata

Name
langchain4j-rag-implementation-patterns

LangChain4j RAG Implementation Patterns

Overview

Implements RAG systems with LangChain4j: document ingestion pipelines, embedding stores, and vector search for chat-with-documents and knowledge-enhanced AI applications.

When to Use This Skill

  • Building chat-with-documents systems or document Q&A over PDFs, text files, or web pages
  • Creating AI assistants with access to company knowledge bases or external sources
  • Implementing semantic search or hybrid search over document repositories
  • Building domain-specific AI with curated knowledge and source attribution

Instructions

Initialize RAG Project

Create a new Spring Boot project with required dependencies:

pom.xml:

<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-spring-boot-starter</artifactId>
    <version>1.8.0</version>
</dependency>
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai</artifactId>
    <version>1.8.0</version>
</dependency>

Setup Document Ingestion

Configure document loading and processing with validation:

Validation Checkpoint: After ingestion, verify embedding count matches segment count and test retrieval with a sample query.

@Configuration
public class RAGConfiguration {

    @Bean
    public EmbeddingModel embeddingModel() {
        return OpenAiEmbeddingModel.builder()
            .apiKey(System.getenv("OPENAI_API_KEY"))
            .modelName("text-embedding-3-small")
            .build();
    }

    @Bean
    public EmbeddingStore<TextSegment> embeddingStore() {
        return new InMemoryEmbeddingStore<>();
    }
}

Create document ingestion service:

@Service
@RequiredArgsConstructor
public class DocumentIngestionService {

    private final EmbeddingModel embeddingModel;
    private final EmbeddingStore<TextSegment> embeddingStore;

    public void ingestDocument(String filePath, Map<String, Object> metadata) {
        Document document = FileSystemDocumentLoader.loadDocument(filePath);
        document.metadata().putAll(metadata);

        DocumentSplitter splitter = DocumentSplitters.recursive(
            500, 50, new OpenAiTokenCountEstimator("text-embedding-3-small")
        );

        List<TextSegment> segments = splitter.split(document);
        List<Embedding> embeddings = embeddingModel.embedAll(segments).content();

        // Validation: verify embedding count matches segments before storing
        if (embeddings.size() != segments.size()) {
            throw new IllegalStateException("Embedding count mismatch: expected "
                + segments.size() + ", got " + embeddings.size());
        }

        embeddingStore.addAll(embeddings, segments);
    }

    public boolean validateIngestion(String testQuery) {
        // Validation: test retrieval with sample query
        Embedding queryEmbedding = embeddingModel.embed(testQuery).content();
        List<EmbeddingMatch<TextSegment>> results = embeddingStore.search(
            EmbeddingSearchRequest.builder()
                .queryEmbedding(queryEmbedding)
                .maxResults(1)
                .build()
        ).matches();
        return !results.isEmpty();
    }
}

Configure Content Retrieval

Set up content retrieval with filtering:

Validation Checkpoint: After configuration, test retrieval with a known query to verify embeddings are searchable.

@Configuration
public class ContentRetrieverConfiguration {

    @Bean
    public ContentRetriever contentRetriever(
            EmbeddingStore<TextSegment> embeddingStore,
            EmbeddingModel embeddingModel) {

        return EmbeddingStoreContentRetriever.builder()
            .embeddingStore(embeddingStore)
            .embeddingModel(embeddingModel)
            .maxResults(5)
            .minScore(0.7)
            .build();
    }
}
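The retriever above returns matches from all ingested documents. When segments carry metadata (for example a `userId` entry), a metadata filter can restrict results per request. A sketch, assuming LangChain4j's `MetadataFilterBuilder.metadataKey` and the builder's `dynamicFilter` option; `currentUserId()` is a hypothetical helper for resolving the caller:

```java
// Sketch: per-user filtering with a dynamic metadata filter.
// Assumes segments were ingested with a "userId" metadata entry.
@Bean
public ContentRetriever filteredContentRetriever(
        EmbeddingStore<TextSegment> embeddingStore,
        EmbeddingModel embeddingModel) {

    return EmbeddingStoreContentRetriever.builder()
        .embeddingStore(embeddingStore)
        .embeddingModel(embeddingModel)
        .maxResults(5)
        .minScore(0.7)
        // Evaluated per query: only the current user's segments are searched
        .dynamicFilter(query -> metadataKey("userId").isEqualTo(currentUserId()))
        .build();
}
```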

Create RAG-Enabled AI Service

Define AI service with context retrieval:

interface KnowledgeAssistant {
    @SystemMessage("""
        You are a knowledgeable assistant with access to a comprehensive knowledge base.

        When answering questions:
        1. Use the provided context from the knowledge base
        2. If information is not in the context, clearly state this
        3. Provide accurate, helpful responses
        4. When possible, reference specific sources
        5. If the context is insufficient, ask for clarification
        """)
    String answerQuestion(String question);
}

@Service
public class KnowledgeService {

    private final KnowledgeAssistant assistant;

    public KnowledgeService(ChatModel chatModel, ContentRetriever contentRetriever) {
        this.assistant = AiServices.builder(KnowledgeAssistant.class)
            .chatModel(chatModel)
            .contentRetriever(contentRetriever)
            .build();
    }

    public String answerQuestion(String question) {
        return assistant.answerQuestion(question);
    }
}

Examples

Basic Document Processing

public class BasicRAGExample {
    public static void main(String[] args) {
        var embeddingStore = new InMemoryEmbeddingStore<TextSegment>();

        var embeddingModel = OpenAiEmbeddingModel.builder()
            .apiKey(System.getenv("OPENAI_API_KEY"))
            .modelName("text-embedding-3-small")
            .build();

        var ingestor = EmbeddingStoreIngestor.builder()
            .embeddingModel(embeddingModel)
            .embeddingStore(embeddingStore)
            .build();

        ingestor.ingest(Document.from("Spring Boot is a framework for building Java applications with minimal configuration."));

        // The retriever can now be wired into an AI service via AiServices
        var retriever = EmbeddingStoreContentRetriever.builder()
            .embeddingStore(embeddingStore)
            .embeddingModel(embeddingModel)
            .build();
}

Multi-Domain Assistant

interface MultiDomainAssistant {
    @SystemMessage("""
        You are an expert assistant with access to multiple knowledge domains:
        - Technical documentation
        - Company policies
        - Product information
        - Customer support guides

        Tailor your response based on the type of question and available context.
        Always indicate which domain the information comes from.
        """)
    String answerQuestion(@MemoryId String userId, @UserMessage String question);
}

Hierarchical RAG

@Service
@RequiredArgsConstructor
public class HierarchicalRAGService {

    private final EmbeddingStore<TextSegment> chunkStore;
    private final EmbeddingStore<TextSegment> summaryStore;
    private final EmbeddingModel embeddingModel;

    public String performHierarchicalRetrieval(String query) {
        List<EmbeddingMatch<TextSegment>> summaryMatches = searchSummaries(query);
        List<TextSegment> relevantChunks = new ArrayList<>();

        for (EmbeddingMatch<TextSegment> summaryMatch : summaryMatches) {
            String documentId = summaryMatch.embedded().metadata().getString("documentId");
            List<EmbeddingMatch<TextSegment>> chunkMatches = searchChunksInDocument(query, documentId);
            chunkMatches.stream()
                .map(EmbeddingMatch::embedded)
                .forEach(relevantChunks::add);
        }

        return generateResponseWithChunks(query, relevantChunks);
    }
}
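The helper methods above (`searchSummaries`, `searchChunksInDocument`, `generateResponseWithChunks`) are left as assumptions. Independent of LangChain4j, the two-stage control flow can be sketched with the store lookups abstracted as functions; all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Function;

// Sketch of the two-stage (summary -> chunk) retrieval flow.
// Stage 1 maps a query to document ids whose summaries match;
// stage 2 searches chunks only within those documents.
public class TwoStageRetrieval {

    public static List<String> retrieve(
            String query,
            Function<String, List<String>> summarySearch,
            BiFunction<String, String, List<String>> chunkSearch) {

        List<String> chunks = new ArrayList<>();
        for (String documentId : summarySearch.apply(query)) {
            chunks.addAll(chunkSearch.apply(query, documentId));
        }
        return chunks;
    }
}
```

In the service above, `summarySearch` would query the summary store and `chunkSearch` would query the chunk store with a `documentId` metadata filter.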

Best Practices

Document Segmentation

  • Use recursive splitting with 500-1000 token chunks for most applications
  • Maintain 20-50 token overlap between chunks for context preservation
  • Consider document structure (headings, paragraphs) when splitting
  • Use token-aware splitters for optimal embedding generation
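The size/overlap trade-off can be illustrated with a simple character-based sliding window. This is only a didactic sketch: LangChain4j's `DocumentSplitters.recursive` is token-aware and respects document structure, which this toy version does not:

```java
import java.util.ArrayList;
import java.util.List;

// Didactic sliding-window splitter: consecutive chunks share `overlap`
// characters, so context at a chunk boundary appears in both chunks.
public class OverlapSplitter {

    public static List<String> split(String text, int chunkSize, int overlap) {
        if (overlap >= chunkSize) {
            throw new IllegalArgumentException("overlap must be smaller than chunkSize");
        }
        List<String> chunks = new ArrayList<>();
        int step = chunkSize - overlap;
        for (int start = 0; start < text.length(); start += step) {
            int end = Math.min(start + chunkSize, text.length());
            chunks.add(text.substring(start, end));
            if (end == text.length()) break;
        }
        return chunks;
    }
}
```

For example, `split("abcdefghij", 4, 1)` yields chunks `abcd`, `defg`, `ghij`, each sharing one character with its neighbor.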

Metadata Strategy

  • Include rich metadata for filtering and attribution:
    • User and tenant identifiers for multi-tenancy
    • Document type and category classification
    • Creation and modification timestamps
    • Version and author information
    • Confidentiality and access level tags

Query Processing

  • Implement query preprocessing and cleaning
  • Consider query expansion for better recall
  • Apply dynamic filtering based on user context
  • Use re-ranking for improved result quality

Performance Optimization

  • Cache embeddings for repeated queries
  • Use batch embedding generation for bulk operations
  • Implement pagination for large result sets
  • Consider asynchronous processing for long operations
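Caching query embeddings can be as simple as memoizing the embed call. A sketch, with the embedding model abstracted as a `Function` so the idea stays provider-independent:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a query-embedding cache: a repeated query reuses the stored
// vector instead of calling the embedding model again.
public class EmbeddingCache {

    private final Map<String, List<Float>> cache = new ConcurrentHashMap<>();
    private final Function<String, List<Float>> embedder;

    public EmbeddingCache(Function<String, List<Float>> embedder) {
        this.embedder = embedder;
    }

    public List<Float> embed(String text) {
        // computeIfAbsent invokes the model only on a cache miss
        return cache.computeIfAbsent(text, embedder);
    }
}
```

In production you would bound the cache (for example with an LRU eviction policy) and invalidate entries when the embedding model changes.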

Common Patterns

Simple RAG Pipeline

@RequiredArgsConstructor
@Service
public class SimpleRAGPipeline {

    private final EmbeddingModel embeddingModel;
    private final EmbeddingStore<TextSegment> embeddingStore;
    private final ChatModel chatModel;

    public String answerQuestion(String question) {
        Embedding queryEmbedding = embeddingModel.embed(question).content();
        EmbeddingSearchRequest request = EmbeddingSearchRequest.builder()
            .queryEmbedding(queryEmbedding)
            .maxResults(3)
            .build();

        List<TextSegment> segments = embeddingStore.search(request).matches().stream()
            .map(EmbeddingMatch::embedded)
            .collect(Collectors.toList());

        String context = segments.stream()
            .map(TextSegment::text)
            .collect(Collectors.joining("\n\n"));

        return chatModel.chat(context + "\n\nQuestion: " + question + "\nAnswer:");
    }
}

Hybrid Search (Vector + Keyword)

@Service
@RequiredArgsConstructor
public class HybridSearchService {

    private final EmbeddingStore<TextSegment> vectorStore;
    private final FullTextSearchEngine keywordEngine;
    private final EmbeddingModel embeddingModel;

    public List<Content> hybridSearch(String query, int maxResults) {
        // Vector search
        List<Content> vectorResults = performVectorSearch(query, maxResults);

        // Keyword search
        List<Content> keywordResults = performKeywordSearch(query, maxResults);

        // Combine and re-rank using RRF algorithm
        return combineResults(vectorResults, keywordResults, maxResults);
    }
}
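The `combineResults` step above is assumed. One common choice is Reciprocal Rank Fusion (RRF), sketched here over plain result ids: each list contributes `1 / (k + rank)` per item, so items ranked well in either search rise to the top. `k = 60` is the constant from the original RRF paper:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of Reciprocal Rank Fusion over ranked lists of result ids.
public class ReciprocalRankFusion {

    public static List<String> fuse(List<List<String>> rankings, int maxResults) {
        int k = 60;
        Map<String, Double> scores = new LinkedHashMap<>();
        for (List<String> ranking : rankings) {
            for (int rank = 0; rank < ranking.size(); rank++) {
                // rank is 0-based, so the top item scores 1 / (k + 1)
                scores.merge(ranking.get(rank), 1.0 / (k + rank + 1), Double::sum);
            }
        }
        return scores.entrySet().stream()
            .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
            .limit(maxResults)
            .map(Map.Entry::getKey)
            .toList();
    }
}
```

An item appearing in both the vector and keyword lists accumulates two contributions, which is why RRF favors results confirmed by both searches.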

Troubleshooting

Validation Failures

Embedding Count Mismatch: Thrown when segments != embeddings. Check splitter configuration and model availability.

Empty Retrieval Results: Call validateIngestion(testQuery) to verify embeddings are searchable. Check if document was ingested successfully.

Low Retrieval Scores: Verify the minScore threshold (0.7 in the configuration above) is not too high for your use case. Test with known queries.

Common Issues

Poor Retrieval Results

  • Check document chunk size and overlap settings
  • Verify embedding model compatibility
  • Ensure metadata filters are not too restrictive
  • Consider adding re-ranking step
  • Run validation to confirm embeddings exist

Slow Performance

  • Use cached embeddings for frequent queries
  • Optimize database indexing for vector stores
  • Implement pagination for large datasets
  • Consider async processing for bulk operations

High Memory Usage

  • Use disk-based embedding stores for large datasets
  • Implement proper pagination and filtering
  • Clean up unused embeddings periodically
  • Monitor and optimize chunk sizes

Constraints and Warnings

  • Embedding Model Costs: Generating embeddings for large document collections can be expensive; implement caching and batch processing.
  • Vector Store Scalability: In-memory stores are suitable for development only; use persistent stores (Pinecone, Qdrant, Redis) for production.
  • Chunk Size Trade-offs: Smaller chunks improve precision but lose context; larger chunks preserve context but may introduce noise.
  • Stale Data: Cached embeddings become stale when source documents change; implement update strategies.
  • Token Limits: RAG context windows have limits; typically 3-5 retrieved chunks fit within standard model limits.
  • Hallucination Risk: RAG reduces but doesn't eliminate hallucinations; always validate critical responses against sources.
  • Latency: Vector search and embedding generation add latency; consider async processing for real-time applications.
  • Metadata Filtering: Overly restrictive filters may return no results; implement fallback strategies.
  • Multi-tenancy: Ensure proper metadata isolation to prevent cross-tenant data leakage.
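One fallback strategy for overly strict retrieval is to retry with a relaxed score threshold when a search comes back empty. A sketch, with the search abstracted as a function of (query, minScore); the thresholds are illustrative:

```java
import java.util.List;
import java.util.function.BiFunction;

// Sketch of a threshold-relaxation fallback: if the strict search returns
// nothing, retry once with a lower minScore instead of returning no results.
public class FallbackRetrieval {

    public static List<String> searchWithFallback(
            String query,
            BiFunction<String, Double, List<String>> search) {

        List<String> results = search.apply(query, 0.7);
        if (results.isEmpty()) {
            // Relax the threshold rather than answering with no context
            results = search.apply(query, 0.5);
        }
        return results;
    }
}
```

The same pattern applies to metadata filters: drop or widen the most restrictive filter on an empty result before giving up.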
