Introduction
When working with Large Language Models (LLMs), getting consistent, well-structured responses can be challenging. LLMs naturally generate free-form text, which can be difficult to parse and integrate into applications. LangChain.js provides powerful tools to solve this problem through prompt chaining and structured output generation. This guide will show you how to implement these techniques effectively, using a legal analysis system as a practical example.
Understanding the Core Concepts
Prompt Chaining
Prompt chaining is the practice of connecting multiple prompts together, where each prompt builds upon the results of previous ones. Think of it like an assembly line, where each station adds specific value to the final product. In our case, we're creating a system that can analyze legal queries, generate follow-up questions, analyze documents, and estimate costs.
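To make the idea concrete, here is a minimal two-step chain sketch using LangChain.js Runnables: the first prompt generates follow-up questions, and the second uses them to produce the analysis. It assumes the @langchain/anthropic package and an ES module with top-level await; the prompts, model name, and variable names are illustrative rather than the article's actual code.

import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnablePassthrough } from "@langchain/core/runnables";
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ model: "claude-3-5-sonnet-latest", temperature: 0 });
const toText = new StringOutputParser();

// Step 1: turn the raw query into follow-up questions.
const questionsPrompt = PromptTemplate.fromTemplate(
  "List three follow-up questions a lawyer would ask about this query:\n{query}"
);

// Step 2: combine the original query with the generated questions.
const analysisPrompt = PromptTemplate.fromTemplate(
  "Query: {query}\n\nFollow-up questions:\n{questions}\n\nGive a brief preliminary legal analysis."
);

// The output of step 1 is merged into the input of step 2, keeping {query} available throughout.
const chain = RunnablePassthrough.assign({
  questions: questionsPrompt.pipe(model).pipe(toText),
})
  .pipe(analysisPrompt)
  .pipe(model)
  .pipe(toText);

const result = await chain.invoke({ query: "My landlord has not returned my security deposit." });
console.log(result);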
Structured Output
Structured output transforms free-form LLM responses into predictable, typed data structures. This is crucial for building reliable applications that need to process LLM outputs programmatically. It's like having a contract with the LLM about exactly what format of data it should return.
Benefits of Using Prompt Chaining and Structured Output
Consistency: Ensures responses follow a predictable format
Type Safety: Catches errors early through schema validation
Better Integration: Makes it easier to integrate LLM responses into your application
Maintainability: Separates prompt logic from business logic
Testing: Enables proper unit testing with mock data
Error Handling: Provides clear validation and error messages
Implementation Guide
1. Setting Up the Schema
Let's start by examining how to define the structure for our LLM responses using Zod:
import { z } from "zod";

export const LegalAnalysisSchema = z.object({
  facts: z.array(
    z.object({
      statement: z.string().describe("A clear statement of the factual situation"),
      context: z.string().describe("Additional context or background information"),
      legality: z.enum(["legal", "illegal", "uncertain"]),
      laws: z.array(
        z.object({
          act: z.string(),
          section: z.string(),
          explanation: z.string()
        })
      )
    })
  ),
  riskLevel: z.enum(["high", "medium", "low"])
});
Key points about this schema:
The .describe() method adds metadata that helps the LLM understand what each field should contain
Using z.enum ensures the LLM can only return predefined values for certain fields
Nested arrays and objects help organize complex data structures
Each field has a specific type, ensuring type safety throughout your application (see the type-derivation sketch below)
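Because the schema is the single source of truth, you can also derive a TypeScript type from it and use that type wherever the analysis result flows. A small sketch; the import path is hypothetical:

import { z } from "zod";
import { LegalAnalysisSchema } from "./schemas"; // hypothetical path

// Derive the static type from the runtime schema
export type LegalAnalysis = z.infer<typeof LegalAnalysisSchema>;

// Downstream code gets full type checking against the schema's shape
function summarizeRisk(analysis: LegalAnalysis): string {
  return `${analysis.facts.length} fact(s) analysed, overall risk: ${analysis.riskLevel}`;
}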
2. Creating Prompt Templates
Prompt templates are crucial for guiding the LLM's responses. Let's break down how to create effective templates:
import { PromptTemplate } from "@langchain/core/prompts";

export const LEGAL_ANALYSIS_TEMPLATE = PromptTemplate.fromTemplate(
  `You are an expert legal analyst specializing in Indian law. Analyze the following legal query:

Query: {query}

{additionalInformation}

{schema_description}

Based on the query and the user's responses above, provide a comprehensive legal analysis that includes:
1. Detailed examination of facts and their legal implications
2. Relevant laws, regulations, and case precedents
3. Assessment of strengths and challenges
4. Step-by-step procedures and recommendations

Your response should be thorough, well-structured, and focused on practical legal solutions.
Consider all the information provided in the user's responses when forming your analysis.`
);
Important elements of a good prompt template:
Clear role definition ("You are an expert legal analyst")
Specific context ("specializing in Indian law")
Structured requirements (numbered list of what to include)
Dynamic placeholders ({query}, {additionalInformation})
Schema description inclusion ({schema_description})
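Before wiring the template into a chain, it can help to render it locally and inspect the final prompt text. A sketch, assuming the template and schema are exported from the hypothetical paths below and serializing the full JSON schema for illustration:

import { zodToJsonSchema } from "zod-to-json-schema";
import { LEGAL_ANALYSIS_TEMPLATE } from "./prompts";  // hypothetical path
import { LegalAnalysisSchema } from "./schemas";      // hypothetical path

// PromptTemplate.format() fills the placeholders and returns the final string
const renderedPrompt = await LEGAL_ANALYSIS_TEMPLATE.format({
  query: "Can my employer withhold my final month's salary?",
  additionalInformation: "No additional information provided",
  schema_description: JSON.stringify(zodToJsonSchema(LegalAnalysisSchema), null, 2),
});

console.log(renderedPrompt);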
3. Implementing the LLM Service
Let's examine the core service implementation:
import { zodToJsonSchema } from "zod-to-json-schema";
import type { BaseChatModel } from "@langchain/core/language_models/chat_models";
// LLMProvider, LLMOptions, DEV_MODE, MOCK_LEGAL_ANALYSIS, LEGAL_ANALYSIS_TEMPLATE and
// LegalAnalysisSchema are imported from the project's own modules (not shown here).

class LLMService {
  private model: BaseChatModel;

  constructor(provider: LLMProvider = 'claude', options?: LLMOptions) {
    // Initialize with default provider and options
    // createModel (not shown) instantiates the provider-specific chat model
    this.model = this.createModel(provider, options);
  }

  async analyzeQuery(query: string, additionalInformation?: string) {
    // Development mode check for testing
    if (DEV_MODE) {
      console.log('DEV MODE: Returning mock legal analysis');
      return MOCK_LEGAL_ANALYSIS;
    }

    // Add structured output capability to the model
    const structuredModel = this.model.withStructuredOutput(LegalAnalysisSchema);

    // Create the processing chain
    const chain = LEGAL_ANALYSIS_TEMPLATE.pipe(structuredModel);

    // Execute the chain with all required information
    const result = await chain.invoke({
      query,
      schema_description: zodToJsonSchema(LegalAnalysisSchema).description,
      additionalInformation: additionalInformation || 'No additional information provided',
    });

    // Handle string results by parsing them
    if (typeof result === 'string') {
      const parsedResult = JSON.parse(result);
      return LegalAnalysisSchema.parse(parsedResult);
    }

    // Validate and return the result
    return LegalAnalysisSchema.parse(result);
  }
}
Key implementation details:
The service uses dependency injection for the LLM provider
Development mode support helps in testing
Structured output is added through withStructuredOutput
The chain is created by piping the template to the structured model
Result parsing handles both string and object responses
Schema validation ensures type safety
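A typical call site might look like the following sketch; the constructor argument and the input strings are illustrative:

const service = new LLMService('claude');

const analysis = await service.analyzeQuery(
  "My business partner withdrew company funds without consent.",
  "The partnership deed was signed in 2022 and has no arbitration clause."
);

// The result has already been validated against LegalAnalysisSchema,
// so downstream code can rely on its shape.
console.log(analysis.riskLevel);
for (const fact of analysis.facts) {
  console.log(`${fact.statement} -> ${fact.legality}`);
}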
Practical Tips and Gotchas
1. Schema Design
Start Simple: Begin with a basic schema and expand as needed
Use Descriptions: Always include .describe() for better LLM understanding
Handle Optional Fields: Mark fields as optional using .optional() when appropriate
Consider Preprocessing: Use z.preprocess for handling edge cases:
z.preprocess(
  (val) => {
    if (typeof val === 'string') {
      try {
        return JSON.parse(val);
      } catch {
        return [];
      }
    }
    return val;
  },
  z.array(...)
);
2. Prompt Engineering
Be Specific: Clearly define the expected output format
Include Examples: When possible, provide example responses (see the sketch after this list)
Context Matters: Give enough background information
Handle Edge Cases: Include instructions for unexpected situations
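As an illustration of the "Include Examples" tip, one worked example can be embedded directly in the template; note that literal curly braces must be escaped as double braces in LangChain's f-string templates. A hypothetical sketch:

import { PromptTemplate } from "@langchain/core/prompts";

export const CLASSIFY_TEMPLATE = PromptTemplate.fromTemplate(
  `Classify the legality of the situation and answer in JSON.

Example:
Situation: Employer delayed salary payments by six months without notice.
Answer: {{"legality": "illegal", "reason": "Violates wage payment obligations"}}

Situation: {situation}
Answer:`
);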
3. Error Handling
Validate Early: Check inputs before processing
Graceful Fallbacks: Provide sensible defaults (see the sketch below)
Detailed Logging: Log both inputs and outputs for debugging
Custom Error Messages: Use Zod's error customization:
z.string().min(1, { message: "This field cannot be empty" })
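The first three tips can be combined using Zod's safeParse, which reports validation problems without throwing. A minimal sketch, assuming the schema from earlier and using the mock analysis as a hypothetical fallback:

import { LegalAnalysisSchema } from "./schemas"; // hypothetical path
import { MOCK_LEGAL_ANALYSIS } from "./mocks";   // hypothetical fallback value

function validateAnalysis(raw: unknown) {
  const parsed = LegalAnalysisSchema.safeParse(raw);

  if (!parsed.success) {
    // Detailed logging: keep both the offending input and the Zod issues
    console.error("Invalid LLM response", { raw, issues: parsed.error.issues });
    // Graceful fallback: return a sensible default instead of crashing
    return MOCK_LEGAL_ANALYSIS;
  }

  return parsed.data;
}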
4. Testing Strategies
Mock Data: Create realistic mock responses:
const MOCK_LEGAL_ANALYSIS = {
  facts: [
    {
      statement: "Contract breach occurred on January 15, 2024",
      context: "Business agreement violation",
      legality: "illegal",
      laws: [
        {
          act: "Contract Act",
          section: "Section 73",
          explanation: "Breach of contract damages"
        }
      ]
    }
  ],
  riskLevel: "medium"
};
Test Different Providers: Ensure compatibility across different LLM providers
Validate Edge Cases: Test with minimal and maximal inputs (a sample test follows this list)
Performance Testing: Monitor token usage and response times
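As an illustration of the mock-data approach, here is a small test sketch; it assumes Vitest (Jest works the same way) and hypothetical import paths:

import { describe, it, expect } from "vitest";
import { LegalAnalysisSchema } from "./schemas"; // hypothetical path
import { MOCK_LEGAL_ANALYSIS } from "./mocks";   // hypothetical path

describe("LegalAnalysisSchema", () => {
  it("accepts the mock analysis", () => {
    expect(() => LegalAnalysisSchema.parse(MOCK_LEGAL_ANALYSIS)).not.toThrow();
  });

  it("rejects an unknown risk level", () => {
    const invalid = { ...MOCK_LEGAL_ANALYSIS, riskLevel: "unknown" };
    expect(() => LegalAnalysisSchema.parse(invalid)).toThrow();
  });
});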
Common Challenges and Solutions
1. Inconsistent LLM Responses
Problem: LLMs sometimes generate responses that don't match the schema.
Solution: Implement retry logic with different prompts or temperatures:
async function retryWithDifferentTemperature(func: () => Promise<any>, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await func();
    } catch (error) {
      // Out of retries: surface the last error to the caller
      if (i === maxRetries - 1) {
        throw error;
      }
      // Otherwise log and try again; in practice, recreate the model or prompt
      // with a different temperature before the next attempt
      console.warn(`Attempt ${i + 1} failed, retrying...`, error);
    }
  }
}
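Hypothetical usage, wrapping the earlier service call so a schema mismatch simply triggers another attempt:

const analysis = await retryWithDifferentTemperature(() =>
  service.analyzeQuery("My landlord has not returned my security deposit.")
);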
2. Performance Optimization
Implement caching for common queries (see the sketch after this list)
Use batch processing when possible
Consider response streaming for large outputs
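For the caching point, here is a minimal in-memory sketch; it assumes the LLMService instance from earlier, and a production system would also want eviction and persistence:

import { z } from "zod";
import { LegalAnalysisSchema } from "./schemas"; // hypothetical path

type LegalAnalysis = z.infer<typeof LegalAnalysisSchema>;
const analysisCache = new Map<string, LegalAnalysis>();

async function cachedAnalyzeQuery(query: string): Promise<LegalAnalysis> {
  const hit = analysisCache.get(query);
  if (hit) return hit; // reuse the structured result for an identical query

  // Re-validate before caching so only well-formed results are stored
  const result = LegalAnalysisSchema.parse(await service.analyzeQuery(query));
  analysisCache.set(query, result);
  return result;
}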
3. Cost Management
Monitor token usage
Implement rate limiting
Use smaller models for simple tasks (see the sketch below)
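For the last point, requests can be routed to a cheaper model when the task is simple. A hypothetical sketch; the model names are placeholders, not recommendations:

import { ChatAnthropic } from "@langchain/anthropic";

// Use a smaller, cheaper model for simple classification and reserve
// the larger model for full structured legal analysis.
function pickModel(task: "classification" | "analysis") {
  return task === "classification"
    ? new ChatAnthropic({ model: "claude-3-5-haiku-latest", temperature: 0 })
    : new ChatAnthropic({ model: "claude-3-5-sonnet-latest", temperature: 0 });
}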
Conclusion
Prompt chaining and structured output are powerful techniques that can significantly improve the reliability and usability of LLM-powered applications. By following the practices outlined in this guide, you can build robust systems that effectively leverage LLM capabilities while maintaining type safety and predictable behavior.
Remember to:
Start with clear schemas
Write specific, detailed prompts
Implement proper error handling
Test thoroughly
Monitor and optimize performance
The example code provided demonstrates these principles in action, creating a foundation you can build upon for your own applications.