
LangChain Deploy Integration

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running anything.

Copy the following command and send it to your AI assistant:

Install skill "langchain-deploy-integration" with this command: npx skills add jeremylongshore/claude-code-plugins-plus-skills/jeremylongshore-claude-code-plugins-plus-skills-langchain-deploy-integration


Overview

Deploy LangChain applications to production using LangServe, Docker, and cloud platforms. Covers containerization of chains and agents, LangServe API deployment, and integration with LangSmith for production observability.

Prerequisites

  • LangChain application with chains/agents defined

  • Docker installed for containerization

  • LangSmith API key for production tracing

  • Platform CLI (gcloud, aws, or docker compose)

Instructions

Step 1: LangServe API Setup

serve.py

```python
from fastapi import FastAPI
from langserve import add_routes
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

app = FastAPI(title="LangChain API")

# Define your chain
prompt = ChatPromptTemplate.from_template("Answer: {question}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini")

# Add LangServe routes
add_routes(app, chain, path="/chat")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)  # 8000: API server port
```
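Once the server is running, LangServe exposes standard endpoints under the route path, including POST /chat/invoke, which expects the chain's input dict wrapped under an `input` key. A minimal sketch of building that request body (the question text is illustrative):

```python
import json

# LangServe's /chat/invoke endpoint expects the chain input wrapped
# under an "input" key; "config" and "kwargs" keys are optional.
payload = {"input": {"question": "What is LangServe?"}}
body = json.dumps(payload)
print(body)
```

The same body shape works for the /chat/stream endpoint, and /chat/playground serves an interactive test UI.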

Step 2: Dockerfile

```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

ENV LANGCHAIN_TRACING_V2=true
ENV LANGCHAIN_PROJECT=production

# 8000: API server port
EXPOSE 8000
CMD ["uvicorn", "serve:app", "--host", "0.0.0.0", "--port", "8000"]
```

Step 3: Docker Compose for Development

```yaml
version: "3.8"
services:
  langchain-api:
    build: .
    ports:
      - "8000:8000"  # 8000: API server port
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}
      - LANGCHAIN_TRACING_V2=true
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
```
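The compose file passes two API keys through from the host environment. A short startup guard (a hypothetical helper, not part of LangChain or LangServe) can fail fast when a key is missing, instead of erroring on the first request:

```python
import os

# Keys the chain and LangSmith tracing need, per the compose environment block.
REQUIRED_VARS = ["OPENAI_API_KEY", "LANGCHAIN_API_KEY"]

def missing_env(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: an empty environment is missing both keys.
print(missing_env({}))  # ['OPENAI_API_KEY', 'LANGCHAIN_API_KEY']
```

Calling this at the top of serve.py turns a missing secret into a clear startup error rather than a cryptic 500 at request time.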

Step 4: Cloud Run Deployment

Note that repeated `--set-secrets` flags override each other, so both secrets go in a single comma-separated flag:

```shell
gcloud run deploy langchain-api \
  --source . \
  --region us-central1 \
  --set-secrets=OPENAI_API_KEY=openai-key:latest,LANGCHAIN_API_KEY=langsmith-key:latest \
  --set-env-vars=LANGCHAIN_TRACING_V2=true \
  --min-instances=1 \
  --memory=1Gi
```

Step 5: Health Check with LangSmith

```python
from langsmith import Client

async def health_check():
    try:
        client = Client()
        # Verify LangSmith connection
        client.list_projects(limit=1)
        return {"status": "healthy", "tracing": "enabled"}
    except Exception as e:
        return {"status": "degraded", "error": str(e)}
```
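The compose healthcheck in Step 3 probes /health, so this function must be registered on the FastAPI app, e.g. with `@app.get("/health")` in serve.py. A pure-Python sketch of the same healthy/degraded contract, with a stand-in `connect` callable replacing the LangSmith client call so the shape can be exercised offline:

```python
import asyncio

async def health_check_stub(connect):
    # `connect` stands in for Client().list_projects(limit=1) from Step 5.
    try:
        connect()
        return {"status": "healthy", "tracing": "enabled"}
    except Exception as e:
        return {"status": "degraded", "error": str(e)}

def boom():
    raise ConnectionError("LangSmith unreachable")

ok = asyncio.run(health_check_stub(lambda: None))
bad = asyncio.run(health_check_stub(boom))
print(ok["status"], bad["status"])  # healthy degraded
```

Returning "degraded" rather than raising keeps the endpoint responding 200, so a LangSmith outage does not take the container out of rotation.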

Error Handling

| Issue | Cause | Solution |
| --- | --- | --- |
| Import errors | Missing dependencies | Pin versions in requirements.txt |
| LangSmith timeout | Network issue | Set LANGCHAIN_TRACING_V2=false as fallback |
| Memory exceeded | Large context | Increase container memory, use streaming |
| Cold start slow | Heavy imports | Use gunicorn with preload |
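The LangSmith-timeout row suggests disabling tracing as a fallback rather than failing requests. One way to sketch that toggle (the helper name and the injected `check` callable are illustrative, not a LangChain API):

```python
import os

def with_tracing_fallback(check, env=None):
    """Run a LangSmith connectivity check; on failure, flip
    LANGCHAIN_TRACING_V2 off so the service keeps serving without traces."""
    env = os.environ if env is None else env
    try:
        check()
        return True
    except Exception:
        env["LANGCHAIN_TRACING_V2"] = "false"
        return False

env = {"LANGCHAIN_TRACING_V2": "true"}

def timeout():
    raise TimeoutError("LangSmith timeout")

ok = with_tracing_fallback(timeout, env)
print(ok, env["LANGCHAIN_TRACING_V2"])  # False false
```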

Examples

Production Requirements

```text
langchain>=0.3.0
langchain-openai>=0.2.0
langserve>=0.3.0
langsmith>=0.1.0
uvicorn>=0.30.0
fastapi>=0.115.0
```

Resources

  • LangServe Documentation

  • LangSmith

  • LangChain Deployment

Next Steps

For multi-environment setup, see langchain-multi-env-setup.

Output

  • Configuration files or code changes applied to the project

  • Validation report confirming correct implementation

  • Summary of changes made and their rationale

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
