Full-Stack AI Agent Template
Production-ready FastAPI + Next.js project generator with AI agents, RAG, and 20+ enterprise integrations.
Quick Start • Features • Demo • Documentation • Configurator • PyPI
- 6 AI Agent Frameworks (PydanticAI, PydanticDeep, LangChain, LangGraph, CrewAI, DeepAgents)
- RAG Pipeline (Milvus, Qdrant, pgvector, ChromaDB)
- FastAPI + Next.js 15 (WebSocket streaming, real-time chat UI)
- Conversation Sharing (direct sharing, public links, admin browser)
- Enterprise-Ready (JWT, OAuth, admin panel, Celery, Docker, K8s)
Vstorm OSS Ecosystem
This template is part of a broader open-source ecosystem for production AI agents:
| Project | Description |
|---|---|
| pydantic-deepagents | The modular agent runtime for Python. Claude Code-style CLI with Docker sandbox, browser automation, multi-agent teams, and /improve. |
| pydantic-ai-shields | Drop-in guardrails for Pydantic AI agents. 5 infra + 5 content shields. |
| pydantic-ai-subagents | Declarative multi-agent orchestration with token tracking. |
| summarization-pydantic-ai | Smart context compression for long-running agents. |
| pydantic-ai-backend | Sandboxed execution for AI agents. Docker + Daytona. |
Want the runtime behind this template's AI agents? pydantic-deepagents powers the `deepagents` framework option; install it standalone with `curl -fsSL .../install.sh | bash`.
Browse all projects at oss.vstorm.co
Quick Start

> [!TIP]
> Prefer a visual configurator? Use the Web Configurator to configure your project in the browser and download a ZIP; no CLI installation needed.
Installation
```bash
# pip
pip install fastapi-fullstack

# uv (recommended)
uv tool install fastapi-fullstack

# pipx
pipx install fastapi-fullstack
```
Create Your Project
```bash
# Interactive wizard (recommended; runs by default)
fastapi-fullstack

# Quick mode with options
fastapi-fullstack create my_ai_app \
    --database postgresql \
    --frontend nextjs

# Use presets for common setups
fastapi-fullstack create my_ai_app --preset production  # Full production setup
fastapi-fullstack create my_ai_app --preset ai-agent    # AI agent with streaming

# Minimal project (no extras)
fastapi-fullstack create my_ai_app --minimal
```
Start Development (Docker, recommended)

The fastest way to get running, in two commands:

```bash
cd my_ai_app
make docker-up       # Start backend + database + migrations + admin user
make docker-frontend # Start frontend
```
Access:
- API: http://localhost:8000
- Docs: http://localhost:8000/docs
- Admin Panel: http://localhost:8000/admin
- Frontend: http://localhost:3000
> [!TIP]
> That's it. Docker handles database setup, migrations, and admin user creation automatically.
Manual setup (without Docker)
1. Install dependencies
```bash
cd my_ai_app
make install
```

> [!NOTE]
> Windows users: the `make` command requires GNU Make, which is not available by default on Windows. Install it via Chocolatey (`choco install make`), use WSL, or run the raw commands manually. Each generated project includes a "Manual Commands Reference" section in its README with all commands.
2. Start the database
```bash
# PostgreSQL (with Docker)
make docker-db
```
3. Create and apply database migrations
> [!WARNING]
> Both commands are required! `db-migrate` creates the migration file; `db-upgrade` applies it to the database.

```bash
# Create initial migration (REQUIRED first time)
make db-migrate
# Enter message: "Initial migration"

# Apply migrations to create tables
make db-upgrade
```
4. Create admin user
```bash
make create-admin
# Enter email and password when prompted
```
5. Start the backend
```bash
make run
```
6. Start the frontend (new terminal)
```bash
cd frontend
bun install
bun dev
```
Access:
- API: http://localhost:8000
- Docs: http://localhost:8000/docs
- Admin Panel: http://localhost:8000/admin (login with admin user)
- Frontend: http://localhost:3000
Using the Project CLI
Each generated project has a CLI named after your `project_slug`. For example, if you created my_ai_app:

```bash
cd backend

# The CLI command is: uv run my_ai_app
uv run my_ai_app server run --reload     # Start dev server
uv run my_ai_app db migrate -m "message" # Create migration
uv run my_ai_app db upgrade              # Apply migrations
uv run my_ai_app user create-admin       # Create admin user
```

Use `make help` to see all available Makefile shortcuts.
Demo

Screenshots

[Screenshots in the original repository: Landing Page · Login · Dashboard · Chat with RAG · Documents · Search · Logfire (PydanticAI) · LangSmith (LangChain) · Telegram Bot · Celery Flower · SQLAdmin Panel · API Documentation]
Why This Template
Building AI/LLM applications requires more than just an API wrapper. You need:
- Type-safe AI agents with tool/function calling
- Real-time streaming responses via WebSocket
- Conversation persistence and history management
- Production infrastructure - auth, rate limiting, observability
- Enterprise integrations - background tasks, webhooks, admin panels
This template gives you all of that out of the box, with 20+ configurable integrations so you can focus on building your AI product, not boilerplate.
Perfect For
- AI Chatbots & Assistants - PydanticAI or LangChain agents with streaming responses
- ML Applications - Background task processing with Celery/Taskiq
- Enterprise SaaS - Full auth, admin panel, webhooks, and more
- Startups - Ship fast with production-ready infrastructure
AI-Agent Friendly
Generated projects include CLAUDE.md and AGENTS.md files optimized for AI coding assistants (Claude Code, Codex, Copilot, Cursor, Zed), following progressive-disclosure best practices: a concise project overview with pointers to detailed docs when needed.
Features

AI/LLM First
- 6 AI Frameworks - PydanticAI, PydanticDeep, LangChain, LangGraph, CrewAI, DeepAgents
- 4 LLM Providers - OpenAI, Anthropic, Google Gemini, OpenRouter
- RAG - Document ingestion, vector search, reranking (Milvus, Qdrant, ChromaDB, pgvector)
- WebSocket Streaming - Real-time responses with full event access
- Messaging Channels - Telegram and Slack multi-bot integration with polling, webhooks, per-thread sessions, group concurrency control
- Conversation Sharing - Share conversations with users or via public links, admin conversation browser
- Conversation Persistence - Save chat history to database
- Message Ratings - Like/dislike responses with feedback, admin analytics
- Image Description - Extract images from documents, describe via LLM vision
- Multimodal Embeddings - Google Gemini embedding model (text + images)
- Document Sources - Local files, API upload, Google Drive, S3/MinIO
- Sync Sources - Configurable connectors (Google Drive, S3) with scheduled sync
- Observability - Logfire for PydanticAI, LangSmith for LangChain/LangGraph/DeepAgents
Backend (FastAPI)
- FastAPI + Pydantic v2 - High-performance async API
- Multiple Databases - PostgreSQL (async), MongoDB (async), SQLite
- Authentication - JWT + Refresh tokens, API Keys, OAuth2 (Google)
- Background Tasks - Celery, Taskiq, or ARQ
- Django-style CLI - Custom management commands with auto-discovery
Frontend (Next.js 15)
- React 19 + TypeScript + Tailwind CSS v4
- AI Chat Interface - WebSocket streaming, tool call visualization
- Authentication - HTTP-only cookies, auto-refresh
- Dark Mode + i18n
20+ Enterprise Integrations
| Category | Integrations |
|---|---|
| AI Frameworks | PydanticAI, PydanticDeep, LangChain, LangGraph, CrewAI, DeepAgents |
| LLM Providers | OpenAI, Anthropic, Google Gemini, OpenRouter |
| RAG / Vector Stores | Milvus, Qdrant, ChromaDB, pgvector |
| RAG Sources | Local files, API upload, Google Drive, S3/MinIO, Sync Sources (configurable, scheduled) |
| Embeddings | OpenAI, Voyage, Gemini (multimodal), SentenceTransformers |
| Caching & State | Redis, fastapi-cache2 |
| Security | Rate limiting, CORS, CSRF protection |
| Observability | Logfire, LangSmith, Sentry, Prometheus |
| Admin | SQLAdmin panel with auth |
| Collaboration | Conversation sharing (direct + link), admin conversation browser |
| Messaging | Telegram multi-bot (polling + webhook), Slack multi-bot (Events API + Socket Mode) |
| Events | Webhooks, WebSockets |
| DevOps | Docker, GitHub Actions, GitLab CI, Kubernetes |
Architecture Overview

```text
FRONTEND (Next.js 15)                              (deployed on Vercel)
  Chat UI · Knowledge Base · Dashboard · Settings · Dark Mode · i18n
        │ REST / WebSocket
        ▼
BACKEND (FastAPI)
  AI AGENTS
    PydanticAI · LangChain · LangGraph · CrewAI · DeepAgents
    Tools: datetime · web_search (Tavily) · search_knowledge_base
    Providers: OpenAI · Anthropic · Gemini · OpenRouter
  RAG PIPELINE
    Sources: local files · API upload · Google Drive · S3/MinIO · sync sources
    Parse:   PyMuPDF · LiteParse · LlamaParse · python-docx
    Chunk:   recursive · markdown · fixed
    Embed:   OpenAI · Voyage · Gemini (multimodal) · SentenceTransformers
    Store:   Milvus · Qdrant · ChromaDB · pgvector
    Search:  vector similarity · BM25 + vector (RRF) · multi-collection
    Rank:    Cohere reranker · CrossEncoder
  Auth (JWT / API key / OAuth) · rate limiting · webhooks · admin panel
  Background tasks (Celery/Taskiq/ARQ) · Django-style CLI
  Observability (Logfire/LangSmith/Sentry/Prometheus)
        │
        ▼
DATA & SERVICES
  PostgreSQL/MongoDB/SQLite · Redis · Vector DB (Milvus/Qdrant/ChromaDB/pgvector)
  LLM APIs (OpenAI/Anthropic/Gemini)
```
Architecture

```mermaid
graph TB
    subgraph Frontend["Frontend (Next.js 15)"]
        UI[React Components]
        WS[WebSocket Client]
        Store[Zustand Stores]
    end
    subgraph Backend["Backend (FastAPI)"]
        API[API Routes]
        Services[Services Layer]
        Repos[Repositories]
        Agent[AI Agent]
    end
    subgraph Infrastructure
        DB[(PostgreSQL/MongoDB)]
        Redis[(Redis)]
        Queue[Celery/Taskiq]
    end
    subgraph External
        LLM[OpenAI/Anthropic]
        Webhook[Webhook Endpoints]
    end
    UI --> API
    WS <--> Agent
    API --> Services
    Services --> Repos
    Services --> Agent
    Repos --> DB
    Agent --> LLM
    Services --> Redis
    Services --> Queue
    Services --> Webhook
```
Layered Architecture
The backend follows a clean Repository + Service pattern:
```mermaid
graph LR
    A[API Routes] --> B[Services]
    B --> C[Repositories]
    C --> D[(Database)]
    B --> E[External APIs]
    B --> F[AI Agents]
```
| Layer | Responsibility |
|---|---|
| Routes | HTTP handling, validation, auth |
| Services | Business logic, orchestration |
| Repositories | Data access, queries |
See Architecture Documentation for details.
AI Agent

Choose from 6 AI frameworks and 4 LLM providers when generating your project:

```bash
# PydanticAI with OpenAI (default)
fastapi-fullstack create my_app --ai-framework pydantic_ai

# LangGraph with Anthropic
fastapi-fullstack create my_app --ai-framework langgraph --llm-provider anthropic

# CrewAI with Google Gemini
fastapi-fullstack create my_app --ai-framework crewai --llm-provider google

# DeepAgents with OpenAI
fastapi-fullstack create my_app --ai-framework deepagents

# With RAG enabled
fastapi-fullstack create my_app --rag --database postgresql --task-queue celery
```
Supported Combinations
| Framework | OpenAI | Anthropic | Gemini | OpenRouter |
|---|---|---|---|---|
| PydanticAI | ✅ | ✅ | ✅ | ✅ |
| PydanticDeep | ✅ | ✅ | ✅ | - |
| LangChain | ✅ | ✅ | ✅ | - |
| LangGraph | ✅ | ✅ | ✅ | - |
| CrewAI | ✅ | ✅ | ✅ | - |
| DeepAgents | ✅ | ✅ | ✅ | - |
PydanticAI Integration
Type-safe agents with full dependency injection:
```python
# app/agents/assistant.py
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext
from sqlalchemy.ext.asyncio import AsyncSession


@dataclass
class Deps:
    user_id: str | None = None
    db: AsyncSession | None = None


agent = Agent[Deps, str](
    model="openai:gpt-4o-mini",
    system_prompt="You are a helpful assistant.",
)


@agent.tool
async def search_database(ctx: RunContext[Deps], query: str) -> list[dict]:
    """Search the database for relevant information."""
    # Access user context and database via ctx.deps
    ...
```
LangChain Integration
Flexible agents with LangGraph:
```python
# app/agents/langchain_assistant.py
from langchain.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def search_database(query: str) -> list[dict]:
    """Search the database for relevant information."""
    ...


agent = create_react_agent(
    model=ChatOpenAI(model="gpt-4o-mini"),
    tools=[search_database],
    prompt="You are a helpful assistant.",
)
```
WebSocket Streaming
Both frameworks use the same WebSocket endpoint with real-time streaming:
```python
from fastapi import APIRouter, WebSocket

router = APIRouter()


@router.websocket("/ws")
async def agent_ws(websocket: WebSocket):
    await websocket.accept()
    user_input = await websocket.receive_text()
    # Works with both PydanticAI and LangChain
    async for event in agent.stream(user_input):
        await websocket.send_json({
            "type": "text_delta",
            "content": event.content,
        })
```
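The streaming loop above can be exercised without a running server. Below is a minimal stdlib sketch of the same `text_delta` framing, with an async generator standing in for `agent.stream()`; the `fake_stream` and `collect` helpers are hypothetical illustrations, not part of the template.

```python
# Simulates the delta protocol: the producer yields partial text events,
# the consumer (like the chat UI) reassembles the full reply from frames.
import asyncio
import json


async def fake_stream(reply: str, chunk: int = 4):
    # Stand-in for agent.stream(): yields partial text events
    for i in range(0, len(reply), chunk):
        yield {"type": "text_delta", "content": reply[i:i + chunk]}
    yield {"type": "done", "content": ""}


async def collect(reply: str) -> str:
    parts = []
    async for event in fake_stream(reply):
        # json round-trip mirrors what send_json / receive_json do on the wire
        frame = json.loads(json.dumps(event))
        if frame["type"] == "text_delta":
            parts.append(frame["content"])
    return "".join(parts)


print(asyncio.run(collect("Hello from the agent")))  # Hello from the agent
```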
Observability
Each framework has its own observability solution:
| Framework | Observability | Dashboard |
|---|---|---|
| PydanticAI | Logfire | Agent runs, tool calls, token usage |
| LangChain | LangSmith | Traces, feedback, datasets |
See AI Agent Documentation for more.
RAG (Retrieval-Augmented Generation)
Enable RAG to give your AI agents access to a knowledge base built from your documents.
Vector Store Backends
| Backend | Type | Docker Required | Best For |
|---|---|---|---|
| Milvus | Dedicated vector DB | Yes (3 services) | Production, large scale |
| Qdrant | Dedicated vector DB | Yes (1 service) | Production, simple setup |
| ChromaDB | Embedded / HTTP | No | Development, prototyping |
| pgvector | PostgreSQL extension | No (uses existing PG) | Already have PostgreSQL |
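Whichever backend you pick, the core operation is the same: similarity search over embedding vectors. The pure-stdlib sketch below shows what the store does conceptually (brute-force cosine top-k); real backends like Milvus, Qdrant, ChromaDB, and pgvector add indexing, filtering, and persistence on top. The tiny 3-dimensional "embeddings" here are made up for illustration.

```python
# Brute-force cosine-similarity top-k search: the conceptual core of a
# vector store, with toy 3-d vectors in place of real embeddings.
import math


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def top_k(query: list[float], docs: dict[str, list[float]], k: int = 2) -> list[str]:
    # Rank every stored vector against the query; return the k best ids
    ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
    return ranked[:k]


docs = {
    "faq.md": [0.9, 0.1, 0.0],
    "api.md": [0.1, 0.9, 0.0],
    "ops.md": [0.0, 0.2, 0.9],
}
print(top_k([1.0, 0.0, 0.1], docs, k=1))  # ['faq.md']
```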
Document Ingestion (CLI)
```bash
# Local files
uv run my_app rag-ingest /path/to/document.pdf --collection docs
uv run my_app rag-ingest /path/to/folder/ --recursive

# Google Drive (service account)
uv run my_app rag-sync-gdrive --collection docs --folder-id <FOLDER_ID>

# S3/MinIO
uv run my_app rag-sync-s3 --collection docs --prefix reports/ --bucket my-bucket
```
Embedding Providers
| Provider | Model | Dimensions | Multimodal |
|---|---|---|---|
| OpenAI | text-embedding-3-small | 1536 | - |
| Voyage | voyage-3 | 1024 | - |
| Gemini | gemini-embedding-exp-03-07 | 3072 | Text + Images |
| SentenceTransformers | all-MiniLM-L6-v2 | 384 | - |
Features
- Document parsing - PDF (PyMuPDF with tables, headers/footers, OCR), DOCX, TXT, MD + 130+ formats via LlamaParse
- Image description - Extract images from documents, describe via LLM vision API (opt-in)
- Chunking - RecursiveCharacterTextSplitter with configurable size/overlap
- Reranking - Cohere API or local CrossEncoder for improved search quality
- Agent integration - All 6 AI frameworks get a `search_knowledge_base` tool automatically
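The chunking step can be illustrated with a simplified fixed-size splitter. This is a stand-in sketch, not the template's RecursiveCharacterTextSplitter; it only shows how the configurable size and overlap interact.

```python
# Fixed-size chunking with overlap: each chunk shares its first `overlap`
# characters with the tail of the previous chunk, so context that falls
# on a boundary still appears intact in at least one chunk.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]


chunks = chunk_text("x" * 500, chunk_size=200, overlap=40)
# Each chunk is <= 200 chars and shares 40 chars with its neighbour
assert all(len(c) <= 200 for c in chunks)
assert chunks[0][-40:] == chunks[1][:40]
```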
Observability
Logfire (for PydanticAI)
Logfire provides complete observability for your application - from AI agents to database queries. Built by the Pydantic team, it offers first-class support for the entire Python ecosystem.
```mermaid
graph LR
    subgraph App["Your App"]
        API[FastAPI]
        Agent[PydanticAI]
        DB[(Database)]
        Cache[(Redis)]
        Queue[Celery/Taskiq]
        HTTP[HTTPX]
    end
    subgraph Logfire
        Traces[Traces]
        Metrics[Metrics]
        Logs[Logs]
    end
    API --> Traces
    Agent --> Traces
    DB --> Traces
    Cache --> Traces
    Queue --> Traces
    HTTP --> Traces
```
| Component | What You See |
|---|---|
| PydanticAI | Agent runs, tool calls, LLM requests, token usage, streaming events |
| FastAPI | Request/response traces, latency, status codes, route performance |
| PostgreSQL/MongoDB | Query execution time, slow queries, connection pool stats |
| Redis | Cache hits/misses, command latency, key patterns |
| Celery/Taskiq | Task execution, queue depth, worker performance |
| HTTPX | External API calls, response times, error rates |
LangSmith (for LangChain)
LangSmith provides observability specifically designed for LangChain applications:
| Feature | Description |
|---|---|
| Traces | Full execution traces for agent runs and chains |
| Feedback | Collect user feedback on agent responses |
| Datasets | Build evaluation datasets from production data |
| Monitoring | Track latency, errors, and token usage |
LangSmith is automatically configured when you choose LangChain:
```env
# .env
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your-api-key
LANGCHAIN_PROJECT=my_project
```
Configuration
Enable Logfire and select which components to instrument:
```bash
fastapi-fullstack new
# ✓ Enable Logfire observability
# ✓ Instrument FastAPI
# ✓ Instrument Database
# ✓ Instrument Redis
# ✓ Instrument Celery
# ✓ Instrument HTTPX
```
Usage
```python
# Automatic instrumentation in app/main.py
import logfire

logfire.configure()
logfire.instrument_fastapi(app)
logfire.instrument_asyncpg()
logfire.instrument_redis()
logfire.instrument_httpx()

# Manual spans for custom logic
with logfire.span("process_order", order_id=order.id):
    await validate_order(order)
    await charge_payment(order)
    await send_confirmation(order)
```
For more details, see Logfire Documentation.
Django-style CLI
Each generated project includes a powerful CLI inspired by Django's management commands:
Built-in Commands
```bash
# Server
my_app server run --reload
my_app server routes

# Database (Alembic wrapper)
my_app db init
my_app db migrate -m "Add users"
my_app db upgrade

# Users
my_app user create --email admin@example.com --superuser
my_app user list
```
Custom Commands
Create your own commands with auto-discovery:
```python
# app/commands/seed.py
import click

from app.commands import command, info, success


@command("seed", help="Seed database with test data")
@click.option("--count", "-c", default=10, type=int)
@click.option("--dry-run", is_flag=True)
def seed_database(count: int, dry_run: bool):
    """Seed the database with sample data."""
    if dry_run:
        info(f"[DRY RUN] Would create {count} records")
        return
    # Your logic here
    success(f"Created {count} records!")
```
Commands are automatically discovered from `app/commands/`; just create a file and use the `@command` decorator.

```bash
my_app cmd seed --count 100
my_app cmd seed --dry-run
```
Generated Project Structure
```text
my_project/
├── backend/
│   ├── app/
│   │   ├── main.py            # FastAPI app with lifespan
│   │   ├── api/
│   │   │   ├── routes/v1/     # Versioned API endpoints
│   │   │   ├── deps.py        # Dependency injection
│   │   │   └── router.py      # Route aggregation
│   │   ├── core/              # Config, security, middleware
│   │   ├── db/models/         # SQLAlchemy/MongoDB models
│   │   ├── schemas/           # Pydantic schemas
│   │   ├── repositories/      # Data access layer
│   │   ├── services/          # Business logic
│   │   ├── agents/            # AI agents with centralized prompts
│   │   ├── rag/               # RAG module (vector store, embeddings, ingestion)
│   │   ├── commands/          # Django-style CLI commands
│   │   └── worker/            # Background tasks
│   ├── cli/                   # Project CLI
│   ├── tests/                 # pytest test suite
│   └── alembic/               # Database migrations
├── frontend/
│   ├── src/
│   │   ├── app/               # Next.js App Router
│   │   ├── components/        # React components
│   │   ├── hooks/             # useChat, useWebSocket, etc.
│   │   └── stores/            # Zustand state management
│   └── e2e/                   # Playwright tests
├── docker-compose.yml
├── Makefile
└── README.md
```
Generated projects include version metadata in pyproject.toml for tracking:
```toml
[tool.fastapi-fullstack]
generator_version = "0.1.5"
generated_at = "2024-12-21T10:30:00+00:00"
```
Configuration Options

Core Options

| Option | Values | Description |
|---|---|---|
| Database | `postgresql`, `mongodb`, `sqlite`, `none` | Async by default |
| ORM | `sqlalchemy`, `sqlmodel` | SQLModel for simplified syntax |
| Auth | `jwt`, `api_key`, `both`, `none` | JWT includes user management |
| OAuth | `none`, `google` | Social login |
| AI Framework | `pydantic_ai`, `langchain`, `langgraph`, `crewai`, `deepagents` | Choose your AI agent framework |
| LLM Provider | `openai`, `anthropic`, `google`, `openrouter` | OpenRouter only with PydanticAI |
| RAG | `--rag` | Enable RAG with vector database |
| Vector Store | `milvus`, `qdrant`, `chromadb`, `pgvector` | pgvector uses existing PostgreSQL |
| Background Tasks | `none`, `celery`, `taskiq`, `arq` | Distributed queues |
| Frontend | `none`, `nextjs` | Next.js 15 + React 19 |
Presets
| Preset | Description |
|---|---|
| `--preset production` | Full production setup with Redis, Sentry, Kubernetes, Prometheus |
| `--preset ai-agent` | AI agent with WebSocket streaming and conversation persistence |
| `--minimal` | Minimal project with no extras |
Integrations
Select what you need:
```bash
fastapi-fullstack new
# ✓ Redis (caching/sessions)
# ✓ Rate limiting (slowapi)
# ✓ Pagination (fastapi-pagination)
# ✓ Admin Panel (SQLAdmin)
# ✓ AI Agent (PydanticAI or LangChain)
# ✓ Webhooks
# ✓ Sentry
# ✓ Logfire / LangSmith
# ✓ Prometheus
# ... and more
```
Comparison
vs. Manual Setup
Setting up a production AI agent stack manually means wiring together 10+ tools yourself:
```bash
# Without this template, you'd need to manually:
# 1. Set up FastAPI project structure
# 2. Configure SQLAlchemy + Alembic migrations
# 3. Implement JWT auth with refresh tokens
# 4. Build WebSocket streaming for AI responses
# 5. Integrate PydanticAI/LangChain with tool calling
# 6. Set up RAG pipeline (parsing, chunking, embedding, vector store)
# 7. Configure Celery + Redis for background tasks
# 8. Build Next.js frontend with auth and chat UI
# 9. Write Docker Compose for all services
# 10. Add observability, rate limiting, admin panel...

# With this template:
pip install fastapi-fullstack
fastapi-fullstack
# Done. All of the above, configured and working.
```
vs. Alternatives
| Feature | This Template | full-stack-fastapi-template | create-t3-app |
|---|---|---|---|
| AI Agents (6 frameworks) | ✅ | ❌ | ❌ |
| RAG Pipeline (4 vector stores) | ✅ | ❌ | ❌ |
| WebSocket Streaming | ✅ | ❌ | ❌ |
| Conversation Persistence | ✅ | ❌ | ❌ |
| LLM Observability (Logfire/LangSmith) | ✅ | ❌ | ❌ |
| FastAPI Backend | ✅ | ✅ | ❌ |
| Next.js Frontend | ✅ (v15) | ❌ | ✅ |
| JWT + OAuth Authentication | ✅ | ✅ | ✅ (NextAuth) |
| Background Tasks (Celery/Taskiq/ARQ) | ✅ | ✅ (Celery) | ❌ |
| Admin Panel | ✅ (SQLAdmin) | ❌ | ❌ |
| Multiple Databases (PG/Mongo/SQLite) | ✅ | PostgreSQL only | Prisma |
| Docker + K8s | ✅ | ✅ | ❌ |
| Interactive CLI Wizard | ✅ | ❌ | ✅ |
| Django-style Commands | ✅ | ❌ | ❌ |
| Document Sources (GDrive, S3, API) | ✅ | ❌ | ❌ |
| AI-Agent Friendly (CLAUDE.md) | ✅ | ❌ | ❌ |
FAQ
How is this different from full-stack-fastapi-template?
full-stack-fastapi-template by @tiangolo is a great starting point for FastAPI projects, but it focuses on traditional web apps. This template is purpose-built for AI/LLM applications: it adds AI agents (6 frameworks), RAG with 4 vector stores, WebSocket streaming, conversation persistence, LLM observability, and a Next.js chat UI out of the box.
Can I use this without AI/LLM features?
Yes. The AI agent and RAG modules are optional. You can use this as a pure FastAPI + Next.js template with auth, admin panel, background tasks, and all other infrastructure; just skip the AI framework selection during setup.
What Python and Node.js versions are required?
Python 3.11+ and Node.js 18+ (for the Next.js frontend). We recommend using uv for Python and bun for the frontend.
Can I add integrations after project generation?
The generated project is plain code â no lock-in or runtime dependency on the generator. You can add, remove, or modify any integration manually. The template just gives you a well-structured starting point.
Can I use a different LLM provider than the one I selected?
Yes. The LLM provider is configured via environment variables (`AI_MODEL`, `OPENAI_API_KEY`, etc.). You can switch providers by changing the .env file and the model name; no code changes are needed for PydanticAI, which supports all providers natively.
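A sketch of what that env-driven switch can look like. The `AI_MODEL` variable name comes from the answer above; the `resolve_model` helper is hypothetical, while PydanticAI itself accepts `provider:model` strings (e.g. `Agent("anthropic:claude-3-5-sonnet-latest")`) directly.

```python
# Resolve the active LLM provider/model from an environment variable,
# falling back to a default when the variable is unset.
import os


def resolve_model(default: str = "openai:gpt-4o-mini") -> tuple[str, str]:
    """Return (provider, model) parsed from the AI_MODEL env var."""
    raw = os.environ.get("AI_MODEL", default)
    provider, _, model = raw.partition(":")
    if not model:
        raise ValueError(f"AI_MODEL must look like 'provider:model', got {raw!r}")
    return provider, model


os.environ["AI_MODEL"] = "anthropic:claude-3-5-sonnet-latest"
print(resolve_model())  # ('anthropic', 'claude-3-5-sonnet-latest')
```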
Documentation
| Document | Description |
|---|---|
| Architecture | Repository + Service pattern, layered design |
| Frontend | Next.js setup, auth, state management |
| AI Agent | PydanticAI, tools, WebSocket streaming |
| Observability | Logfire integration, tracing, metrics |
| Deployment | Docker, Kubernetes, production setup |
| Development | Local setup, testing, debugging |
| Changelog | Version history and release notes |
Inspiration
This project is inspired by:
- full-stack-fastapi-template by @tiangolo
- fastapi-template by @s3rius
- FastAPI Best Practices by @zhanymkanov
- Django's management commands system
Contributing
Contributions are welcome! Please read our Contributing Guide for details.
License
MIT License - see LICENSE for details.
Need help implementing this in your company?
We're Vstorm, an Applied Agentic AI Engineering Consultancy with 30+ production AI agent implementations.

Made with ❤️ by Vstorm