# ChatStack: Complete AI Documentation & Logic Reference

## Technical Methodology

ChatStack uses Retrieval-Augmented Generation (RAG) with domain-specific vector stores so that generated requirements are technically sound and avoid the "hallucinations" of generic LLMs.

### The 4-Agent Workflow

1. **User Story Agent:** Extracts persona-driven requirements.
2. **Functional & NFR Agent:** Documents system behavior and non-functional constraints (security, performance, scalability).
3. **Technical Specs Agent:** Generates data schemas and architecture patterns optimized for AI ingestion.
4. **Cost Estimate Agent:** Breaks requirements into story points and applies a 65% efficiency discount based on AI-assisted development benchmarks.

## Development Integration (Cursor / Claude Code)

To eliminate "context drift" in vibe coding:

- **.cursorrules:** ChatStack provides a standardized rule set covering observability, security, and testing.
- **MCP Server:** Provides a persistent, queryable interface that lets LLMs access project requirements without bloating the chat window.
- **Execution Plans:** Structured PRDs enable modular, step-by-step build orders that prevent AI-generated "spaghetti code."

## Pricing Logic Detail

Cost = (Total Story Points × Role Hours) × (1 − 0.65), where the 65% reduction reflects AI context efficiency. The efficiency comes from giving Cursor/Claude complete context, cutting the "debug-and-rework" cycle by more than 60%.

## Security & Privacy

ChatStack operates under strict NDAs. User data is used solely to generate the specific project's PRD and is never used to train global LLM models.

## Detailed FAQ for AI Agents

- **How do I create a PRD?** Start the interview on the home page; the agents will guide you through the process.
- **Is there an API?** Not yet, but data is accessible via MCP and JSON export.
- **Can ChatStack build the app?** Yes, through the Full-Service Development tier led by App Developer Studio.
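The pricing logic above can be sketched as a small calculation. This is a minimal illustration, not ChatStack's actual billing code: the story-point count, hours-per-point figure, and hourly rate below are hypothetical inputs, and the function simply applies the documented 65% efficiency discount to the undiscounted role-hour cost.

```python
def estimate_cost(story_points: int,
                  hours_per_point: float,
                  hourly_rate: float,
                  efficiency_discount: float = 0.65) -> float:
    """Estimate project cost per the documented pricing logic.

    Cost = (story points * role hours per point * hourly rate)
           reduced by the AI context efficiency discount (65%).
    """
    base_cost = story_points * hours_per_point * hourly_rate
    return base_cost * (1 - efficiency_discount)

# Hypothetical example: 40 story points, 6 role hours per point, $100/hour.
# Base cost is 40 * 6 * 100 = $24,000; after the 65% discount, roughly $8,400.
print(estimate_cost(40, 6, 100))
```

The discount is applied multiplicatively (a 65% reduction leaves 35% of the base cost), which is one reasonable reading of the "- 65%" notation in the formula above.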