bigwolfe committed on
Commit
e07f59c
·
1 Parent(s): bfedc14
specs/003-ai-chat-window/checklists/requirements.md ADDED
@@ -0,0 +1,34 @@
+ # Specification Quality Checklist: AI Chat Window
+
+ **Purpose**: Validate specification completeness and quality before proceeding to planning
+ **Created**: 2025-11-26
+ **Feature**: [Link to spec.md](../spec.md)
+
+ ## Content Quality
+
+ - [x] No implementation details (languages, frameworks, APIs)
+ - [x] Focused on user value and business needs
+ - [x] Written for non-technical stakeholders
+ - [x] All mandatory sections completed
+
+ ## Requirement Completeness
+
+ - [x] No [NEEDS CLARIFICATION] markers remain
+ - [x] Requirements are testable and unambiguous
+ - [x] Success criteria are measurable
+ - [x] Success criteria are technology-agnostic (no implementation details)
+ - [x] All acceptance scenarios are defined
+ - [x] Edge cases are identified
+ - [x] Scope is clearly bounded
+ - [x] Dependencies and assumptions identified
+
+ ## Feature Readiness
+
+ - [x] All functional requirements have clear acceptance criteria
+ - [x] User scenarios cover primary flows
+ - [x] Feature meets measurable outcomes defined in Success Criteria
+ - [x] No implementation details leak into specification
+
+ ## Notes
+
+ - Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
specs/003-ai-chat-window/contracts/chat-api.yaml ADDED
@@ -0,0 +1,60 @@
+ openapi: 3.0.0
+ info:
+   title: Document MCP Chat API
+   version: 1.0.0
+ paths:
+   /api/chat:
+     post:
+       summary: Send a message to the AI agent
+       description: Streams the response using Server-Sent Events (SSE).
+       requestBody:
+         required: true
+         content:
+           application/json:
+             schema:
+               type: object
+               properties:
+                 message:
+                   type: string
+                 history:
+                   type: array
+                   items:
+                     type: object
+                     properties:
+                       role:
+                         type: string
+                         enum: [user, assistant, system]
+                       content:
+                         type: string
+                 persona:
+                   type: string
+                   default: default
+                 model:
+                   type: string
+       responses:
+         '200':
+           description: Stream of tokens
+           content:
+             text/event-stream:
+               schema:
+                 type: string
+                 example: "data: {\"type\": \"token\", \"content\": \"Hello\"}\n\n"
+   /api/chat/personas:
+     get:
+       summary: List available personas
+       responses:
+         '200':
+           description: List of personas
+           content:
+             application/json:
+               schema:
+                 type: array
+                 items:
+                   type: object
+                   properties:
+                     id:
+                       type: string
+                     name:
+                       type: string
+                     description:
+                       type: string
specs/003-ai-chat-window/data-model.md ADDED
@@ -0,0 +1,51 @@
+ # Data Model: AI Chat Window
+
+ ## Entities
+
+ ### ChatMessage
+ Represents a single message in the conversation history.
+
+ | Field | Type | Description |
+ |-------|------|-------------|
+ | `role` | `enum` | `user`, `assistant`, `system` |
+ | `content` | `string` | The text content of the message |
+ | `timestamp` | `datetime` | ISO 8601 timestamp of creation |
+
+ ### ChatRequest
+ The payload sent from Frontend to Backend to initiate/continue a chat.
+
+ | Field | Type | Description |
+ |-------|------|-------------|
+ | `message` | `string` | The new user message |
+ | `history` | `List[ChatMessage]` | Previous conversation context |
+ | `persona` | `string` | ID of the selected persona (e.g., "default", "auto-linker") |
+ | `model` | `string` | Optional: Specific OpenRouter model ID |
+
+ ### ChatResponseChunk (SSE)
+ The streaming data format received by the frontend.
+
+ | Field | Type | Description |
+ |-------|------|-------------|
+ | `type` | `enum` | `token` (text chunk) or `tool_call` (tool execution status) |
+ | `content` | `string` | The text fragment or status message |
+ | `done` | `boolean` | True if generation is complete |
+
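+ These entities map directly to Pydantic models (plan.md places them in `backend/src/models/chat.py`). A minimal sketch, assuming Pydantic v2; names follow the tables above:
+
+ ```python
+ from datetime import datetime
+ from enum import Enum
+ from typing import List, Optional
+
+ from pydantic import BaseModel, Field
+
+ class Role(str, Enum):
+     USER = "user"
+     ASSISTANT = "assistant"
+     SYSTEM = "system"
+
+ class ChatMessage(BaseModel):
+     role: Role
+     content: str
+     timestamp: Optional[datetime] = None  # serialized as ISO 8601 on the wire
+
+ class ChatRequest(BaseModel):
+     message: str
+     history: List[ChatMessage] = Field(default_factory=list)
+     persona: str = "default"
+     model: Optional[str] = None  # specific OpenRouter model ID
+
+ class ChatResponseChunk(BaseModel):
+     type: str  # "token" or "tool_call"
+     content: str
+     done: bool = False
+ ```
+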
+ ## Persistence (Markdown Format)
+ Saved in `data/vaults/{user_id}/Chat Logs/{timestamp}.md`:
+
+ ```markdown
+ ---
+ title: Chat Session - {timestamp}
+ date: {date}
+ tags: [chat-log, {persona}]
+ model: {model_id}
+ ---
+
+ # Chat Session
+
+ ## User ({time})
+ What is the summary of...
+
+ ## Assistant ({time})
+ Based on your notes...
+ ```
specs/003-ai-chat-window/plan.md ADDED
@@ -0,0 +1,83 @@
+ # Implementation Plan: AI Chat Window
+
+ **Branch**: `003-ai-chat-window` | **Date**: 2025-11-26 | **Spec**: [specs/003-ai-chat-window/spec.md](spec.md)
+ **Input**: Feature specification from `/specs/003-ai-chat-window/spec.md`
+
+ ## Summary
+
+ Implement an integrated AI Chat Window powered by OpenRouter. This involves a new backend `POST /api/chat` endpoint that uses the `openai` client to communicate with LLMs, exposing internal `VaultService` methods as tools. The frontend will receive a new `ChatWindow` component with streaming support (SSE) and persona selection. Chat history will be persisted as Markdown files in the vault.
+
+ ## Technical Context
+
+ **Language/Version**: Python 3.11+ (Backend), TypeScript/React 18 (Frontend)
+ **Primary Dependencies**:
+ - Backend: `openai` (for OpenRouter), `fastapi` (StreamingResponse)
+ - Frontend: `fetch` (streaming body reading), Tailwind CSS
+ **Storage**:
+ - Active session: in-memory (or transient SQLite)
+ - Persistence: Markdown files in `Chat Logs/` folder
+ **Testing**: `pytest` (Backend), Manual/E2E (Frontend)
+ **Target Platform**: Web Application (Linux dev environment)
+ **Project Type**: Full-stack (FastAPI + React)
+ **Performance Goals**: <3s time-to-first-token
+ **Constraints**: Must reuse existing `VaultService` logic; no new database services (keep it lightweight).
+
+ ## Constitution Check
+
+ *GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
+
+ - [x] **Brownfield Integration**: Reuses `VaultService` and `IndexerService`. Matches `backend/src` and `frontend/src` structure.
+ - [x] **Test-Backed Development**: Backend logic will be unit tested.
+ - [x] **Incremental Delivery**: New API route and independent UI component.
+ - [x] **Specification-Driven**: All features map to `spec.md` requirements.
+
+ ## Project Structure
+
+ ### Documentation (this feature)
+
+ ```text
+ specs/003-ai-chat-window/
+ ├── plan.md          # This file
+ ├── research.md      # Phase 0 output
+ ├── data-model.md    # Phase 1 output
+ ├── quickstart.md    # Phase 1 output
+ ├── contracts/       # Phase 1 output
+ └── tasks.md         # Phase 2 output
+ ```
+
+ ### Source Code (repository root)
+
+ ```text
+ backend/
+ ├── src/
+ │   ├── api/
+ │   │   └── routes/
+ │   │       └── chat.py            # NEW: Chat endpoint logic
+ │   ├── services/
+ │   │   ├── chat.py                # NEW: Chat orchestration service (OpenAI wrapper)
+ │   │   └── prompts.py             # NEW: System prompts/personas definitions
+ │   └── models/
+ │       └── chat.py                # NEW: Pydantic models for Chat requests/responses
+ └── tests/
+     └── unit/
+         └── test_chat_service.py   # NEW: Tests for chat logic
+
+ frontend/
+ ├── src/
+ │   ├── components/
+ │   │   ├── chat/                  # NEW: Chat UI Components
+ │   │   │   ├── ChatWindow.tsx
+ │   │   │   ├── ChatMessage.tsx
+ │   │   │   └── PersonaSelector.tsx
+ │   └── services/
+ │       └── api.ts                 # UPDATE: Add chat endpoints
+ └── tests/
+ ```
+
+ **Structure Decision**: Standard full-stack separation. Backend adds a dedicated `chat` service and route to isolate LLM logic from core data services. Frontend adds a self-contained `chat/` directory for UI components.
+
+ ## Complexity Tracking
+
+ | Violation | Why Needed | Simpler Alternative Rejected Because |
+ |-----------|------------|-------------------------------------|
+ | N/A | | |
specs/003-ai-chat-window/quickstart.md ADDED
@@ -0,0 +1,32 @@
+ # Quickstart: AI Chat Window
+
+ ## Prerequisites
+ 1. **OpenRouter Key**: Get an API key from [openrouter.ai](https://openrouter.ai).
+ 2. **Environment**: Set `OPENROUTER_API_KEY` in `backend/.env`.
+
+ ## Testing the Backend
+ 1. **Start Server**:
+    ```bash
+    cd backend
+    source .venv/bin/activate
+    uvicorn src.api.main:app --reload
+    ```
+ 2. **Test Endpoint**:
+    ```bash
+    curl -X POST http://localhost:8000/api/chat \
+      -H "Content-Type: application/json" \
+      -d '{"message": "Hello", "history": []}'
+    ```
+    *Note: This will output raw SSE stream data.*
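+
+    A minimal sketch of consuming the stream programmatically (`httpx` is an assumption here; any streaming HTTP client works):
+    ```python
+    import httpx
+
+    # Stream the SSE response and print each event payload as it arrives.
+    with httpx.stream(
+        "POST",
+        "http://localhost:8000/api/chat",
+        json={"message": "Hello", "history": []},
+        timeout=None,
+    ) as response:
+        for line in response.iter_lines():
+            if line.startswith("data: "):
+                print(line[len("data: "):])  # each event is one ChatResponseChunk as JSON
+    ```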
+
+ ## Testing the Frontend
+ 1. **Start Client**:
+    ```bash
+    cd frontend
+    npm run dev
+    ```
+ 2. **Open UI**: Go to `http://localhost:5173`.
+ 3. **Chat**: Click the "Chat" button in the sidebar. Select a persona and send a message.
+
+ ## Verification
+ 1. **Check Logs**: After a chat, check `data/vaults/{user}/Chat Logs/` to see the saved Markdown file.
specs/003-ai-chat-window/research.md ADDED
@@ -0,0 +1,48 @@
+ # Phase 0: Research & Design Decisions
+
+ **Feature**: AI Chat Window (`003-ai-chat-window`)
+
+ ## 1. OpenRouter Integration
+ **Question**: What is the best way to integrate OpenRouter in Python?
+ **Finding**: OpenRouter is API-compatible with OpenAI. The standard `openai` Python client library is recommended, configured with `base_url="https://openrouter.ai/api/v1"` and the OpenRouter API key.
+ **Decision**: Use the `openai` Python package.
+ **Rationale**: Industry standard, robust, async support.
+ **Alternatives**: `requests` (too manual), `langchain` (too heavy/complex for this specific need).
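+
+ A minimal sketch of the client setup under this decision (the `OPENROUTER_API_KEY` env var matches quickstart.md; the default model ID below is illustrative):
+
+ ```python
+ import os
+
+ from openai import AsyncOpenAI
+
+ # OpenRouter speaks the OpenAI wire protocol, so the stock client works
+ # once base_url points at OpenRouter and the key comes from the environment.
+ client = AsyncOpenAI(
+     base_url="https://openrouter.ai/api/v1",
+     api_key=os.environ["OPENROUTER_API_KEY"],
+ )
+
+ async def complete(messages: list[dict], model: str = "openai/gpt-4o-mini"):
+     # Illustrative default; any OpenRouter model ID can be passed through.
+     return await client.chat.completions.create(model=model, messages=messages)
+ ```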
+
+ ## 2. Real-time Streaming
+ **Question**: How to stream LLM tokens from FastAPI to React?
+ **Finding**: Server-Sent Events (SSE) is the standard for unidirectional text streaming. FastAPI supports this via `StreamingResponse`.
+ **Decision**: Use `StreamingResponse` with a generator that yields SSE-formatted data (`data: ...\n\n`).
+ **Rationale**: Simpler than WebSockets, works well through proxies/firewalls, native support in modern browsers (`EventSource` or `fetch` with readable streams).
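+
+ A minimal sketch of this pattern, assuming the `client` from §1; the route path and chunk shape follow contracts/chat-api.yaml:
+
+ ```python
+ import json
+
+ from fastapi import APIRouter
+ from fastapi.responses import StreamingResponse
+
+ from src.services.chat import client  # hypothetical: the AsyncOpenAI instance from §1
+
+ router = APIRouter()
+
+ @router.post("/api/chat")
+ async def chat(payload: dict):  # the real endpoint would take the ChatRequest model
+     async def event_stream():
+         stream = await client.chat.completions.create(
+             model="openai/gpt-4o-mini",  # illustrative default
+             messages=[{"role": "user", "content": payload["message"]}],
+             stream=True,
+         )
+         async for chunk in stream:
+             delta = chunk.choices[0].delta.content
+             if delta:
+                 # One SSE event per text fragment: "data: {...}\n\n"
+                 yield f"data: {json.dumps({'type': 'token', 'content': delta})}\n\n"
+         yield f"data: {json.dumps({'type': 'token', 'content': '', 'done': True})}\n\n"
+
+     return StreamingResponse(event_stream(), media_type="text/event-stream")
+ ```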
+
+ ## 3. Tool Execution Strategy
+ **Question**: How to invoke existing MCP tools (`list_notes`, `read_note`) from the chat endpoint?
+ **Finding**: The tools are defined as decorated functions in `backend/src/mcp/server.py`, so they can be imported directly; however, `FastMCP` wraps them, so we may need to reach the underlying function, or call the wrapper if it allows direct invocation.
+ **Decision**: Import the `mcp` object from `backend/src/mcp/server.py`. Use `mcp.list_tools()` to dynamically get tool definitions for the system prompt, and call the underlying functions directly if exposed (or via the `mcp.call_tool()` API if available). *Fallback*: re-import the service functions (`vault_service.read_note`) directly if the MCP wrapper adds too much overhead/complexity for internal calls.
+ **Refinement**: Since `server.py` defines tools using `@mcp.tool`, the most robust approach is to import the `vault_service` and `indexer_service` instances directly from `server.py` (or a shared module) and wrap them in a simple "Agent Tool" registry for the LLM, mirroring the MCP definitions. This avoids "fake" network calls to localhost.
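+
+ A sketch of that registry under the refinement above; `vault_service` and the `read_note` signature are assumptions about the existing services, and the schema format is the OpenAI function-calling convention:
+
+ ```python
+ # Hypothetical registry exposing existing service methods as LLM tools,
+ # mirroring the MCP definitions without going through the MCP transport.
+ from src.mcp.server import vault_service  # assumed import path
+
+ TOOL_SCHEMAS = [
+     {
+         "type": "function",
+         "function": {
+             "name": "read_note",
+             "description": "Read the full content of a note by vault path.",
+             "parameters": {
+                 "type": "object",
+                 "properties": {"path": {"type": "string"}},
+                 "required": ["path"],
+             },
+         },
+     },
+ ]
+
+ # Dispatch table used when the model emits a tool_call.
+ TOOL_IMPLS = {
+     "read_note": lambda args: vault_service.read_note(args["path"]),
+ }
+ ```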
+
+ ## 4. Frontend UI Components
+ **Question**: What UI library to use for the chat interface?
+ **Finding**: Project uses Tailwind + generic React.
+ **Decision**: Build a custom `ChatWindow` component using Tailwind. Use a scrollable container for messages and a sticky footer for the input.
+ **Rationale**: Lightweight, full control over styling.
+
+ ## 5. Chat History Persistence
+ **Question**: How to store chat history?
+ **Finding**: Spec requires saving to Markdown files in the vault.
+ **Decision**:
+ 1. **In-Memory/Database**: Use a simple `sqlite` table (or just in-memory if stateless) to hold the *active* conversation state for the UI.
+ 2. **Persistence**: On "End Session" or auto-save (debounced), dump the conversation to `Chat Logs/{timestamp}-{title}.md`.
+ **Rationale**: Markdown is the source of truth. The database is just for the "hot" state to avoid parsing MD files on every new message.
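+
+ A sketch of rendering a finished session into the Markdown template from data-model.md (writing the file would then go through the existing vault write path):
+
+ ```python
+ from datetime import datetime, timezone
+
+ def render_chat_log(messages: list[dict], persona: str, model_id: str) -> str:
+     """Render a conversation using the frontmatter template in data-model.md."""
+     # Sketch uses one timestamp throughout; real code would use per-message times.
+     ts = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
+     lines = [
+         "---",
+         f"title: Chat Session - {ts}",
+         f"date: {ts}",
+         f"tags: [chat-log, {persona}]",
+         f"model: {model_id}",
+         "---",
+         "",
+         "# Chat Session",
+     ]
+     for msg in messages:
+         lines.append(f"\n## {msg['role'].capitalize()} ({ts})")
+         lines.append(msg["content"])
+     return "\n".join(lines)
+ ```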
+
+ ## 6. System Prompts & Personas
+ **Question**: How to manage prompts?
+ **Decision**: Store prompts in a simple dictionary or JSON file in `backend/src/services/prompts.py`.
+ **Structure**:
+ ```python
+ PERSONAS = {
+     "default": "You are a helpful assistant...",
+     "auto-linker": "You are an expert editor. Your goal is to densely connect notes...",
+ }
+ ```
+
specs/003-ai-chat-window/spec.md ADDED
@@ -0,0 +1,85 @@
+ # Feature Specification: AI Chat Window
+
+ **Feature Branch**: `003-ai-chat-window`
+ **Created**: 2025-11-26
+ **Status**: Draft
+ **Input**: User description: "Add an AI Chat Window using OpenRouter as the LLM provider. The system should reuse existing MCP tools (backend agent) to manage the vault. Include a 'Persona/Mode' selector to allow users to choose specialized system prompts for tasks like reindexing, cross-linking, and summarization. Chat history should be persisted to the vault."
+
+ ## User Scenarios & Testing *(mandatory)*
+
+ ### User Story 1 - General Q&A with Vault Context (Priority: P1)
+
+ As a user, I want to ask questions about my notes so that I can quickly find information or synthesize concepts without manually searching.
+
+ **Why this priority**: This is the core value proposition—enabling natural language interaction with the knowledge base.
+
+ **Independent Test**: Can be tested by asking a question about a known note and verifying the answer cites the correct information.
+
+ **Acceptance Scenarios**:
+
+ 1. **Given** the chat window is open, **When** I ask "What is the summary of project X?", **Then** the agent searches the vault and returns a summary based on the note content.
+ 2. **Given** a specific note is open, **When** I ask "Summarize this", **Then** the agent reads the current note context and provides a summary.
+
+ ---
+
+ ### User Story 2 - Vault Management via Personas (Priority: P2)
+
+ As a power user, I want to select specialized "Personas" (e.g., Auto-Linker, Tag Gardener) so that I can perform complex maintenance tasks with optimized prompts.
+
+ **Why this priority**: Distinguishes this from a simple "chatbot" by adding workflow automation capabilities.
+
+ **Independent Test**: Select a persona, give a relevant command, and verify the specific tool (write/update) is called.
+
+ **Acceptance Scenarios**:
+
+ 1. **Given** the "Auto-Linker" persona is selected, **When** I ask "Fix links in Note A", **Then** the agent identifies unlinked concepts and updates the note with `[[WikiLinks]]`.
+ 2. **Given** the "Tag Gardener" persona is selected, **When** I ask "Clean up tags", **Then** the agent identifies synonymous tags and standardizes them across affected notes.
+
+ ---
+
+ ### User Story 3 - Chat History Persistence (Priority: P3)
+
+ As a user, I want my chat conversations to be saved in the vault so that I can reference past insights or continue working later.
+
+ **Why this priority**: Ensures work isn't lost and integrates chat logs as first-class citizens in the vault.
+
+ **Independent Test**: Refresh the browser and verify the previous conversation is still visible.
+
+ **Acceptance Scenarios**:
+
+ 1. **Given** I have had a conversation, **When** I refresh the page, **Then** the chat history is restored.
+ 2. **Given** a conversation is finished, **When** I look in the vault file explorer, **Then** I see a new Markdown file (e.g., in `Chat Logs/`) containing the transcript.
+
+ ---
+
+ ### Edge Cases
+
+ - **Network Failure**: What happens if OpenRouter is down? (System should show error and allow retry).
+ - **Large Context**: What happens if the vault search returns too much text? (Agent should truncate or summarize input).
+ - **Invalid Tool Use**: What happens if the agent tries to write a file with invalid characters? (System should catch error and ask agent to retry).
+
+ ## Requirements *(mandatory)*
+
+ ### Functional Requirements
+
+ - **FR-001**: System MUST provide a chat interface (floating or sidebar) that persists across navigation.
+ - **FR-002**: System MUST allow users to configure an OpenRouter API key (via env vars or UI settings).
+ - **FR-003**: System MUST expose existing internal MCP tools (`read_note`, `write_note`, `search_notes`, etc.) to the LLM.
+ - **FR-004**: System MUST support selecting "Personas" that inject specific system prompts (Auto-Linker, Tag Gardener, etc.) into the context.
+ - **FR-005**: Chat sessions MUST be automatically saved to the vault as Markdown files (e.g., in a `Chat Logs` folder).
+ - **FR-006**: System MUST stream LLM responses to the UI for real-time feedback.
+ - **FR-007**: System MUST support creating new chat sessions and switching between past sessions.
+
+ ### Key Entities
+
+ - **Chat Session**: Represents a conversation thread. Properties: ID, Title, Created Date, Messages (User/Assistant/Tool).
+ - **Persona**: A preset configuration of system prompt + available tools.
+
+ ## Success Criteria *(mandatory)*
+
+ ### Measurable Outcomes
+
+ - **SC-001**: Agent responses start streaming within 3 seconds of user input.
+ - **SC-002**: 95% of "Auto-Linker" requests result in valid WikiLinks being added without syntax errors.
+ - **SC-003**: Users can switch between active chat and past history (refresh/reload) with zero data loss.
+ - **SC-004**: System can handle a context window of at least 16k tokens (supporting moderate-sized note analysis).
specs/003-ai-chat-window/tasks.md ADDED
@@ -0,0 +1,74 @@
+ # Tasks: AI Chat Window
+
+ **Feature Branch**: `003-ai-chat-window`
+ **Spec**: [specs/003-ai-chat-window/spec.md](spec.md)
+ **Plan**: [specs/003-ai-chat-window/plan.md](plan.md)
+
+ ## Phase 1: Setup
+ *Goal: Initialize project structure and install dependencies.*
+
+ - [ ] T001 Create contracts directory and API spec at specs/003-ai-chat-window/contracts/chat-api.yaml
+ - [ ] T002 [P] Create module stub backend/src/services/chat.py
+ - [ ] T003 [P] Create directory frontend/src/components/chat
+ - [ ] T004 Add openai dependency to backend/requirements.txt
+
+ ## Phase 2: Foundational
+ *Goal: Core backend logic for Chat (Service Layer).*
+
+ - [ ] T005 [US1] Define ChatMessage and ChatRequest models in backend/src/models/chat.py
+ - [ ] T006 [US1] Define Persona and Prompt models in backend/src/models/chat.py
+ - [ ] T007 [US2] Implement prompt storage (dictionary of personas) in backend/src/services/prompts.py
+ - [ ] T008 [US1] Create ChatService class skeleton in backend/src/services/chat.py
+
+ ## Phase 3: User Story 1 - General Q&A with Vault Context
+ *Goal: Enable basic chat interactions with streaming and tool use.*
+ *Test Criteria: Can ask a question and get a streaming response citing vault notes.*
+
+ - [ ] T009 [US1] Implement OpenAI client initialization in backend/src/services/chat.py
+ - [ ] T010 [US1] Implement tool registry (wrap VaultService/IndexerService) in backend/src/services/chat.py
+ - [ ] T011 [US1] Implement stream_chat method with SSE generator in backend/src/services/chat.py
+ - [ ] T012 [US1] Create unit tests for ChatService in backend/tests/unit/test_chat_service.py
+ - [ ] T013 [US1] Implement POST /api/chat endpoint in backend/src/api/routes/chat.py
+ - [ ] T014 [US1] Register chat router in backend/src/api/main.py
+ - [ ] T015 [P] [US1] Create ChatMessage component in frontend/src/components/chat/ChatMessage.tsx
+ - [ ] T016 [US1] Create ChatWindow component skeleton in frontend/src/components/chat/ChatWindow.tsx
+ - [ ] T017 [US1] Implement streaming fetch logic in frontend/src/services/api.ts
+ - [ ] T018 [US1] Connect ChatWindow to API and handle SSE stream in frontend/src/components/chat/ChatWindow.tsx
+
+ ## Phase 4: User Story 2 - Vault Management via Personas
+ *Goal: Allow users to select specialized agents for maintenance tasks.*
+ *Test Criteria: Selecting "Auto-Linker" injects the correct system prompt and tools.*
+
+ - [ ] T019 [US2] Add GET /api/chat/personas endpoint to backend/src/api/routes/chat.py
+ - [ ] T020 [P] [US2] Create PersonaSelector component in frontend/src/components/chat/PersonaSelector.tsx
+ - [ ] T021 [US2] Add persona selection state to frontend/src/components/chat/ChatWindow.tsx
+ - [ ] T022 [US2] Update ChatService to accept and apply persona ID in backend/src/services/chat.py
+
+ ## Phase 5: User Story 3 - Chat History Persistence
+ *Goal: Save conversation logs to the vault.*
+ *Test Criteria: Chat logs appear as Markdown files in "Chat Logs/" folder.*
+
+ - [ ] T023 [US3] Implement save_chat_log method in backend/src/services/chat.py (Markdown formatting)
+ - [ ] T024 [US3] Update POST /api/chat to auto-save on completion (or session end) in backend/src/api/routes/chat.py
+ - [ ] T025 [US3] Add logic to restore history from ChatRequest.history in backend/src/services/chat.py
+ - [ ] T026 [US3] Add "Clear History" or "New Chat" button in frontend/src/components/chat/ChatWindow.tsx
+
+ ## Phase 6: Polish & Cross-Cutting Concerns
+ *Goal: Final UI touches and error handling.*
+
+ - [ ] T027 [P] Style ChatWindow with Tailwind (responsive sidebar/floating) in frontend/src/components/chat/ChatWindow.tsx
+ - [ ] T028 Implement error handling for OpenRouter failures in backend/src/services/chat.py
+ - [ ] T029 Add tool execution status messages to UI stream in frontend/src/components/chat/ChatMessage.tsx
+
+ ## Dependencies
+
+ - **US1** depends on Setup & Foundational tasks.
+ - **US2** extends US1 (can be parallelized after US1 backend is stable).
+ - **US3** extends US1 backend logic.
+
+ ## Implementation Strategy
+
+ 1. **MVP (US1)**: Get the chat bubble working with a hardcoded "Hello World" stream, then hook up OpenRouter.
+ 2. **Tools**: Enable `read_note` and `search_notes` so the agent isn't blind.
+ 3. **Personas (US2)**: Add the dropdown and the specialized prompts.
+ 4. **Persistence (US3)**: Add the file writing logic last.