Commit 1966985
bigwolfe committed
1 Parent(s): a2c1a36
init
- .gemini/commands/speckit.analyze.toml +188 -0
- .gemini/commands/speckit.checklist.toml +298 -0
- .gemini/commands/speckit.clarify.toml +185 -0
- .gemini/commands/speckit.constitution.toml +86 -0
- .gemini/commands/speckit.implement.toml +139 -0
- .gemini/commands/speckit.plan.toml +93 -0
- .gemini/commands/speckit.specify.toml +261 -0
- .gemini/commands/speckit.tasks.toml +141 -0
- .gemini/commands/speckit.taskstoissues.toml +32 -0
- GEMINI.md +29 -0
- specs/002-add-graph-view/checklists/requirements.md +34 -0
- specs/002-add-graph-view/contracts/graph-api.yaml +65 -0
- specs/002-add-graph-view/data-model.md +40 -0
- specs/002-add-graph-view/plan.md +82 -0
- specs/002-add-graph-view/quickstart.md +42 -0
- specs/002-add-graph-view/research.md +67 -0
- specs/002-add-graph-view/spec.md +87 -0
- specs/002-add-graph-view/tasks.md +75 -0
.gemini/commands/speckit.analyze.toml
ADDED
@@ -0,0 +1,188 @@
description = "Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation."

prompt = """
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.

## Operating Constraints

**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).

**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.

## Execution Steps

### 1. Initialize Analysis Context

Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:

- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md

Abort with an error message if any required file is missing (instruct the user to run missing prerequisite command).
For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").

### 2. Load Artifacts (Progressive Disclosure)

Load only the minimal necessary context from each artifact:

**From spec.md:**

- Overview/Context
- Functional Requirements
- Non-Functional Requirements
- User Stories
- Edge Cases (if present)

**From plan.md:**

- Architecture/stack choices
- Data Model references
- Phases
- Technical constraints

**From tasks.md:**

- Task IDs
- Descriptions
- Phase grouping
- Parallel markers [P]
- Referenced file paths

**From constitution:**

- Load `.specify/memory/constitution.md` for principle validation

### 3. Build Semantic Models

Create internal representations (do not include raw artifacts in output):

- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
- **User story/action inventory**: Discrete user actions with acceptance criteria
- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements

### 4. Detection Passes (Token-Efficient Analysis)

Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.

#### A. Duplication Detection

- Identify near-duplicate requirements
- Mark lower-quality phrasing for consolidation

#### B. Ambiguity Detection

- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
- Flag unresolved placeholders (TODO, TKTK, ???, `<placeholder>`, etc.)

#### C. Underspecification

- Requirements with verbs but missing object or measurable outcome
- User stories missing acceptance criteria alignment
- Tasks referencing files or components not defined in spec/plan

#### D. Constitution Alignment

- Any requirement or plan element conflicting with a MUST principle
- Missing mandated sections or quality gates from constitution

#### E. Coverage Gaps

- Requirements with zero associated tasks
- Tasks with no mapped requirement/story
- Non-functional requirements not reflected in tasks (e.g., performance, security)

#### F. Inconsistency

- Terminology drift (same concept named differently across files)
- Data entities referenced in plan but absent in spec (or vice versa)
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note)
- Conflicting requirements (e.g., one requires Next.js while other specifies Vue)

### 5. Severity Assignment

Use this heuristic to prioritize findings:

- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality
- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
- **LOW**: Style/wording improvements, minor redundancy not affecting execution order

### 6. Produce Compact Analysis Report

Output a Markdown report (no file writes) with the following structure:

## Specification Analysis Report

| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |

(Add one row per finding; generate stable IDs prefixed by category initial.)

**Coverage Summary Table:**

| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|

**Constitution Alignment Issues:** (if any)

**Unmapped Tasks:** (if any)

**Metrics:**

- Total Requirements
- Total Tasks
- Coverage % (requirements with >=1 task)
- Ambiguity Count
- Duplication Count
- Critical Issues Count

### 7. Provide Next Actions

At end of report, output a concise Next Actions block:

- If CRITICAL issues exist: Recommend resolving before `/speckit.implement`
- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions
- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"

### 8. Offer Remediation

Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)

## Operating Principles

### Context Efficiency

- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis
- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts

### Analysis Guidelines

- **NEVER modify files** (this is read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize constitution violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
- **Report zero issues gracefully** (emit success report with coverage statistics)

## Context

{{args}}
"""
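The stable-key slug derivation described in the analyze prompt's "Build Semantic Models" step ("User can upload file" → `user-can-upload-file`) can be sketched in shell. The `slugify` helper is illustrative, not part of the toolkit:

```shell
# Hypothetical helper illustrating the stable-key derivation: lowercase the
# imperative phrase, collapse runs of non-alphanumerics to '-', trim the edges.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs '[:alnum:]' '-' \
    | sed 's/^-*//; s/-*$//'
}

slugify "User can upload file"   # -> user-can-upload-file
```

Because the mapping is deterministic, rerunning it on unchanged requirements yields the same keys, which supports the "Deterministic results" principle stated above.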
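The single-quote escape rule mentioned in the prompt's setup step (the doubled backslash in the TOML source renders as a single backslash in the delivered prompt) works like this in POSIX shell:

```shell
# Inside a single-quoted argument, an embedded apostrophe is written as '\'' :
# close the quote, emit a backslash-escaped quote, then reopen the quote.
arg='I'\''m Groot'

# When nothing else needs protecting from expansion, double quotes are simpler:
arg2="I'm Groot"

printf '%s\n' "$arg" "$arg2"
```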
.gemini/commands/speckit.checklist.toml
ADDED
@@ -0,0 +1,298 @@
description = "Generate a custom checklist for the current feature based on user requirements."

prompt = """
---
description: Generate a custom checklist for the current feature based on user requirements.
---

## Checklist Purpose: "Unit Tests for English"

**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.

**NOT for verification/testing**:

- ❌ NOT "Verify the button clicks correctly"
- ❌ NOT "Test error handling works"
- ❌ NOT "Confirm the API returns 200"
- ❌ NOT checking if code/implementation matches the spec

**FOR requirements quality validation**:

- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
- ✅ "Does the spec define what happens when logo image fails to load?" (edge cases)

**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Execution Steps

1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
   - All file paths must be absolute.
   - For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
   - Be generated from the user's phrasing + extracted signals from spec/plan/tasks
   - Only ask about information that materially changes checklist content
   - Be skipped individually if already unambiguous in `$ARGUMENTS`
   - Prefer precision over breadth

   Generation algorithm:
   1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
   2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
   3. Identify probable audience & timing (author, reviewer, QA, release) if not explicit.
   4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
   5. Formulate questions chosen from these archetypes:
      - Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
      - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
      - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
      - Audience framing (e.g., "Will this be used by the author only or peers during PR review?")
      - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
      - Scenario class gap (e.g., "No recovery flows detected—are rollback / partial failure paths in scope?")

   Question formatting rules:
   - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
   - Limit to A–E options maximum; omit table if a free-form answer is clearer
   - Never ask the user to restate what they already said
   - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."

   Defaults when interaction impossible:
   - Depth: Standard
   - Audience: Reviewer (PR) if code-related; Author otherwise
   - Focus: Top 2 relevance clusters

   Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow‑ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if user explicitly declines more.

3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
   - Derive checklist theme (e.g., security, review, deploy, ux)
   - Consolidate explicit must-have items mentioned by user
   - Map focus selections to category scaffolding
   - Infer any missing context from spec/plan/tasks (do NOT hallucinate)

4. **Load feature context**: Read from FEATURE_DIR:
   - spec.md: Feature requirements and scope
   - plan.md (if exists): Technical details, dependencies
   - tasks.md (if exists): Implementation tasks

   **Context Loading Strategy**:
   - Load only necessary portions relevant to active focus areas (avoid full-file dumping)
   - Prefer summarizing long sections into concise scenario/requirement bullets
   - Use progressive disclosure: add follow-on retrieval only if gaps detected
   - If source docs are large, generate interim summary items instead of embedding raw text

5. **Generate checklist** - Create "Unit Tests for Requirements":
   - Create `FEATURE_DIR/checklists/` directory if it doesn't exist
   - Generate unique checklist filename:
     - Use short, descriptive name based on domain (e.g., `ux.md`, `api.md`, `security.md`)
     - Format: `[domain].md`
     - If file exists, append to existing file
   - Number items sequentially starting from CHK001
   - Each `/speckit.checklist` run creates a NEW file (never overwrites existing checklists)

   **CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
   Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
   - **Completeness**: Are all necessary requirements present?
   - **Clarity**: Are requirements unambiguous and specific?
   - **Consistency**: Do requirements align with each other?
   - **Measurability**: Can requirements be objectively verified?
   - **Coverage**: Are all scenarios/edge cases addressed?

   **Category Structure** - Group items by requirement quality dimensions:
   - **Requirement Completeness** (Are all necessary requirements documented?)
   - **Requirement Clarity** (Are requirements specific and unambiguous?)
   - **Requirement Consistency** (Do requirements align without conflicts?)
   - **Acceptance Criteria Quality** (Are success criteria measurable?)
   - **Scenario Coverage** (Are all flows/cases addressed?)
   - **Edge Case Coverage** (Are boundary conditions defined?)
   - **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
   - **Dependencies & Assumptions** (Are they documented and validated?)
   - **Ambiguities & Conflicts** (What needs clarification?)

   **HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:

   ❌ **WRONG** (Testing implementation):
   - "Verify landing page displays 3 episode cards"
   - "Test hover states work on desktop"
   - "Confirm logo click navigates home"

   ✅ **CORRECT** (Testing requirements quality):
   - "Are the exact number and layout of featured episodes specified?" [Completeness]
   - "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
   - "Are hover state requirements consistent across all interactive elements?" [Consistency]
   - "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
   - "Is the fallback behavior specified when logo image fails to load?" [Edge Cases]
   - "Are loading states defined for asynchronous episode data?" [Completeness]
   - "Does the spec define visual hierarchy for competing UI elements?" [Clarity]

   **ITEM STRUCTURE**:
   Each item should follow this pattern:
   - Question format asking about requirement quality
   - Focus on what's WRITTEN (or not written) in the spec/plan
   - Include quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
   - Reference spec section `[Spec §X.Y]` when checking existing requirements
   - Use `[Gap]` marker when checking for missing requirements

   **EXAMPLES BY QUALITY DIMENSION**:

   Completeness:
   - "Are error handling requirements defined for all API failure modes? [Gap]"
   - "Are accessibility requirements specified for all interactive elements? [Completeness]"
   - "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"

   Clarity:
   - "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
   - "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
   - "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"

   Consistency:
   - "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
   - "Are card component requirements consistent between landing and detail pages? [Consistency]"

   Coverage:
   - "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
   - "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
   - "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"

   Measurability:
   - "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
   - "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"

   **Scenario Classification & Coverage** (Requirements Quality Focus):
   - Check if requirements exist for: Primary, Alternate, Exception/Error, Recovery, Non-Functional scenarios
   - For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
   - If scenario class missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
   - Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"

   **Traceability Requirements**:
   - MINIMUM: ≥80% of items MUST include at least one traceability reference
   - Each item should reference: spec section `[Spec §X.Y]`, or use markers: `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
   - If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"

   **Surface & Resolve Issues** (Requirements Quality Problems):
   Ask questions about the requirements themselves:
   - Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
   - Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
   - Assumptions: "Is the assumption of 'always available podcast API' validated? [Assumption]"
   - Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
   - Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"

   **Content Consolidation**:
   - Soft cap: If raw candidate items > 40, prioritize by risk/impact
   - Merge near-duplicates checking the same requirement aspect
   - If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"

   **🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:
   - ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior
   - ❌ References to code execution, user actions, system behavior
   - ❌ "Displays correctly", "works properly", "functions as expected"
   - ❌ "Click", "navigate", "render", "load", "execute"
   - ❌ Test cases, test plans, QA procedures
   - ❌ Implementation details (frameworks, APIs, algorithms)

   **✅ REQUIRED PATTERNS** - These test requirements quality:
   - ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
   - ✅ "Is [vague term] quantified/clarified with specific criteria?"
   - ✅ "Are requirements consistent between [section A] and [section B]?"
   - ✅ "Can [requirement] be objectively measured/verified?"
   - ✅ "Are [edge cases/scenarios] addressed in requirements?"
   - ✅ "Does the spec define [missing aspect]?"

6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.

7. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize:
   - Focus areas selected
   - Depth level
   - Actor/timing
   - Any explicit user-specified must-have items incorporated

**Important**: Each `/speckit.checklist` command invocation creates a checklist file using short, descriptive names unless file already exists. This allows:

- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate checklist purpose
- Easy identification and navigation in the `checklists/` folder

To avoid clutter, use descriptive types and clean up obsolete checklists when done.

## Example Checklist Types & Sample Items

**UX Requirements Quality:** `ux.md`

Sample items (testing the requirements, NOT the implementation):

- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"

**API Requirements Quality:** `api.md`

Sample items:

- "Are error response formats specified for all failure scenarios? [Completeness]"
- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
- "Are authentication requirements consistent across all endpoints? [Consistency]"
- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
- "Is versioning strategy documented in requirements? [Gap]"

**Performance Requirements Quality:** `performance.md`

Sample items:

- "Are performance requirements quantified with specific metrics? [Clarity]"
- "Are performance targets defined for all critical user journeys? [Coverage]"
- "Are performance requirements under different load conditions specified? [Completeness]"
- "Can performance requirements be objectively measured? [Measurability]"
- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"

**Security Requirements Quality:** `security.md`

Sample items:

- "Are authentication requirements specified for all protected resources? [Coverage]"
- "Are data protection requirements defined for sensitive information? [Completeness]"
- "Is the threat model documented and requirements aligned to it? [Traceability]"
- "Are security requirements consistent with compliance obligations? [Consistency]"
- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"

## Anti-Examples: What NOT To Do

**❌ WRONG - These test implementation, not requirements:**

```markdown
- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
```

**✅ CORRECT - These test requirements quality:**

```markdown
- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
- [ ] CHK004 - Is the selection criteria for related episodes documented? [Gap, Spec §FR-005]
- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
```

**Key Differences:**

- Wrong: Tests if the system works correctly
- Correct: Tests if the requirements are written correctly
- Wrong: Verification of behavior
- Correct: Validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"
"""

.gemini/commands/speckit.clarify.toml
ADDED
@@ -0,0 +1,185 @@

description = "Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec."

prompt = """
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
handoffs:
  - label: Build Technical Plan
    agent: speckit.plan
    prompt: Create a plan for the spec. I am building with...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., an exploratory spike), you may proceed, but must warn that downstream rework risk increases.

Execution steps:

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields:
   - `FEATURE_DIR`
   - `FEATURE_SPEC`
   - (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
   - If JSON parsing fails, abort and instruct the user to re-run `/speckit.specify` or verify the feature branch environment.
   - For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").

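For illustration, the payload parsing in step 1 can be sketched in Python; the paths below are invented example values, not output from the real script:

```python
import json

# Example payload shaped like the script's --json --paths-only output
# (field values here are hypothetical).
payload = '{"FEATURE_DIR": "specs/002-add-graph-view", "FEATURE_SPEC": "specs/002-add-graph-view/spec.md"}'

try:
    paths = json.loads(payload)
    feature_dir = paths["FEATURE_DIR"]
    feature_spec = paths["FEATURE_SPEC"]
except (json.JSONDecodeError, KeyError):
    # Mirror the abort rule: bad JSON means re-run /speckit.specify.
    raise SystemExit("JSON parsing failed; re-run /speckit.specify or verify the feature branch.")

print(feature_spec)  # specs/002-add-graph-view/spec.md
```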
2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).

   Functional Scope & Behavior:
   - Core user goals & success criteria
   - Explicit out-of-scope declarations
   - User roles / personas differentiation

   Domain & Data Model:
   - Entities, attributes, relationships
   - Identity & uniqueness rules
   - Lifecycle/state transitions
   - Data volume / scale assumptions

   Interaction & UX Flow:
   - Critical user journeys / sequences
   - Error/empty/loading states
   - Accessibility or localization notes

   Non-Functional Quality Attributes:
   - Performance (latency, throughput targets)
   - Scalability (horizontal/vertical, limits)
   - Reliability & availability (uptime, recovery expectations)
   - Observability (logging, metrics, tracing signals)
   - Security & privacy (authN/Z, data protection, threat assumptions)
   - Compliance / regulatory constraints (if any)

   Integration & External Dependencies:
   - External services/APIs and failure modes
   - Data import/export formats
   - Protocol/versioning assumptions

   Edge Cases & Failure Handling:
   - Negative scenarios
   - Rate limiting / throttling
   - Conflict resolution (e.g., concurrent edits)

   Constraints & Tradeoffs:
   - Technical constraints (language, storage, hosting)
   - Explicit tradeoffs or rejected alternatives

   Terminology & Consistency:
   - Canonical glossary terms
   - Avoided synonyms / deprecated terms

   Completion Signals:
   - Acceptance criteria testability
   - Measurable Definition of Done style indicators

   Misc / Placeholders:
   - TODO markers / unresolved decisions
   - Ambiguous adjectives ("robust", "intuitive") lacking quantification

   For each category with Partial or Missing status, add a candidate question opportunity unless:
   - Clarification would not materially change implementation or validation strategy
   - Information is better deferred to the planning phase (note internally)

3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
   - Maximum of 5 total questions across the whole session.
   - Each question must be answerable with EITHER:
     - A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
     - A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
   - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
   - Ensure category coverage balance: attempt to cover the highest-impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
   - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
   - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
   - If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.

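The (Impact * Uncertainty) selection amounts to a simple scored sort; a minimal sketch with invented category scores:

```python
# Hypothetical category scores; real values come from the coverage scan.
candidates = [
    {"category": "Security & privacy", "impact": 5, "uncertainty": 4},
    {"category": "Interaction & UX Flow", "impact": 3, "uncertainty": 2},
    {"category": "Domain & Data Model", "impact": 4, "uncertainty": 4},
]

# Highest Impact * Uncertainty first; keep at most 5 question opportunities.
queue = sorted(candidates, key=lambda c: c["impact"] * c["uncertainty"], reverse=True)[:5]
print([c["category"] for c in queue])
# ['Security & privacy', 'Domain & Data Model', 'Interaction & UX Flow']
```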
4. Sequential questioning loop (interactive):
   - Present EXACTLY ONE question at a time.
   - For multiple‑choice questions:
     - **Analyze all options** and determine the **most suitable option** based on:
       - Best practices for the project type
       - Common patterns in similar implementations
       - Risk reduction (security, performance, maintainability)
       - Alignment with any explicit project goals or constraints visible in the spec
     - Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
     - Format as: `**Recommended:** Option [X] - <reasoning>`
     - Then render all options as a Markdown table:

       | Option | Description |
       |--------|-------------|
       | A      | <Option A description> |
       | B      | <Option B description> |
       | C      | <Option C description> (add D/E as needed up to 5) |
       | Short  | Provide a different short answer (<=5 words) (Include only if a free-form alternative is appropriate) |

     - After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
   - For short‑answer style (no meaningful discrete options):
     - Provide your **suggested answer** based on best practices and context.
     - Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
     - Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
   - After the user answers:
     - If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
     - Otherwise, validate the answer maps to one option or fits the <=5 word constraint.
     - If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
     - Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
   - Stop asking further questions when:
     - All critical ambiguities resolved early (remaining queued items become unnecessary), OR
     - User signals completion ("done", "good", "no more"), OR
     - You reach 5 asked questions.
   - Never reveal future queued questions in advance.
   - If no valid questions exist at start, immediately report no critical ambiguities.

5. Integration after EACH accepted answer (incremental update approach):
   - Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
   - For the first integrated answer in this session:
     - Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
     - Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
   - Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
   - Then immediately apply the clarification to the most appropriate section(s):
     - Functional ambiguity → Update or add a bullet in Functional Requirements.
     - User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
     - Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
     - Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
     - Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
     - Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
     - If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
   - Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
   - Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
   - Keep each inserted clarification minimal and testable (avoid narrative drift).

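The session bookkeeping above can be sketched as follows; this simplified version appends at the end of the spec instead of inserting after the overview section, so treat it as an approximation:

```python
from datetime import date

def record_clarification(spec: str, question: str, answer: str) -> str:
    # Append a Q/A bullet under today's session heading,
    # creating the headings only on first use.
    nl = chr(10)  # newline character
    session = "### Session " + date.today().isoformat()
    parts = [spec.rstrip()]
    if "## Clarifications" not in spec:
        parts.append("## Clarifications")
    if session not in spec:
        parts.append(session)
    parts.append("- Q: " + question + " → A: " + answer)
    return (nl + nl).join(parts) + nl

updated = record_clarification("# Feature Spec", "How many retries?", "3")
```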
6. Validation (performed after EACH write plus final pass):
   - Clarifications session contains exactly one bullet per accepted answer (no duplicates).
   - Total asked (accepted) questions ≤ 5.
   - Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
   - No contradictory earlier statement remains (scan for now-invalid alternative choices removed).
   - Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
   - Terminology consistency: same canonical term used across all updated sections.

7. Write the updated spec back to `FEATURE_SPEC`.

8. Report completion (after questioning loop ends or early termination):
   - Number of questions asked & answered.
   - Path to updated spec.
   - Sections touched (list names).
   - Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
   - If any Outstanding or Deferred remain, recommend whether to proceed to `/speckit.plan` or run `/speckit.clarify` again later post-plan.
   - Suggested next command.

Behavior rules:

- If no meaningful ambiguities found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If spec file missing, instruct user to run `/speckit.specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech stack questions unless the absence blocks functional clarity.
- Respect user early termination signals ("stop", "done", "proceed").
- If no questions asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
- If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.

Context for prioritization: {{args}}
"""

.gemini/commands/speckit.constitution.toml
ADDED
@@ -0,0 +1,86 @@

description = "Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync."

prompt = """
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
handoffs:
  - label: Build Specification
    agent: speckit.specify
    prompt: Implement the feature specification based on the updated constitution. I want to build...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.

Follow this execution flow:

1. Load the existing constitution template at `.specify/memory/constitution.md`.
   - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
   **IMPORTANT**: The user might require fewer or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.

2. Collect/derive values for placeholders:
   - If user input (conversation) supplies a value, use it.
   - Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
   - For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown, ask or mark TODO); `LAST_AMENDED_DATE` is today if changes are made, otherwise keep previous.
   - `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
     - MAJOR: Backward-incompatible governance/principle removals or redefinitions.
     - MINOR: New principle/section added or materially expanded guidance.
     - PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
   - If the version bump type is ambiguous, propose reasoning before finalizing.

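The bump rules above can be sketched as a small helper (illustrative only; `bump_constitution_version` is a hypothetical name, not part of the toolkit):

```python
def bump_constitution_version(version: str, change: str) -> str:
    # Apply the semantic-versioning rules described above.
    major, minor, patch = (int(p) for p in version.split("."))
    if change == "major":   # incompatible governance/principle removals or redefinitions
        return f"{major + 1}.0.0"
    if change == "minor":   # new principle/section or materially expanded guidance
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # clarifications, wording, typo fixes

print(bump_constitution_version("2.1.1", "minor"))  # 2.2.0
```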
3. Draft the updated constitution content:
   - Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any left).
   - Preserve heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
   - Ensure each Principle section has: a succinct name line, a paragraph (or bullet list) capturing non‑negotiable rules, and an explicit rationale if not obvious.
   - Ensure the Governance section lists the amendment procedure, versioning policy, and compliance review expectations.

4. Consistency propagation checklist (convert the prior checklist into active validations):
   - Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
   - Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if the constitution adds/removes mandatory sections or constraints.
   - Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
   - Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
   - Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to changed principles.

5. Produce a Sync Impact Report (prepend as an HTML comment at the top of the constitution file after the update):
   - Version change: old → new
   - List of modified principles (old title → new title if renamed)
   - Added sections
   - Removed sections
   - Templates requiring updates (✅ updated / ⚠ pending) with file paths
   - Follow-up TODOs if any placeholders were intentionally deferred.

6. Validation before final output:
   - No remaining unexplained bracket tokens.
   - Version line matches the report.
   - Dates in ISO format YYYY-MM-DD.
   - Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).

7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).

8. Output a final summary to the user with:
   - New version and bump rationale.
   - Any files flagged for manual follow-up.
   - Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).

Formatting & Style Requirements:

- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard-enforce with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.

If the user supplies partial updates (e.g., only one principle revision), still perform the validation and version decision steps.

If critical info is missing (e.g., the ratification date is truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include it in the Sync Impact Report under deferred items.

Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.
"""

.gemini/commands/speckit.implement.toml
ADDED
@@ -0,0 +1,139 @@

description = "Execute the implementation plan by processing and executing all tasks defined in tasks.md"

prompt = """
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and the AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
   - Scan all checklist files in the checklists/ directory
   - For each checklist, count:
     - Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
     - Completed items: Lines matching `- [X]` or `- [x]`
     - Incomplete items: Lines matching `- [ ]`
   - Create a status table:

     ```text
     | Checklist   | Total | Completed | Incomplete | Status |
     |-------------|-------|-----------|------------|--------|
     | ux.md       | 12    | 12        | 0          | ✓ PASS |
     | test.md     | 8     | 5         | 3          | ✗ FAIL |
     | security.md | 6     | 6         | 0          | ✓ PASS |
     ```

   - Calculate overall status:
     - **PASS**: All checklists have 0 incomplete items
     - **FAIL**: One or more checklists have incomplete items

   - **If any checklist is incomplete**:
     - Display the table with incomplete item counts
     - **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
     - Wait for the user response before continuing
     - If the user says "no", "wait", or "stop", halt execution
     - If the user says "yes", "proceed", or "continue", proceed to step 3

   - **If all checklists are complete**:
     - Display the table showing all checklists passed
     - Automatically proceed to step 3

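The counting rules in step 2 can be sketched as a small helper (illustrative; `checklist_status` is a hypothetical name):

```python
def checklist_status(markdown: str) -> dict:
    # Count checklist items exactly as step 2 describes:
    # total = "- [ ]" + "- [x]" + "- [X]", completed = "- [x]" + "- [X]".
    items = [ln.strip() for ln in markdown.splitlines()]
    total = sum(ln.startswith(("- [ ]", "- [x]", "- [X]")) for ln in items)
    done = sum(ln.startswith(("- [x]", "- [X]")) for ln in items)
    status = "PASS" if total == done else "FAIL"
    return {"total": total, "completed": done, "incomplete": total - done, "status": status}

# chr(10) is a newline; this joins three checklist lines into one document.
sample = chr(10).join(["- [x] reviewed", "- [ ] tested", "- [X] documented"])
print(checklist_status(sample))
# {'total': 3, 'completed': 2, 'incomplete': 1, 'status': 'FAIL'}
```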
3. Load and analyze the implementation context:
   - **REQUIRED**: Read tasks.md for the complete task list and execution plan
   - **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
   - **IF EXISTS**: Read data-model.md for entities and relationships
   - **IF EXISTS**: Read contracts/ for API specifications and test requirements
   - **IF EXISTS**: Read research.md for technical decisions and constraints
   - **IF EXISTS**: Read quickstart.md for integration scenarios

4. **Project Setup Verification**:
   - **REQUIRED**: Create/verify ignore files based on the actual project setup:

   **Detection & Creation Logic**:
   - Check if the following command succeeds to determine if the repository is a git repo (create/verify .gitignore if so):

     ```sh
     git rev-parse --git-dir 2>/dev/null
     ```

   - Check if Dockerfile* exists or Docker is in plan.md → create/verify .dockerignore
   - Check if .eslintrc* exists → create/verify .eslintignore
   - Check if eslint.config.* exists → ensure the config's `ignores` entries cover required patterns
   - Check if .prettierrc* exists → create/verify .prettierignore
   - Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
   - Check if terraform files (*.tf) exist → create/verify .terraformignore
   - Check if .helmignore is needed (helm charts present) → create/verify .helmignore

   **If ignore file already exists**: Verify it contains essential patterns; append missing critical patterns only
   **If ignore file missing**: Create with full pattern set for detected technology

   **Common Patterns by Technology** (from plan.md tech stack):
   - **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
   - **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
   - **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
   - **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
   - **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
   - **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/`
   - **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env`
   - **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
   - **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
   - **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
   - **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `Makefile`, `config.log`, `.idea/`, `*.log`, `.env*`
   - **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
   - **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
   - **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`

   **Tool-Specific Patterns**:
   - **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
   - **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
   - **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
   - **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
   - **Kubernetes/k8s**: `*.secret.yaml`, `secrets/`, `.kube/`, `kubeconfig*`, `*.key`, `*.crt`

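The detection logic above can be approximated as follows; this filesystem-based sketch is an assumption (the real git check shells out to `git rev-parse --git-dir`, which also works from subdirectories):

```python
import tempfile
from pathlib import Path

def ignore_files_needed(root: Path) -> list:
    # Approximate the step 4 checks with simple filesystem lookups.
    needed = []
    if (root / ".git").exists():
        needed.append(".gitignore")
    if any(root.glob("Dockerfile*")):
        needed.append(".dockerignore")
    if any(root.glob(".eslintrc*")):
        needed.append(".eslintignore")
    if any(root.glob(".prettierrc*")):
        needed.append(".prettierignore")
    if any(root.glob("*.tf")):
        needed.append(".terraformignore")
    return needed

# Demo against a throwaway directory containing a git repo and a Dockerfile.
demo = Path(tempfile.mkdtemp())
(demo / ".git").mkdir()
(demo / "Dockerfile").touch()
print(ignore_files_needed(demo))  # ['.gitignore', '.dockerignore']
```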
5. Parse tasks.md structure and extract:
   - **Task phases**: Setup, Tests, Core, Integration, Polish
   - **Task dependencies**: Sequential vs parallel execution rules
   - **Task details**: ID, description, file paths, parallel markers [P]
   - **Execution flow**: Order and dependency requirements

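Assuming a tasks.md line shape like `- [ ] T003 [P] Create user model` (checkbox, task ID, optional parallel marker, description), extraction can be sketched as follows; the exact format comes from the tasks template, so treat this as an approximation:

```python
def parse_task(line: str):
    # Returns None for lines that are not checklist tasks.
    if not line.startswith(("- [ ] ", "- [x] ", "- [X] ")):
        return None
    done = line[3] in "xX"          # the character inside the checkbox
    rest = line[6:].split()         # tokens after "- [?] "
    task_id = rest[0]
    parallel = len(rest) > 1 and rest[1] == "[P]"
    desc_tokens = rest[2:] if parallel else rest[1:]
    return {"id": task_id, "done": done, "parallel": parallel, "description": " ".join(desc_tokens)}

print(parse_task("- [ ] T003 [P] Create user model"))
# {'id': 'T003', 'done': False, 'parallel': True, 'description': 'Create user model'}
```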
6. Execute implementation following the task plan:
   - **Phase-by-phase execution**: Complete each phase before moving to the next
   - **Respect dependencies**: Run sequential tasks in order; parallel tasks [P] can run together
   - **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
   - **File-based coordination**: Tasks affecting the same files must run sequentially
   - **Validation checkpoints**: Verify each phase's completion before proceeding

7. Implementation execution rules:
   - **Setup first**: Initialize project structure, dependencies, configuration
   - **Tests before code**: If tests are needed, write them for contracts, entities, and integration scenarios before the corresponding implementation
   - **Core development**: Implement models, services, CLI commands, endpoints
   - **Integration work**: Database connections, middleware, logging, external services
   - **Polish and validation**: Unit tests, performance optimization, documentation

8. Progress tracking and error handling:
   - Report progress after each completed task
   - Halt execution if any non-parallel task fails
   - For parallel tasks [P], continue with successful tasks and report failed ones
   - Provide clear error messages with context for debugging
   - Suggest next steps if implementation cannot proceed
   - **IMPORTANT**: For completed tasks, make sure to mark the task off as [X] in the tasks file.

9. Completion validation:
   - Verify all required tasks are completed
   - Check that implemented features match the original specification
   - Validate that tests pass and coverage meets requirements
   - Confirm the implementation follows the technical plan
   - Report final status with a summary of completed work

Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit.tasks` first to regenerate the task list.
"""
|
.gemini/commands/speckit.plan.toml
ADDED
|
@@ -0,0 +1,93 @@
description = "Execute the implementation planning workflow using the plan template to generate design artifacts."

prompt = """
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
handoffs:
  - label: Create Tasks
    agent: speckit.tasks
    prompt: Break the plan into tasks
    send: true
  - label: Create Checklist
    agent: speckit.checklist
    prompt: Create a checklist for the following domain...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from the repo root and parse the JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
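As a sketch of this JSON handshake, the named fields can be pulled out in plain shell. The sample JSON and the `json_field` helper below are illustrative, not part of the toolkit; a real run would capture the script's stdout instead:

```shell
#!/usr/bin/env bash
# Hypothetical sample of the JSON emitted by setup-plan.sh --json.
json='{"FEATURE_SPEC":"/repo/specs/002-add-graph-view/spec.md","IMPL_PLAN":"/repo/specs/002-add-graph-view/plan.md","SPECS_DIR":"/repo/specs","BRANCH":"002-add-graph-view"}'

# Extract one string field from a flat, single-line JSON object with sed.
json_field() {
  printf '%s' "$2" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"
}

FEATURE_SPEC=$(json_field FEATURE_SPEC "$json")
IMPL_PLAN=$(json_field IMPL_PLAN "$json")
BRANCH=$(json_field BRANCH "$json")
echo "$BRANCH"   # prints 002-add-graph-view
```

This avoids a `jq` dependency, at the cost of only handling flat string values.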
2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load the IMPL_PLAN template (already copied).

3. **Execute plan workflow**: Follow the structure in the IMPL_PLAN template to:
   - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
   - Fill the Constitution Check section from the constitution
   - Evaluate gates (ERROR if violations are unjustified)
   - Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
   - Phase 1: Generate data-model.md, contracts/, quickstart.md
   - Phase 1: Update agent context by running the agent script
   - Re-evaluate the Constitution Check post-design

4. **Stop and report**: The command ends after Phase 2 planning. Report the branch, IMPL_PLAN path, and generated artifacts.

## Phases

### Phase 0: Outline & Research

1. **Extract unknowns from Technical Context** above:
   - For each NEEDS CLARIFICATION → research task
   - For each dependency → best practices task
   - For each integration → patterns task

2. **Generate and dispatch research agents**:

   ```text
   For each unknown in Technical Context:
     Task: "Research {unknown} for {feature context}"
   For each technology choice:
     Task: "Find best practices for {tech} in {domain}"
   ```

3. **Consolidate findings** in `research.md` using the format:
   - Decision: [what was chosen]
   - Rationale: [why chosen]
   - Alternatives considered: [what else was evaluated]

**Output**: research.md with all NEEDS CLARIFICATION resolved

### Phase 1: Design & Contracts

**Prerequisites:** `research.md` complete

1. **Extract entities from feature spec** → `data-model.md`:
   - Entity name, fields, relationships
   - Validation rules from requirements
   - State transitions if applicable

2. **Generate API contracts** from functional requirements:
   - For each user action → endpoint
   - Use standard REST/GraphQL patterns
   - Output OpenAPI/GraphQL schema to `/contracts/`

3. **Agent context update**:
   - Run `.specify/scripts/bash/update-agent-context.sh gemini`
   - These scripts detect which AI agent is in use
   - Update the appropriate agent-specific context file
   - Add only new technology from the current plan
   - Preserve manual additions between markers
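The "preserve manual additions" rule can be sketched as: extract the marker-delimited block from the old file before regenerating, then splice it back in. The marker strings match the `MANUAL ADDITIONS` comments used in GEMINI.md; `preserve_manual_block` is an illustrative helper, not part of the toolkit:

```shell
#!/usr/bin/env bash
# Print the manually-maintained block of a generated context file, so a
# regeneration step can carry it over unchanged.
preserve_manual_block() {
  sed -n '/<!-- MANUAL ADDITIONS START -->/,/<!-- MANUAL ADDITIONS END -->/p' "$1"
}

# Demo on a throwaway file: only the marker block (including markers) is kept.
tmp=$(mktemp)
printf '%s\n' 'auto-generated text' \
  '<!-- MANUAL ADDITIONS START -->' 'my local note' '<!-- MANUAL ADDITIONS END -->' > "$tmp"
preserve_manual_block "$tmp"
rm -f "$tmp"
```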
**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file

## Key rules

- Use absolute paths
- ERROR on gate failures or unresolved clarifications
"""
.gemini/commands/speckit.specify.toml
ADDED
@@ -0,0 +1,261 @@
description = "Create or update the feature specification from a natural language feature description."

prompt = """
---
description: Create or update the feature specification from a natural language feature description.
handoffs:
  - label: Build Technical Plan
    agent: speckit.plan
    prompt: Create a plan for the spec. I am building with...
  - label: Clarify Spec Requirements
    agent: speckit.clarify
    prompt: Clarify specification requirements
    send: true
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

The text the user typed after `/speckit.specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `{{args}}` appears literally below. Do not ask the user to repeat it unless they provided an empty command.

Given that feature description, do this:

1. **Generate a concise short name** (2-4 words) for the branch:
   - Analyze the feature description and extract the most meaningful keywords
   - Create a 2-4 word short name that captures the essence of the feature
   - Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
   - Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
   - Keep it concise but descriptive enough to understand the feature at a glance
   - Examples:
     - "I want to add user authentication" → "user-auth"
     - "Implement OAuth2 integration for the API" → "oauth2-api-integration"
     - "Create a dashboard for analytics" → "analytics-dashboard"
     - "Fix payment processing timeout bug" → "fix-payment-timeout"

2. **Check for existing branches before creating a new one**:

   a. First, fetch all remote branches to ensure we have the latest information:

   ```bash
   git fetch --all --prune
   ```

   b. Find the highest feature number across all sources for the short-name:
      - Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`
      - Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`
      - Specs directories: Check for directories matching `specs/[0-9]+-<short-name>`

   c. Determine the next available number:
      - Extract all numbers from all three sources
      - Find the highest number N
      - Use N+1 for the new branch number
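Steps b and c amount to a small max-plus-one computation. One possible sketch, with illustrative branch names standing in for real `git` output (`next_feature_number` is a hypothetical helper, not part of the toolkit):

```shell
#!/usr/bin/env bash
# Given a short name and candidate names from all three sources, return the
# highest matching feature number plus one (1 when nothing matches).
next_feature_number() {
  local short_name="$1"; shift
  local max=0 n name
  for name in "$@"; do
    # Match names like "003-user-auth" for short name "user-auth".
    n=$(printf '%s\n' "$name" | sed -n "s/^0*\([0-9][0-9]*\)-${short_name}\$/\1/p")
    if [ -n "$n" ] && [ "$n" -gt "$max" ]; then max=$n; fi
  done
  echo $((max + 1))
}

next_feature_number user-auth "001-user-auth" "003-user-auth" "002-analytics-dashboard"  # prints 4
next_feature_number user-auth                                                            # prints 1
```

Only names matching the exact `<number>-<short-name>` pattern count, mirroring the "exact short-name pattern" rule below.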
   d. Run the script `.specify/scripts/bash/create-new-feature.sh` with the calculated number and short-name:
      - Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
      - Bash example: `.specify/scripts/bash/create-new-feature.sh --json --number 5 --short-name "user-auth" "Add user authentication"`
      - PowerShell example: `.specify/scripts/powershell/create-new-feature.ps1 -Json -Number 5 -ShortName "user-auth" "Add user authentication"`

   **IMPORTANT**:
   - Check all three sources (remote branches, local branches, specs directories) to find the highest number
   - Only match branches/directories with the exact short-name pattern
   - If no existing branches/directories are found with this short-name, start with number 1
   - You must only ever run this script once per feature
   - The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for
   - The JSON output will contain BRANCH_NAME and SPEC_FILE paths
   - For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot")

3. Load `.specify/templates/spec-template.md` to understand required sections.

4. Follow this execution flow:

   1. Parse user description from Input
      If empty: ERROR "No feature description provided"
   2. Extract key concepts from description
      Identify: actors, actions, data, constraints
   3. For unclear aspects:
      - Make informed guesses based on context and industry standards
      - Only mark with [NEEDS CLARIFICATION: specific question] if:
        - The choice significantly impacts feature scope or user experience
        - Multiple reasonable interpretations exist with different implications
        - No reasonable default exists
      - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
      - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
   4. Fill User Scenarios & Testing section
      If no clear user flow: ERROR "Cannot determine user scenarios"
   5. Generate Functional Requirements
      Each requirement must be testable
      Use reasonable defaults for unspecified details (document assumptions in the Assumptions section)
   6. Define Success Criteria
      Create measurable, technology-agnostic outcomes
      Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
      Each criterion must be verifiable without implementation details
   7. Identify Key Entities (if data involved)
   8. Return: SUCCESS (spec ready for planning)

5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.

6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:

   a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:

   ```markdown
   # Specification Quality Checklist: [FEATURE NAME]

   **Purpose**: Validate specification completeness and quality before proceeding to planning
   **Created**: [DATE]
   **Feature**: [Link to spec.md]

   ## Content Quality

   - [ ] No implementation details (languages, frameworks, APIs)
   - [ ] Focused on user value and business needs
   - [ ] Written for non-technical stakeholders
   - [ ] All mandatory sections completed

   ## Requirement Completeness

   - [ ] No [NEEDS CLARIFICATION] markers remain
   - [ ] Requirements are testable and unambiguous
   - [ ] Success criteria are measurable
   - [ ] Success criteria are technology-agnostic (no implementation details)
   - [ ] All acceptance scenarios are defined
   - [ ] Edge cases are identified
   - [ ] Scope is clearly bounded
   - [ ] Dependencies and assumptions identified

   ## Feature Readiness

   - [ ] All functional requirements have clear acceptance criteria
   - [ ] User scenarios cover primary flows
   - [ ] Feature meets measurable outcomes defined in Success Criteria
   - [ ] No implementation details leak into specification

   ## Notes

   - Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
   ```

   b. **Run Validation Check**: Review the spec against each checklist item:
      - For each item, determine whether it passes or fails
      - Document specific issues found (quote relevant spec sections)

   c. **Handle Validation Results**:

      - **If all items pass**: Mark the checklist complete and proceed to step 7

      - **If items fail (excluding [NEEDS CLARIFICATION])**:
        1. List the failing items and specific issues
        2. Update the spec to address each issue
        3. Re-run validation until all items pass (max 3 iterations)
        4. If still failing after 3 iterations, document remaining issues in the checklist notes and warn the user

      - **If [NEEDS CLARIFICATION] markers remain**:
        1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
        2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
        3. For each clarification needed (max 3), present options to the user in this format:

        ```markdown
        ## Question [N]: [Topic]

        **Context**: [Quote relevant spec section]

        **What we need to know**: [Specific question from NEEDS CLARIFICATION marker]

        **Suggested Answers**:

        | Option | Answer | Implications |
        |--------|--------|--------------|
        | A | [First suggested answer] | [What this means for the feature] |
        | B | [Second suggested answer] | [What this means for the feature] |
        | C | [Third suggested answer] | [What this means for the feature] |
        | Custom | Provide your own answer | [Explain how to provide custom input] |

        **Your choice**: _[Wait for user response]_
        ```

        4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
           - Use consistent spacing with pipes aligned
           - Each cell should have spaces around content: `| Content |` not `|Content|`
           - Header separator must have at least 3 dashes: `|--------|`
           - Test that the table renders correctly in markdown preview
        5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
        6. Present all questions together before waiting for responses
        7. Wait for the user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
        8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
        9. Re-run validation after all clarifications are resolved

   d. **Update Checklist**: After each validation iteration, update the checklist file with the current pass/fail status

7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).

**NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.

## Quick Guidelines

- Focus on **WHAT** users need and **WHY**.
- Avoid HOW to implement (no tech stack, APIs, code structure).
- Written for business stakeholders, not developers.
- DO NOT create any checklists that are embedded in the spec. That will be a separate command.

### Section Requirements

- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave it as "N/A")

### For AI Generation

When creating this spec from a user prompt:

1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
2. **Document assumptions**: Record reasonable defaults in the Assumptions section
3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:
   - Significantly impact feature scope or user experience
   - Have multiple reasonable interpretations with different implications
   - Lack any reasonable default
4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
6. **Common areas needing clarification** (only if no reasonable default exists):
   - Feature scope and boundaries (include/exclude specific use cases)
   - User types and permissions (if multiple conflicting interpretations are possible)
   - Security/compliance requirements (when legally/financially significant)

**Examples of reasonable defaults** (don't ask about these):

- Data retention: Industry-standard practices for the domain
- Performance targets: Standard web/mobile app expectations unless specified
- Error handling: User-friendly messages with appropriate fallbacks
- Authentication method: Standard session-based or OAuth2 for web apps
- Integration patterns: RESTful APIs unless specified otherwise

### Success Criteria Guidelines

Success criteria must be:

1. **Measurable**: Include specific metrics (time, percentage, count, rate)
2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
3. **User-focused**: Describe outcomes from the user/business perspective, not system internals
4. **Verifiable**: Can be tested/validated without knowing implementation details

**Good examples**:

- "Users can complete checkout in under 3 minutes"
- "System supports 10,000 concurrent users"
- "95% of searches return results in under 1 second"
- "Task completion rate improves by 40%"

**Bad examples** (implementation-focused):

- "API response time is under 200ms" (too technical, use "Users see results instantly")
- "Database can handle 1000 TPS" (implementation detail, use a user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)
"""
.gemini/commands/speckit.tasks.toml
ADDED
@@ -0,0 +1,141 @@
description = "Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts."

prompt = """
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
handoffs:
  - label: Analyze For Consistency
    agent: speckit.analyze
    prompt: Run a project analysis for consistency
    send: true
  - label: Implement Project
    agent: speckit.implement
    prompt: Start the implementation in phases
    send: true
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from the repo root and parse the FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Load design documents**: Read from FEATURE_DIR:
   - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
   - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
   - Note: Not all projects have all documents. Generate tasks based on what's available.

3. **Execute task generation workflow**:
   - Load plan.md and extract the tech stack, libraries, and project structure
   - Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
   - If data-model.md exists: Extract entities and map them to user stories
   - If contracts/ exists: Map endpoints to user stories
   - If research.md exists: Extract decisions for setup tasks
   - Generate tasks organized by user story (see Task Generation Rules below)
   - Generate a dependency graph showing user story completion order
   - Create parallel execution examples per user story
   - Validate task completeness (each user story has all needed tasks and is independently testable)

4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as the structure, filled with:
   - Correct feature name from plan.md
   - Phase 1: Setup tasks (project initialization)
   - Phase 2: Foundational tasks (blocking prerequisites for all user stories)
   - Phase 3+: One phase per user story (in priority order from spec.md)
   - Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
   - Final Phase: Polish & cross-cutting concerns
   - All tasks must follow the strict checklist format (see Task Generation Rules below)
   - Clear file paths for each task
   - Dependencies section showing story completion order
   - Parallel execution examples per story
   - Implementation strategy section (MVP first, incremental delivery)

5. **Report**: Output the path to the generated tasks.md and a summary:
   - Total task count
   - Task count per user story
   - Parallel opportunities identified
   - Independent test criteria for each story
   - Suggested MVP scope (typically just User Story 1)
   - Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)

Context for task generation: {{args}}

The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.

## Task Generation Rules

**CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.

**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if the user requests a TDD approach.

### Checklist Format (REQUIRED)

Every task MUST strictly follow this format:

```text
- [ ] [TaskID] [P?] [Story?] Description with file path
```

**Format Components**:

1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox)
2. **Task ID**: Sequential number (T001, T002, T003...) in execution order
3. **[P] marker**: Include ONLY if the task is parallelizable (different files, no dependencies on incomplete tasks)
4. **[Story] label**: REQUIRED for user story phase tasks only
   - Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
   - Setup phase: NO story label
   - Foundational phase: NO story label
   - User Story phases: MUST have story label
   - Polish phase: NO story label
5. **Description**: Clear action with exact file path

**Examples**:

- ✅ CORRECT: `- [ ] T001 Create project structure per implementation plan`
- ✅ CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py`
- ✅ CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py`
- ✅ CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py`
- ❌ WRONG: `- [ ] Create User model` (missing ID and Story label)
- ❌ WRONG: `T001 [US1] Create model` (missing checkbox)
- ❌ WRONG: `- [ ] [US1] Create User model` (missing Task ID)
- ❌ WRONG: `- [ ] T001 [US1] Create model` (missing file path)
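As a rough sanity check, the structural parts of this format can be approximated with a single regex. This is an illustration of the rules above, not an official validator; the character class also accepts already-checked `[x]` boxes, and a regex cannot verify that a file path is present:

```shell
#!/usr/bin/env bash
# Approximate validator for the task checklist format:
#   - [ ] T001 [P?] [US1?] Description...
# Accepts an optional [P] marker followed by an optional [USn] story label.
is_valid_task() {
  printf '%s\n' "$1" | grep -Eq '^- \[[ xX]\] T[0-9]{3} (\[P\] )?(\[US[0-9]+\] )?.+'
}

is_valid_task '- [ ] T012 [P] [US1] Create User model in src/models/user.py' && echo ok   # prints "ok"
is_valid_task 'T001 [US1] Create model' || echo bad                                        # prints "bad"
```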
### Task Organization

1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
   - Each user story (P1, P2, P3...) gets its own phase
   - Map all related components to their story:
     - Models needed for that story
     - Services needed for that story
     - Endpoints/UI needed for that story
     - If tests requested: Tests specific to that story
   - Mark story dependencies (most stories should be independent)

2. **From Contracts**:
   - Map each contract/endpoint to the user story it serves
   - If tests requested: Each contract → contract test task [P] before implementation in that story's phase

3. **From Data Model**:
   - Map each entity to the user story(ies) that need it
   - If an entity serves multiple stories: Put it in the earliest story or the Setup phase
   - Relationships → service layer tasks in the appropriate story phase

4. **From Setup/Infrastructure**:
   - Shared infrastructure → Setup phase (Phase 1)
   - Foundational/blocking tasks → Foundational phase (Phase 2)
   - Story-specific setup → within that story's phase

### Phase Structure

- **Phase 1**: Setup (project initialization)
- **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories)
- **Phase 3+**: User Stories in priority order (P1, P2, P3...)
  - Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
  - Each phase should be a complete, independently testable increment
- **Final Phase**: Polish & Cross-Cutting Concerns
"""
.gemini/commands/speckit.taskstoissues.toml
ADDED
@@ -0,0 +1,32 @@
description = "Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts."

prompt = """
---
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
tools: ['github/github-mcp-server/issue_write']
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from the repo root and parse the FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
2. From the executed script, extract the path to **tasks**.
3. Get the Git remote by running:

   ```bash
   git config --get remote.origin.url
   ```

   **ONLY PROCEED TO NEXT STEPS IF THE REMOTE IS A GITHUB URL**
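This guard can be sketched as a simple prefix check on the remote URL. The helper name and sample URL are illustrative; a real run would feed in the output of `git config --get remote.origin.url`:

```shell
#!/usr/bin/env bash
# Return success only when the origin remote points at GitHub
# (https, ssh shorthand, or explicit ssh:// forms).
is_github_remote() {
  case "$1" in
    https://github.com/*|git@github.com:*|ssh://git@github.com/*) return 0 ;;
    *) return 1 ;;
  esac
}

remote_url="https://github.com/example/document-mcp.git"   # normally: git config --get remote.origin.url
if is_github_remote "$remote_url"; then
  echo "remote is GitHub; issue creation allowed"
else
  echo "not a GitHub remote; aborting" >&2
fi
```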
4. For each task in the list, use the GitHub MCP server to create a new issue in the repository that corresponds to the Git remote.

   **UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL**
"""
GEMINI.md
ADDED
@@ -0,0 +1,29 @@
| 1 |
+
# Document-MCP Development Guidelines
|
| 2 |
+
|
| 3 |
+
Auto-generated from all feature plans. Last updated: 2025-11-25
|
| 4 |
+
|
| 5 |
+
## Active Technologies
|
| 6 |
+
|
| 7 |
+
- Python 3.11+ (Backend), TypeScript/React 18 (Frontend) + `react-force-graph-2d` (New), `FastAPI`, `sqlite3` (002-add-graph-view)
|
| 8 |
+
|
| 9 |
+
## Project Structure
|
| 10 |
+
|
| 11 |
+
```text
|
| 12 |
+
src/
|
| 13 |
+
tests/
|
| 14 |
+
```
|
| 15 |
+
|
| 16 |
+
## Commands
|
| 17 |
+
|
| 18 |
+
cd src [ONLY COMMANDS FOR ACTIVE TECHNOLOGIES][ONLY COMMANDS FOR ACTIVE TECHNOLOGIES] pytest [ONLY COMMANDS FOR ACTIVE TECHNOLOGIES][ONLY COMMANDS FOR ACTIVE TECHNOLOGIES] ruff check .
|
| 19 |
+
|
| 20 |
+
## Code Style
|
| 21 |
+
|
| 22 |
+
Python 3.11+ (Backend), TypeScript/React 18 (Frontend): Follow standard conventions
|
| 23 |
+
|
| 24 |
+
## Recent Changes
|
| 25 |
+
|
| 26 |
+
- 002-add-graph-view: Added Python 3.11+ (Backend), TypeScript/React 18 (Frontend) + `react-force-graph-2d` (New), `FastAPI`, `sqlite3`
|
| 27 |
+
|
| 28 |
+
<!-- MANUAL ADDITIONS START -->
|
| 29 |
+
<!-- MANUAL ADDITIONS END -->
|
specs/002-add-graph-view/checklists/requirements.md
ADDED
@@ -0,0 +1,34 @@

# Specification Quality Checklist: Interactive Graph View

**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: 2025-11-25
**Feature**: [spec.md](../spec.md)

## Content Quality

- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
- [x] All mandatory sections completed

## Requirement Completeness

- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Success criteria are technology-agnostic (no implementation details)
- [x] All acceptance scenarios are defined
- [x] Edge cases are identified
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified

## Feature Readiness

- [x] All functional requirements have clear acceptance criteria
- [x] User scenarios cover primary flows
- [x] Feature meets measurable outcomes defined in Success Criteria
- [x] No implementation details leak into specification

## Notes

- Initial validation passed. Spec is clean of technical debt and specific library references while clearly defining behavior.
specs/002-add-graph-view/contracts/graph-api.yaml
ADDED
@@ -0,0 +1,65 @@

openapi: 3.0.0
info:
  title: Document MCP Graph API
  version: 1.0.0
paths:
  /api/graph:
    get:
      summary: Retrieve graph visualization data
      operationId: getGraphData
      responses:
        '200':
          description: Successful graph data retrieval
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/GraphData'
        '500':
          description: Server error
components:
  schemas:
    GraphNode:
      type: object
      required:
        - id
        - label
      properties:
        id:
          type: string
          description: Unique identifier (note path)
        label:
          type: string
          description: Note title
        val:
          type: integer
          default: 1
          description: Node size/weight based on connectivity
        group:
          type: string
          description: Categorical group (e.g. folder)
    GraphLink:
      type: object
      required:
        - source
        - target
      properties:
        source:
          type: string
          description: Source note ID
        target:
          type: string
          description: Target note ID
    GraphData:
      type: object
      required:
        - nodes
        - links
      properties:
        nodes:
          type: array
          items:
            $ref: '#/components/schemas/GraphNode'
        links:
          type: array
          items:
            $ref: '#/components/schemas/GraphLink'
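The `GraphData` contract above can be sanity-checked without any framework. The following is an illustrative, stdlib-only sketch (the `validate_graph_payload` helper is hypothetical, not repository code; field names and types come from the schema):

```python
def validate_graph_payload(payload: dict) -> list[str]:
    """Return a list of contract violations (empty list means valid)."""
    errors = []
    # Top-level: 'nodes' and 'links' are required arrays.
    for key in ("nodes", "links"):
        if not isinstance(payload.get(key), list):
            errors.append(f"'{key}' must be a list")
    # GraphNode: 'id' and 'label' are required strings.
    for i, node in enumerate(payload.get("nodes", []) or []):
        for fld in ("id", "label"):
            if not isinstance(node.get(fld), str):
                errors.append(f"nodes[{i}].{fld} must be a string")
    # GraphLink: 'source' and 'target' are required strings.
    for i, link in enumerate(payload.get("links", []) or []):
        for fld in ("source", "target"):
            if not isinstance(link.get(fld), str):
                errors.append(f"links[{i}].{fld} must be a string")
    return errors

sample = {
    "nodes": [{"id": "notes/a.md", "label": "A", "val": 2, "group": "notes"}],
    "links": [{"source": "notes/a.md", "target": "notes/b.md"}],
}
```

A check like this is useful in the `test_graph_api.py` unit tests planned for T008, alongside full Pydantic validation.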
specs/002-add-graph-view/data-model.md
ADDED
@@ -0,0 +1,40 @@

# Data Model: Graph View

## Entities

### GraphNode

Represents a single note in the graph.

| Field | Type | Description |
|-------|------|-------------|
| `id` | `str` | Unique identifier (Note Path). |
| `label` | `str` | Display title of the note. |
| `val` | `int` | Weight/Size of the node (default: 1 + link_degree). |
| `group` | `str` | Grouping category (e.g., top-level folder). |

### GraphLink

Represents a directed connection between two notes.

| Field | Type | Description |
|-------|------|-------------|
| `source` | `str` | ID of the source note. |
| `target` | `str` | ID of the target note. |

### GraphData

The top-level payload returned by the API.

| Field | Type | Description |
|-------|------|-------------|
| `nodes` | `List[GraphNode]` | All notes in the vault. |
| `links` | `List[GraphLink]` | All resolved connections. |

## Database Mapping

- **Nodes**: Sourced from `note_metadata` table.
  - `id` <- `note_path`
  - `label` <- `title`
  - `group` <- `note_path` (parsed parent directory)
- **Links**: Sourced from `note_links` table.
  - `source` <- `source_path`
  - `target` <- `target_path`
  - Filter: `is_resolved = 1`
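The node mapping above can be sketched with plain dataclasses (the actual plan uses Pydantic models in `backend/src/models/graph.py`; this dependency-free version only illustrates the field mapping, and `node_from_row` is a hypothetical helper):

```python
from dataclasses import dataclass
from pathlib import PurePosixPath

@dataclass
class GraphNode:
    id: str         # <- note_metadata.note_path
    label: str      # <- note_metadata.title
    val: int = 1    # 1 + link degree, computed separately
    group: str = "" # top-level folder parsed from note_path

def node_from_row(note_path: str, title: str) -> GraphNode:
    """Map a note_metadata row to a GraphNode, deriving `group` from the path."""
    parent = PurePosixPath(note_path).parent
    # Root-level notes have no folder, hence an empty group.
    group = "" if str(parent) == "." else parent.parts[0]
    return GraphNode(id=note_path, label=title, group=group)
```

For example, `node_from_row("projects/x/idea.md", "Idea")` groups the note under `projects`, while a root-level `orphan.md` gets an empty group.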
specs/002-add-graph-view/plan.md
ADDED
@@ -0,0 +1,82 @@

# Implementation Plan: Interactive Graph View

**Branch**: `002-add-graph-view` | **Date**: 2025-11-25 | **Spec**: [spec.md](spec.md)
**Input**: Feature specification from `/specs/002-add-graph-view/spec.md`

**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.

## Summary

Add an interactive force-directed graph visualization to the frontend (`react-force-graph-2d`) backed by a new API endpoint (`GET /api/graph`) that serves node and link data derived from the existing SQLite index.

## Technical Context

**Language/Version**: Python 3.11+ (Backend), TypeScript/React 18 (Frontend)
**Primary Dependencies**: `react-force-graph-2d` (New), `FastAPI`, `sqlite3`
**Storage**: SQLite (`note_metadata`, `note_links` tables)
**Testing**: `pytest` (Backend), Manual/E2E (Frontend)
**Target Platform**: Web Browser (Canvas/WebGL support required)
**Project Type**: Web Application (Full Stack)
**Performance Goals**: Render <1000 nodes in <2 seconds.
**Constraints**: Must support Light/Dark mode dynamically.
**Scale/Scope**: Personal knowledge base scale (hundreds to thousands of notes).

## Constitution Check

*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*

- **Library-First**: N/A (Application feature).
- **CLI Interface**: N/A (UI feature).
- **Test-First**: Backend endpoint will be tested via `pytest`.
- **Simplicity**: Using a proven library (`react-force-graph`) to avoid complex custom D3 implementations.

## Project Structure

### Documentation (this feature)

```text
specs/002-add-graph-view/
├── plan.md              # This file
├── research.md          # Phase 0 output
├── data-model.md        # Phase 1 output
├── quickstart.md        # Phase 1 output
├── contracts/           # Phase 1 output
│   └── graph-api.yaml
└── tasks.md             # Phase 2 output
```

### Source Code (repository root)

```text
backend/
├── src/
│   ├── api/
│   │   └── routes/
│   │       └── graph.py          # New endpoint
│   ├── models/
│   │   └── graph.py              # New Pydantic models
│   └── services/
│       └── indexer.py            # Update: add get_graph_data()
└── tests/
    └── unit/
        └── test_graph_api.py     # New tests

frontend/
├── src/
│   ├── components/
│   │   └── GraphView.tsx         # New component
│   ├── pages/
│   │   └── MainApp.tsx           # Update: Add toggle
│   ├── services/
│   │   └── api.ts                # Update: Add getGraphData()
│   └── types/
│       └── graph.ts              # New types
```

**Structure Decision**: Standard Full-Stack layout.

## Complexity Tracking

| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| N/A | | |
specs/002-add-graph-view/quickstart.md
ADDED
@@ -0,0 +1,42 @@

# Quickstart: Graph View

## Prerequisites

- Existing backend running with indexed notes.
- Frontend dependencies installed.

## Installation

1. **Frontend Dependencies**:

   ```bash
   cd frontend
   npm install react-force-graph-2d
   ```

2. **Backend Setup**:
   No extra packages needed (uses existing `sqlite3`).

## Verification Steps

1. **Start Backend**:

   ```bash
   ./start-dev.sh
   ```

2. **Open Application**:
   Navigate to `http://localhost:5173`.

3. **Switch to Graph View**:
   - Locate the "Graph" icon/button in the top toolbar (next to "New Note").
   - Click it.
   - **Verify**: The center panel should replace the note editor with a dark/light canvas showing nodes.

4. **Interact**:
   - **Hover**: Mouse over a node; see the tooltip with the note title.
   - **Drag**: Click and drag a node; it should move and pull connected nodes.
   - **Click**: Click a node; the view should switch back to "Note View" and open that specific note.
   - **Zoom**: Scroll wheel to zoom in/out.

5. **Check Unlinked Notes**:
   - Create a new note with no links.
   - Switch to Graph View.
   - **Verify**: The new note appears as a standalone node.
specs/002-add-graph-view/research.md
ADDED
@@ -0,0 +1,67 @@

# Phase 0: Research & Technical Decisions

## 1. Graph Visualization Library

**Decision**: Use `react-force-graph-2d`.

**Rationale**:

- **Performance**: Uses HTML5 Canvas/WebGL for rendering, capable of handling thousands of nodes (meeting the "personal knowledge base" scale requirement).
- **React Integration**: Native React component wrapper around `force-graph`, managing the lifecycle and updates declaratively.
- **Feature Set**: Built-in zoom/pan, auto-centering, node/link interactions (hover/click), and flexible styling.
- **Maintainability**: Widely used, active community.

**Alternatives Considered**:

- `vis-network`: Good, but heavier, and its imperative API is harder to integrate cleanly with modern React hooks.
- `d3-force` (raw): Too low-level. Would require rebuilding canvas rendering, zoom/pan logic, and interaction handlers from scratch.
- `cytoscape.js`: Powerful, but focused more on graph theory analysis; visual customization is CSS-like but sometimes more complex for "floating particle" aesthetics.

## 2. Data Structure & API

**Decision**: Flat structure with `nodes` and `links` arrays.

**Schema**:

```json
{
  "nodes": [
    { "id": "path/to/note.md", "label": "Note Title", "val": 1, "group": "folder-name" }
  ],
  "links": [
    { "source": "path/to/source.md", "target": "path/to/target.md" }
  ]
}
```

**Rationale**:

- Matches the expected input format of `react-force-graph`.
- The `val` property allows automatic node sizing based on degree (calculated on the backend or frontend). Backend calculation is preferred for caching/performance.
- `id` uses the file path to ensure uniqueness and easy mapping back to navigation events.

## 3. Theme Integration

**Decision**: Pass dynamic colors via React props, reading from CSS variables or a ThemeContext.

**Strategy**:

- The `GraphView` component will hook into the current theme (light/dark).
- Colors for background, nodes, and text will be passed to `<ForceGraph2D />`.
- **Light Mode**: White background, dark grey nodes/links.
- **Dark Mode**: `hsl(var(--background))` (usually dark), light grey nodes.
- **Groups**: Use a categorical color scale for folders (e.g., a D3 scale or a fixed palette).

## 4. Unlinked Notes

**Decision**: Include all notes in the `nodes` array, even if they have no entries in `links`.

**Physics**:

- The force simulation will naturally push unconnected nodes away from the center cluster but keep them within the viewport if a bounding box or gravity is applied.
- We will apply a weak `d3.forceManyBody` (repulsion) and a central `d3.forceCenter` to keep the "cloud" visible.

## 5. Backend Implementation

**Decision**: Add `get_graph_data` to `IndexerService`.

**Logic**:

1. Fetch all notes from `note_metadata` (id, title).
2. Fetch all links from `note_links` where `is_resolved=1`.
3. Compute link counts for node sizing (optional optimization: do this in SQL or Python).
4. Return JSON.
5. **Caching**: Use a simple in-memory cache with a short TTL (e.g., 5 minutes) or invalidation on note create/update events to ensure sub-2s response times for large vaults. *For V1, a direct SQL query is likely fast enough for <1000 notes.*
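The backend logic above can be sketched end to end against an in-memory SQLite database using the table and column names from this plan (`note_metadata`, `note_links`, `is_resolved`). This is an illustrative standalone function, not the actual `IndexerService` method:

```python
import sqlite3

def get_graph_data(conn: sqlite3.Connection) -> dict:
    """Build the graph payload: all notes as nodes, resolved links as edges."""
    notes = conn.execute("SELECT note_path, title FROM note_metadata").fetchall()
    links = conn.execute(
        "SELECT source_path, target_path FROM note_links WHERE is_resolved = 1"
    ).fetchall()
    # Node size: 1 + link degree (in-links plus out-links).
    degree: dict[str, int] = {}
    for src, tgt in links:
        degree[src] = degree.get(src, 0) + 1
        degree[tgt] = degree.get(tgt, 0) + 1
    return {
        "nodes": [
            {"id": path, "label": title, "val": 1 + degree.get(path, 0)}
            for path, title in notes
        ],
        "links": [{"source": s, "target": t} for s, t in links],
    }

# Demo: orphan notes stay in `nodes`; unresolved links are excluded.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE note_metadata (note_path TEXT PRIMARY KEY, title TEXT);
    CREATE TABLE note_links (source_path TEXT, target_path TEXT, is_resolved INTEGER);
    INSERT INTO note_metadata VALUES ('a.md', 'A'), ('b.md', 'B'), ('orphan.md', 'Orphan');
    INSERT INTO note_links VALUES ('a.md', 'b.md', 1), ('a.md', 'missing.md', 0);
""")
graph = get_graph_data(conn)
```

Note the orphan note receives `val = 1` (no connections), matching the "unlinked notes must remain visible" requirement.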
specs/002-add-graph-view/spec.md
ADDED
@@ -0,0 +1,87 @@

# Feature Specification: Interactive Graph View

**Feature**: 002-add-graph-view
**Status**: Draft
**Created**: 2025-11-25

## 1. Summary

An interactive, force-directed graph visualization of the note vault that displays notes as nodes and wikilinks as connections. This view provides users with a high-level understanding of their knowledge base's structure and an alternative method for navigation.

## 2. Problem Statement

**Context**: Currently, users browse notes via a linear directory tree or search results.
**Problem**: These views do not reveal the structural relationships, clusters, or connectivity density of the knowledge base. Users cannot easily see which notes are central "hubs" or which are isolated "orphans."
**Impact**: Reduces the ability to maintain a well-connected knowledge garden and limits discovery of related concepts.

## 3. Goals & Non-Goals

### Goals

- Provide a visual representation of the vault's link structure.
- Enable intuitive navigation by clicking on nodes in the graph.
- Highlight important notes through visual properties (e.g., node size based on connectivity).
- Identify unlinked (orphan) notes visually.

### Non-Goals

- Full 3D visualization (2D is sufficient for V1).
- Complex graph editing (e.g., creating links by dragging lines between nodes).
- Advanced graph analytics (centrality metrics beyond simple degree).

## 4. User Scenarios

### Scenario 1: Structural Overview

**User**: A researcher managing a large knowledge base.
**Action**: Clicks the "Graph View" toggle button in the application toolbar.
**Result**: The note editor is replaced by a full-panel canvas showing a web of connected nodes. The user pans and zooms to explore different clusters of notes, identifying a dense cluster related to "Project X."

### Scenario 2: Visual Navigation

**User**: Looking for a specific core concept note.
**Action**: Identifies a large node in the center of the graph (indicating many connections), hovers over it to confirm the title "Core Concepts," and clicks it.
**Result**: The application switches back to the standard note view, displaying the "Core Concepts" note.

### Scenario 3: Orphan Identification

**User**: Wants to improve note connectivity.
**Action**: Opens the graph view and looks for small nodes floating unconnected at the periphery of the main cluster.
**Result**: Identifies an isolated note, clicks it to open, and adds links to connect it to the rest of the graph.

## 5. Functional Requirements

### 5.1 Graph Visualization

- **Nodes**: Represent individual notes.
- **Edges**: Represent resolved wikilinks between notes.
- **Node Sizing**: Nodes must scale dynamically based on their number of connections (link degree); heavily linked notes appear larger.
- **Unlinked Notes**: Notes with no connections must be visible, floating freely within the simulation (not hidden).
- **Theme Compatibility**: The graph background and element colors must adapt to the application's current theme (Light/Dark mode).

### 5.2 Interaction

- **Navigation**: Clicking a node must activate that note in the main view and switch away from the graph.
- **Controls**: Users must be able to pan the canvas and zoom in/out.
- **Hover Details**: Hovering over a node must display a tooltip with the note's title.
- **Physics**: Nodes should naturally repel each other to reduce overlap, with links acting as springs to hold connected notes together.

### 5.3 UI Integration

- **Access Control**: A toggle control (e.g., "Graph" vs. "Note") must be available in the main application toolbar.
- **Persistence**: The graph view should retain its state (zoom level, position) transiently while the app is open, if possible, or reload quickly.

### 5.4 Data Source

- **Graph Data**: The system must generate a graph payload containing:
  - Nodes: ID (path), Label (title), Size metric (link count), Grouping (folder).
  - Links: Source, Target.
- **Folder Grouping**: Nodes should ideally be visually distinct (e.g., by color) based on their top-level folder or category.

## 6. Success Criteria

1. **Load Time**: Graph renders with <2 seconds latency for a vault of up to 1,000 notes.
2. **Visual Clarity**: Unlinked notes are clearly distinguishable from the main connected component.
3. **Navigation Accuracy**: Clicking a node opens the correct corresponding note 100% of the time.
4. **Theme Consistency**: Switching between light and dark modes updates the graph colors immediately or upon next render, without requiring a reload.

## 7. Assumptions & Dependencies

- **Browser Support**: The user's environment supports WebGL or HTML5 Canvas for performant rendering.
- **Data Volume**: The initial implementation target is personal knowledge bases (hundreds to low thousands of notes), not enterprise scale (millions).
- **Link Resolution**: Only "resolved" links (links pointing to existing notes) generate edges.

## 8. Questions & Clarifications

*(None required at this stage. Standard force-directed graph behavior is assumed for layout logic.)*
specs/002-add-graph-view/tasks.md
ADDED
@@ -0,0 +1,75 @@

# Tasks: Interactive Graph View

**Feature**: 002-add-graph-view
**Status**: Pending
**Spec**: [spec.md](spec.md)

## Phase 1: Setup (Project Initialization)

*Goal: Install dependencies and define core data structures.*

- [ ] T001 [P] Install `react-force-graph-2d` dependency in frontend/package.json
- [ ] T002 [P] Define Pydantic models (`GraphNode`, `GraphLink`, `GraphData`) in backend/src/models/graph.py
- [ ] T003 [P] Define TypeScript interfaces (`GraphNode`, `GraphLink`, `GraphData`) in frontend/src/types/graph.ts

## Phase 2: Foundational Tasks

*Goal: Establish core backend logic required for all user stories.*

- [ ] T004 Update `IndexerService` with `get_graph_data()` method in backend/src/services/indexer.py
- [ ] T005 Implement `GET /api/graph` endpoint in backend/src/api/routes/graph.py
- [ ] T006 Register graph router in backend/src/api/main.py
- [ ] T007 [P] Add `getGraphData` function to frontend/src/services/api.ts
- [ ] T008 Create unit tests for graph API in backend/tests/unit/test_graph_api.py

## Phase 3: User Story 1 - Structural Overview

*Goal: Visualize the note vault structure as an interactive graph.*

- [ ] T009 [US1] Create `GraphView` component with basic force-directed graph in frontend/src/components/GraphView.tsx
- [ ] T010 [US1] Integrate `getGraphData` hook into `GraphView` to load real data in frontend/src/components/GraphView.tsx
- [ ] T011 [US1] Add "Graph View" toggle button and conditional rendering logic in frontend/src/pages/MainApp.tsx
- [ ] T012 [US1] Style the graph container to occupy the full center panel in frontend/src/pages/MainApp.tsx

## Phase 4: User Story 2 - Visual Navigation

*Goal: Enable navigation from the graph to specific notes.*

- [ ] T013 [US2] Implement `onNodeClick` handler to trigger `onSelectNote` callback in frontend/src/components/GraphView.tsx
- [ ] T014 [US2] Verify node hover tooltips display note titles correctly in frontend/src/components/GraphView.tsx
- [ ] T015 [US2] Ensure switching back from Graph View restores the standard Note View in frontend/src/pages/MainApp.tsx

## Phase 5: User Story 3 - Orphan Identification & Visuals

*Goal: Highlight node connectivity and grouping.*

- [ ] T016 [US3] Implement node sizing logic (`val` based on link count) in backend/src/services/indexer.py
- [ ] T017 [US3] [P] Implement theme support (dynamic Light/Dark colors) in frontend/src/components/GraphView.tsx
- [ ] T018 [US3] [P] Implement categorical node coloring based on `group` (folder) in frontend/src/components/GraphView.tsx

## Final Phase: Polish

*Goal: Ensure stability and good UX.*

- [ ] T019 Implement loading state spinner in `GraphView` while fetching data in frontend/src/components/GraphView.tsx
- [ ] T020 Implement error handling banner in `GraphView` in frontend/src/components/GraphView.tsx

## Dependencies

1. **Setup (T001-T003)**: Must be done first.
2. **Foundational (T004-T008)**: Depends on Setup. Required for all US phases.
3. **US1 (T009-T012)**: Depends on Foundational.
4. **US2 (T013-T015)**: Depends on US1 (needs GraphView component).
5. **US3 (T016-T018)**: Partially parallel with US2, but T016 (backend sizing) is backend-only. T017/T018 modify GraphView.
6. **Polish (T019-T020)**: Can be done anytime after US1.

## Parallel Execution Strategy

- **Backend vs Frontend**: Once the models (T002/T003) are agreed upon, T004-T006 (Backend) can run in parallel with T007-T009 (Frontend).
- **Within US3**: T017 (Theme) and T018 (Group Colors) are independent UI tasks.

## Implementation Strategy

1. **MVP**: Complete Phases 1, 2, and 3 (US1). This delivers the core value: seeing the graph.
2. **Interaction**: Phase 4 adds the critical navigation workflow.
3. **Visuals**: Phase 5 enhances utility (orphans, importance).