Commit bff28124 authored by jwf

Initial import of investment-risk-management project


# Project Scaffold Skill Intent
Detailed model: see `docs/feedback-layer-model.md`.
## Canonical Purpose
This skill exists to help an AI agent turn a specific repository into a
project-specific autonomous development harness.
The goal is not "generate some docs" and not "self-test the skill in
isolation." The goal is to give the agent an environment where it can:
- understand what the project is trying to build
- see the rules it must not violate
- get fast feedback when it breaks something
- tell the difference between a passing edit and a correct edit
- keep iterating without losing context or maintainability
If a future change improves template coverage but does not improve the
project's feedback loops, constraints, verification, or recovery ability,
that change is not aligned with this skill's purpose.
## What The Harness Must Provide
For each target project, the scaffolded harness should make the following
concrete:
1. Project intent and current scope
2. Architecture and implementation constraints
3. Quality gates the agent can run mechanically
4. Runtime or behavior visibility when something fails
5. Milestone and iteration boundaries
6. Recovery paths when an edit goes wrong
7. Documentation that stays useful during long-running work
The exact artifacts can vary by project. The contract does not.
## Core Principle
Do not hardcode project answers into the skill.
Instead, the skill should:
- discover what is true from the repository
- ask the human only for decisions that cannot be safely inferred
- encode the resulting rules, checks, and operating context back into the repo
- prefer machine-checkable feedback over prose promises
This means prompts, manifests, scripts, checks, and operating docs should act as
the project's working harness, not as generic paperwork.
## Operating Model
The operating model is:
1. Before development, discover what is true in the repo and ask the human only
for decisions that materially shape the harness or priorities.
2. During development, rely primarily on repo-encoded constraints and
environment feedback, not repeated human intervention.
3. When the agent fails, the harness should help it see what failed, why it
failed, and what evidence should guide the next attempt.
Human input is for ambiguity, priorities, and irreversible choices. Routine
iteration should be driven by deterministic checks, runtime evidence, and
project-local rules.
## Success Criteria
The skill is successful when a downstream agent can enter a repo and answer:
- What am I building right now?
- What files or layers am I allowed to change?
- What commands tell me whether I broke something?
- What constraints must my code obey?
- How do I know this milestone is actually done?
- If I get stuck or regress quality, what is the recovery path?
## Non-Goals
This repository is not primarily optimized for:
- maximizing the number of generated files
- enforcing one fixed document bundle on every project
- writing beautiful process prose with weak enforcement
- proving the skill repo can grade itself while target projects remain weakly instrumented
## Drift Check
When making changes to this skill, prefer the option that makes the target
project's autonomous development environment stronger.
Use this document as the tie-breaker when README text, template choices, eval
logic, or sub-skill behavior start drifting toward generic scaffolding instead
of project-specific harness construction.
---
name: project-scaffold
description: >
Project-specific autonomous development harness builder. Bootstraps any
project (greenfield or existing) with an adaptive harness for continuous,
autonomous, high-quality development. Discovers what the project needs,
confirms with the human, generates only the confirmed artifacts, then
operates continuously while strengthening the project's feedback loops,
constraints, and recovery paths. Drop this skill into any repo's
.agents/skills/ to activate.
---
# Project Scaffold — Full-Loop AI Development Operating System
## Overview
Canonical purpose: see [INTENT.md](INTENT.md).
This skill turns any codebase into a **project-specific autonomous development
harness** for a continuously operating AI agent. It is not just a one-time
scaffold. It is an ongoing development loop with mechanical quality
enforcement, explicit constraints, and feedback paths that help the agent tell
when it changed the wrong thing.
**What it does:**
1. **Discovers** the project's structure, tech stack, quality tools, and existing docs
2. **Assesses** what harness artifacts (docs, scripts, sub-skills) the project needs
3. **Confirms** the harness shape with the human — nothing is generated without agreement
4. **Generates** only the confirmed artifacts from templates (adapted, not rigid)
5. **Operates** continuously: iterate > verify > record > audit/promote
6. **Evolves** the harness itself through use — docs, scripts, constraints, and a small operational prompt set improve over time
### Two modes
| Scenario | What happens |
|---|---|
| **Greenfield** (no docs exist) | Discover > assess > confirm > generate from templates |
| **Messy-existing** (docs exist but disorganized) | Audit > diagnose > confirm > reorganize > fill gaps |
### Design principles
1. **Repo is the single source of truth** — anything not in the repo doesn't exist for the agent
2. **AGENTS.md is a table of contents, not an encyclopedia** — max 150 lines, points to deeper docs
3. **Rules in code, not just prose** — every constraint that can be automated, is automated
4. **Progressive disclosure** — agent reads this file first, then navigates to phase details as needed
5. **Prefer boring technology** — composable, stable, well-represented in training data
6. **Mechanical enforcement over trust** — linters, scripts, CI checks > documentation promises
7. **Continuous operation** — scaffold is Phase 0-4; daily work is Phase 5 (runs forever)
8. **Safe rollback only** — revert only files from the active iteration; never use repository-wide rollback commands
9. **Prompt first, discovery second, enforcement last** — never encode project-specific answers; only encode universal safety and verification contracts
10. **Harness is a living system** — documents, scripts, constraints, and the operational prompt set evolve through use, not just during scaffold setup
11. **Configuration emerges from discovery and dialogue** — do not hardcode project-specific answers into templates; derive them from Phase 0 discovery and Phase 1 human alignment
### Uncertainty protocol (applies to ALL phases and sub-skills)
When the agent encounters something it is not sure about:
1. **Try to discover the answer** from the repo first (read code, read docs, run commands)
2. If a candidate answer is found, **tag it as `inferred`** and continue,
but confirm with the human at the next natural interaction point
3. If no answer can be inferred, **ask the human immediately**, do not guess
4. If the cost of guessing wrong is high (affects architecture, security, hard
constraints, or would require significant rework), **ask the human even if
you have a candidate answer**
5. **Record every "ask the human" event** — what was asked, why, and the answer.
Use this to improve the harness so the same question doesn't arise again
(add a doc, add a constraint, add context to AGENTS.md)
This protocol applies uniformly to scaffold generation (Phase 0-4) and continuous
operation (Phase 5).
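
Point 5 above can be made mechanical. As a sketch only, an agent could append each "ask the human" event to a project-local NDJSON log; the log path and field names here are illustrative assumptions, not part of the skill contract:

```javascript
// Record one "ask the human" event as an NDJSON line so the harness can
// later mine recurring questions and turn them into docs or constraints.
// The log path and event fields are hypothetical examples.
import { appendFileSync } from "node:fs";

export function recordHumanQuestion(logPath, { question, reason, answer }) {
  const event = {
    kind: "human-question",
    question,
    reason, // why the repo could not answer this on its own
    answer, // what the human decided
    at: new Date().toISOString(),
  };
  appendFileSync(logPath, JSON.stringify(event) + "\n");
  return event;
}
```

A periodic audit can then count repeated questions and propose the doc or constraint that would eliminate them.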
---
## Phase Navigation
Each phase has detailed instructions in a separate file under `phases/`.
Read only the phase you are currently executing — do not pre-load all phases.
| Phase | Purpose | Details |
|---|---|---|
| **Phase 0: Discovery** | Explore the project autonomously, assess harness needs | `phases/phase-0-discovery.md` |
| **Phase 1: Alignment** | Ask the human for decisions, confirm harness shape | `phases/phase-1-alignment.md` |
| **Phase 2: Milestones** | Decompose the path to North Star into milestones | `phases/phase-2-milestones.md` |
| **Phase 3: Generation** | Generate only the confirmed artifacts | `phases/phase-3-generation.md` |
| **Phase 4: Validation** | Mechanical self-check, then present to user | `phases/phase-4-validation.md` |
| **Phase 5: Operation** | Continuous loop: iterate, audit, promote, evolve harness | `phases/phase-5-operation.md` |
**Anti-patterns**: `phases/anti-patterns.md`
**Self-test mode** (fixture runs): `evals/SELF-TEST-MODE.md`
### Phase flow
```
Phase 0 (autonomous) --> Phase 1 (human) --> Phase 2 (milestones)
--> Phase 3 (generate) --> Phase 4 (validate) --> Phase 5 (forever)
```
The agent pauses for human input only at:
- **Phase 1**: core decisions, inference confirmation, harness shape
- **Phase 4**: final review before entering continuous operation
- **During Phase 5**: only when the Uncertainty Protocol or L4 escalation triggers
---
## Key Concepts (quick reference)
### Harness Needs Assessment (Phase 0k)
Instead of always generating a fixed bundle of docs, scripts, and operational prompts,
the agent assesses what the project actually needs based on discovery.
Document roles, scripts, and operational prompts are each marked as
`needed` / `recommended` / `not-needed-yet`. The human confirms in Phase 1d.
The confirmed result is then encoded in two machine-readable contracts:
`docs/harness-manifest.json` for artifact inventory + execution contract, and
`docs/feedback-profile.json` for feedback surfaces + gaps + business-flow coverage.
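As a shape illustration only, a minimal `docs/harness-manifest.json` might look like the following. Every field name here is an assumption; the authoritative shape comes from `templates/harness-manifest.json.tmpl`:

```json
{
  "artifacts": [
    { "role": "agents_doc", "path": "AGENTS.md", "status": "generated" },
    { "role": "plan_doc", "path": "docs/plan.md", "status": "generated" }
  ],
  "scripts": ["scripts/validate-docs.ps1", "scripts/quality-gates.ps1"],
  "sub_skills": ["iterate", "quality-audit", "milestone-promote"],
  "execution_contract": {
    "primary_shell": "powershell",
    "quality_gate_command": "scripts/quality-gates.ps1"
  }
}
```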
### Adaptive Quality Dimensions (Phase 1d + quality-audit)
Quality scoring dimensions are NOT a fixed 5x20 grid. They are discovered in
Phase 0, confirmed with the human in Phase 1d, and recorded in project docs.
Dimensions can be added, removed, or re-weighted during operation.
### Harness Evolution (Phase 5)
The harness is not frozen after Phase 4. During continuous operation:
- After every L2/L3 recovery: ask what was missing from the environment
- After every milestone: review whether the harness needs an upgrade
- After every quality-audit: check harness health
- Record all harness improvements in the Decision Log
### M0 Adaptive (Phase 2)
M0 is not always "Quality Infrastructure." Its content depends on what Phase 0
discovered. The invariant is: **before any feature work, at least one quality
gate command must be runnable.**
---
## Templates and Sub-skills
Templates in `templates/` are **reference starting points**, not rigid structures.
Adapt, extend, or omit sections based on what was discovered and confirmed.
### Available templates
| Role | Template |
|---|---|
| `agents_doc` | `templates/AGENTS.md.tmpl` |
| `project_spec_doc` | `templates/project_spec.md.tmpl` |
| `prompt_doc` | `templates/prompt.md.tmpl` |
| `plan_doc` | `templates/plan.md.tmpl` |
| `implement_doc` | `templates/implement.md.tmpl` |
| `documentation_doc` | `templates/documentation.md.tmpl` |
| `failure_taxonomy_doc` | `templates/taxonomy.md.tmpl` |
| `compaction_doc` | `templates/compaction-strategy.md.tmpl` |
| `harness_manifest` | `templates/harness-manifest.json.tmpl` |
| `feedback_profile` | `templates/feedback-profile.json.tmpl` |
### Available operational sub-skills
| Skill | Purpose |
|---|---|
| `iterate` | Execute one iteration cycle within current milestone |
| `quality-audit` | Full quality audit + tech debt scan + quality score |
| `milestone-promote` | Safely promote from one milestone to the next |
### Tools
| Tool | Purpose |
|---|---|
| `tools/render-scaffold.mjs` | Mechanically render templates from a values JSON file |
---
## Usage
To activate this skill in a new project:
```text
PowerShell:
  Copy-Item -Recurse -Force project-scaffold-skill "<your-project>/.agents/skills/project-scaffold"
Unix:
  cp -R project-scaffold-skill/ <your-project>/.agents/skills/project-scaffold/
```
Then tell Codex/Claude:
```text
Run the $project-scaffold skill to set up this repo for autonomous development.
```
The AI agent will execute Phase 0-4 automatically, only pausing for Phase 1
(human decisions) and Phase 4 (final review). After handoff, the agent
enters Phase 5 and operates continuously using the installed operational sub-skills.
# Feedback Layer Model
This document defines how `project-scaffold-skill` should think about
constraints, feedback, and business-flow verification when building a
project-specific autonomous development harness.
Use this document together with `INTENT.md`.
## Why This Exists
The harness is not complete when it has produced a few planning documents.
It is complete only when the target project gives the agent enough feedback to:
- know which layer it is changing
- know which boundaries it must not violate
- know what commands or tools prove the change is valid
- know what evidence to inspect when the change fails
- know how to recover without guessing
The feedback layer is therefore part of the environment, not just part of the
documentation.
## Canonical Machine-Readable Contracts
The skill should produce two complementary machine-readable contracts:
- `docs/harness-manifest.json`
This answers: what artifacts were intentionally generated, which scripts and
sub-skills exist, and what is the repo's execution contract?
- `docs/feedback-profile.json`
This answers: which feedback surfaces exist, how strong are they, which
business flows are covered, where does evidence live, and which gaps still
need human confirmation or future work?
Do not overload one file with both responsibilities. Artifact inventory and
feedback-layer state evolve together, but they are not the same thing.
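For illustration, a minimal `docs/feedback-profile.json` might record surfaces, flows, and gaps like this. All field names are assumptions; the real shape comes from `templates/feedback-profile.json.tmpl`:

```json
{
  "surfaces": [
    { "kind": "static", "command": "npm run lint", "strength": "strong" },
    { "kind": "contract", "command": "npm run typecheck", "strength": "strong" },
    { "kind": "runtime", "command": "npm run smoke", "strength": "partial" }
  ],
  "business_flows": [
    { "name": "primary-user-flow", "covered_by": ["npm run smoke"], "status": "inferred" }
  ],
  "gaps": ["no end-to-end flow coverage yet"]
}
```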
## Reading The Architecture Diagram
The diagram discussed in design review implies a layered system, for example:
- `Providers`
- `Service`
- `Runtime`
- `UI`
- `Types`
- `Config`
- `Repo`
- `Utils`
- `App Wiring + UI`
The exact layer names vary by project. The important point is that the harness
must understand:
1. Which layers exist
2. Which directions of dependency are allowed
3. Which user or business flows cross those layers
4. Which feedback surface should catch failure at each point
The goal is not only "the app runs." The goal is "the agent can tell which
layer is wrong, which contract was broken, and which evidence proves it."
## The Feedback Layer Is Multi-Surface
The feedback layer is not one script and not one document. It is a set of
surfaces around the codebase.
### 1. Static Constraint Feedback
This catches illegal structure before runtime.
Examples:
- dependency direction rules
- forbidden imports
- directory ownership rules
- architecture boundary checks
- schema drift checks
Typical mechanisms:
- lint rules
- import/dependency graph tooling
- architecture tests
- manifest or structure validation
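A dependency-direction rule can be reduced to a pure check over import edges. This is a sketch under assumed layer names; a real harness would extract the edges from the project's actual import graph:

```javascript
// Given import edges (from-layer -> to-layer) and an allow-list of legal
// dependencies per layer, return the edges that violate the architecture.
// The layer names and allow-list below are hypothetical examples.
export function findLayerViolations(edges, allowed) {
  return edges.filter(
    ({ from, to }) => from !== to && !(allowed[from] ?? []).includes(to)
  );
}
```

Wiring this into a lint step turns "UI must not import the repo layer" from prose into a mechanical rejection.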
### 2. Contract Feedback
This catches mismatches between layers.
Examples:
- wrong request or response shape
- broken repo or provider contract
- incompatible domain types
- invalid config shape
Typical mechanisms:
- typecheck
- schema validation
- API contract tests
- integration tests between layers
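A contract check can be as small as a shape validator run at the boundary. The fields below are a hypothetical example contract, not a real project schema:

```javascript
// Validate that a response object matches the expected cross-layer shape
// before it is handed to the next layer. Returns a list of violations so
// the caller can report all mismatches at once.
export function checkResponseContract(response) {
  const errors = [];
  if (typeof response.id !== "string") errors.push("id must be a string");
  if (!Number.isFinite(response.amount)) errors.push("amount must be a finite number");
  if (!["pending", "settled"].includes(response.status)) errors.push("status out of range");
  return errors;
}
```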
### 3. Runtime Feedback
This catches failures that only appear while the program is running.
Examples:
- runtime exceptions
- browser console errors
- failed network calls
- bad state transitions
- missing environment configuration
Typical mechanisms:
- structured logs
- local traces
- screenshots
- browser automation traces
- smoke tests
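Structured logs are the cheapest of these surfaces to add early. A minimal sketch, with an illustrative event shape rather than a fixed schema:

```javascript
// Emit one machine-readable log line per runtime event so a failing run
// leaves evidence the agent can grep and parse. The field names here are
// an assumption; any consistent JSON-lines shape works.
export function logEvent(stream, level, message, context = {}) {
  const event = { at: new Date().toISOString(), level, message, ...context };
  stream.write(JSON.stringify(event) + "\n");
  return event;
}
```

In practice `stream` would be `process.stderr` or a file stream; the agent then inspects the lines instead of guessing from a stack trace.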
### 4. Business-Flow Feedback
This catches "the code passes, but the product behavior is still wrong."
Examples:
- a primary user path appears to succeed but violates an access rule
- a multi-step workflow completes but leaves downstream state inconsistent
- a successful response is returned but the intended side effect does not happen
- a command exits successfully but the observable result is still wrong
Typical mechanisms:
- scenario tests
- acceptance tests
- end-to-end tests
- fixture-based integration flows
- golden output tests
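The common thread in these mechanisms is verifying the observable side effect, not just the success signal. A sketch of that pattern, with a hypothetical step shape:

```javascript
// Run one business-flow step, then independently verify its side effect.
// A step "passes" only if both the action succeeded AND the observable
// result is correct. The step interface here is a hypothetical example.
export function verifyFlowStep(step) {
  const ok = step.run() === true;
  const effectOk = step.checkEffect() === true;
  return { ok, effectOk, pass: ok && effectOk };
}
```

This is exactly the case of "a command exits successfully but the observable result is still wrong": `ok` is true, `effectOk` is false, and the step fails.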
## Constraint Encoding Model
Constraints should be encoded in three layers:
### 1. Explanation Layer
Used so the agent understands intent and priorities.
Typical artifacts:
- `AGENTS.md`
- `docs/project_spec.md`
- `docs/prompt.md`
### 2. Contract Layer
Used so the repo has machine-readable or mechanically checkable agreements.
Typical artifacts:
- `docs/harness-manifest.json`
- `docs/feedback-profile.json`
- schemas
- API contracts
- directory or layer dependency declarations
- explicit command contracts
### 3. Enforcement Layer
Used so the environment can reject bad changes.
Typical artifacts:
- `scripts/quality-gates.*`
- `scripts/validate-docs.*`
- lint
- typecheck
- unit tests
- integration tests
- scenario smoke tests
- end-to-end tests
The skill should prefer turning important constraints into enforcement. If a
constraint exists only in prose, the harness is weak.
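A minimal enforcement runner can execute the gate commands in order and stop at the first failure. This is a sketch; a real `scripts/quality-gates.*` would read its gate list from `docs/harness-manifest.json` rather than hardcode it:

```javascript
// Run each quality gate in sequence; stop and report the first failure so
// the agent gets one concrete broken gate to fix, not a wall of noise.
import { spawnSync } from "node:child_process";

export function runGates(gates) {
  for (const gate of gates) {
    const result = spawnSync(gate.cmd, gate.args ?? [], {
      stdio: "inherit",
      shell: false,
    });
    if (result.status !== 0) {
      return { ok: false, failed: gate.name, status: result.status };
    }
  }
  return { ok: true };
}
```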
## Business Flow Verification Is The Hard Part
Business-flow validation is usually the least inferable part of the harness.
Static structure can often be discovered from code. Business truth often cannot.
Because of that, the skill should separate business-flow inputs into three
classes:
### 1. Repo-Confirmed
Safe to trust because it already exists in the repository.
Examples:
- requirements documents
- product specs
- API contracts
- existing user flows in tests
- design files linked from the repo
### 2. Inferred
Reasonable to propose, but not safe to treat as final truth.
Examples:
- likely primary route based on router structure
- likely layer boundaries based on imports
- likely core workflow shape based on file and endpoint naming
These should be tagged as inferred until confirmed.
### 3. Human-Confirmed
Must be explicitly asked when getting it wrong would distort the harness.
Examples:
- the top 1 to 3 user-critical flows
- failure modes that are unacceptable in production
- the real milestone acceptance criteria
- external systems that must not be assumed
- role, permission, billing, compliance, or approval rules
## When The Skill Should Ask The Human
The skill should ask during harness setup when business truth is needed to
design the feedback layer well. It should not keep asking during routine
implementation if that truth could have been captured once and written into the
repo.
Good setup-time questions:
- What are the most critical user or operator flows?
- Which failure is most expensive or unacceptable?
- What makes milestone one genuinely done?
- Which systems, permissions, or environments must not be assumed?
Bad routine questions:
- Should I run tests again?
- Is this error probably important?
- Should I check the logs?
Those should be answered by the harness itself.
## Empty Project With Requirements Docs
An empty repo with requirements docs can still get a strong feedback layer.
The skill should:
1. Read the requirements docs from the repo
2. Extract likely system shape, primary users, and key flows
3. Identify which parts are confirmed vs inferred vs unknown
4. Ask the human only for the business truths that cannot be safely inferred
5. Scaffold the minimum runnable and verifiable environment
6. Encode the accepted flows and constraints back into repo artifacts
For an empty project, this means the feedback layer is often bootstrapped from
the intended behavior, not from existing code.
Examples:
- web app: minimal route, lint, typecheck, browser smoke, trace path
- API service: minimal endpoint, request smoke, schema checks, logs
- CLI tool: minimal command, golden output test, exit-code checks
- library: public API example, unit tests, integration sample
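For the CLI case, a golden-output plus exit-code check can be sketched as below. To stay self-contained the example invokes an inline `node -e` program; a real harness would run the project's actual entry point and keep the golden string in a fixture file:

```javascript
// Run a command and check both its exit code and its exact stdout against
// a stored "golden" string. The argv and expected output passed by the
// caller are project-specific; nothing here is hardcoded.
import { spawnSync } from "node:child_process";

export function runCliGolden(argv, expectedStdout) {
  const result = spawnSync(process.execPath, argv, { encoding: "utf8" });
  return {
    exitOk: result.status === 0,
    goldenOk: result.stdout === expectedStdout,
  };
}
```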
## What "Done" Looks Like
The harness is strong enough when a downstream agent can answer:
- Which layer am I editing?
- Which boundaries must not be crossed?
- Which command or test proves this change is valid?
- Where do I look when this fails?
- Which business flow does this change affect?
- Which acceptance rule determines whether the milestone is actually complete?
If the agent still has to guess these answers, the feedback layer is incomplete.
# Self-Test Mode
If the repo root contains `.fixture-meta.json` and `.fixture-seed-prompt.txt`,
treat the run as a **fixture self-test**.
In self-test mode:
1. Read `.fixture-seed-prompt.txt` before asking any questions.
2. Treat the approved design summary and Phase 1 answers from the fixture input
as already approved by the human, unless a safety-critical ambiguity remains.
3. Do NOT open a fresh brainstorming or design-approval loop.
4. Do NOT read unrelated meta-skills such as `brainstorming` during the fixture run.
The approved design summary already satisfies that requirement.
5. For greenfield fixture runs, install only the manifest-declared shipped sub-skills by copying them as-is.
Do NOT read every sub-skill file before generation unless validation later proves
one of them must be inspected to fix a concrete failure.
6. Complete **Phase 0 through Phase 4 only**. Do NOT enter Phase 5.
7. Keep context tight:
- Read only the templates needed for the next concrete generation step.
- If both `.fixture-scaffold-values.json` and `tools/render-scaffold.mjs` are present, do NOT open `.tmpl` files before the first renderer invocation.
- Prefer generating core docs/scripts first, then validating, before inspecting optional details.
- Do NOT recursively enumerate every file under `.agents/skills/project-scaffold/` during fixture discovery.
Treat the installed skill bundle as trusted input; open specific templates by path when needed.
- Do NOT exhaustively dump every template file before writing if the fixture
constraints already determine the scaffold shape.
8. Use fixture runs to test scaffold behavior, not to invent project features.
## Self-test fast path for greenfield fixtures
If self-test mode is active and Phase 0 discovery shows a greenfield repo with no
existing scaffold docs, no dependency manifests, and no runnable app surface:
1. Read only these repo files first:
- `README.md`
- `.fixture-meta.json`
- `.fixture-seed-prompt.txt`
- `.fixture-scaffold-values.json` (if present)
2. If `.fixture-meta.json` contains a `template_read_plan` or `generation_order`,
follow those lists directly instead of discovering your own read order.
3. Write the Discovery Report from those inputs plus the repo root listing.
4. Record the approved Phase 1 answers from the fixture input without asking again.
5. Decompose milestones immediately, with the initial milestone focused on the missing verification capability and current repo constraints.
6. Generate the scaffold in this order:
- `AGENTS.md`
- `docs/project_spec.md`
- `docs/plan.md`
- `docs/prompt.md`
- `docs/implement.md`
- `docs/documentation.md`
- `docs/failures/taxonomy.md`
- `docs/compaction-strategy.md`
- `docs/feedback-profile.json`
- `docs/harness-manifest.json`
- the primary `scripts/validate-docs.*` and `scripts/quality-gates.*` pair for the repo's primary shell
- install only the sub-skills declared in `docs/harness-manifest.json` into `.agents/skills/`
7. Only after those files are written, run validation and inspect any template or
sub-skill source needed to fix a concrete validation failure.
If `.agents/skills/project-scaffold/tools/render-scaffold.mjs` is present, prefer:
1. Start from `.fixture-scaffold-values.json` when it exists; otherwise create one scaffold values JSON file with the resolved placeholders.
2. Run the renderer once to materialize the manifest-declared docs/scripts.
3. Validate and patch only the concrete failures that remain.
4. Treat any `template_read_plan` from fixture metadata as a fallback-only recovery path after a concrete render or validation failure, not as a required pre-render checklist.
This fast path exists to keep fixture self-tests focused on generation and validation,
not on redundant exploration of the installed skill bundle.
## Self-test fast path for messy-existing fixtures
If self-test mode is active and the repo already contains scaffold-like docs
such as `AGENTS.md`, `docs/plan.md`, or `docs/prompt.md`:
1. Read the existing repo evidence before generating replacements:
- `README.md`
- `AGENTS.md` (if present)
- `docs/plan.md` (if present)
- `docs/prompt.md` (if present)
- `.fixture-meta.json`
- `.fixture-seed-prompt.txt`
2. Classify those docs as `healthy`, `salvageable`, `stale`, or `redundant`
before opening templates or running the renderer.
3. If a doc is `stale`, archive the old version into `docs/archive/` with a
dated filename before rendering a replacement.
4. If a doc is `healthy`, preserve it unless discovery proves it is inaccurate.
5. Only after the audit and any required archiving should you update scaffold
values, run the renderer, or patch the generated outputs.
This path exists so fixture self-tests exercise real reorganize behavior instead
of silently treating an existing repo like a blank slate.
#!/usr/bin/env node
import { appendFileSync, existsSync, mkdirSync, writeFileSync } from "node:fs";
import { join, resolve } from "node:path";
function usage() {
console.error(
[
"Usage:",
" node evals/capture-failure-case.mjs --id <id> --prompt <text> [--should-trigger true|false] [--category <name>] [--notes <text>]",
].join("\n")
);
}
function getArg(name, fallback = null) {
const index = process.argv.indexOf(name);
if (index === -1 || index === process.argv.length - 1) return fallback;
return process.argv[index + 1];
}
const id = getArg("--id");
const prompt = getArg("--prompt");
const shouldTrigger = getArg("--should-trigger", "true");
const category = getArg("--category", "captured-failure");
const notes = getArg("--notes", "");
if (!id || !prompt) {
usage();
process.exit(1);
}
const repoRoot = process.cwd();
const promptsCsvPath = join(repoRoot, "evals", "prompts.csv");
const failuresDir = join(repoRoot, "evals", "failure-cases");
const failureJsonPath = join(failuresDir, `${id}.json`);
if (!existsSync(promptsCsvPath)) {
console.error(`prompts.csv not found at ${promptsCsvPath}`);
process.exit(1);
}
mkdirSync(failuresDir, { recursive: true });
const escapedPrompt = `"${prompt.replace(/"/g, '""')}"`;
// Append one CSV row (assumes prompts.csv ends with a trailing newline;
// a leading "\n" here would produce blank rows in the file).
appendFileSync(promptsCsvPath, `${id},${shouldTrigger},${category},${escapedPrompt}\n`);
writeFileSync(
failureJsonPath,
JSON.stringify(
{
id,
should_trigger: shouldTrigger === "true",
category,
prompt,
notes,
captured_at: new Date().toISOString(),
},
null,
2
) + "\n"
);
console.log(
JSON.stringify(
{
captured: true,
prompt_csv: resolve(promptsCsvPath),
failure_case: resolve(failureJsonPath),
},
null,
2
)
);
{
"id": "empty-repo",
"description": "Blank starter repository with no code and no user-facing surface yet.",
"category": "empty-repo",
"seed_prompt": "Use the $project-scaffold skill to set up this repo for autonomous development.",
"approved_design": [
"Use the greenfield path and generate the scaffold from templates.",
"Use a PowerShell-first execution contract on Windows and generate only the manifest-declared primary script pair unless cross-platform support is explicitly required.",
"Keep M0 focused on quality infrastructure, docs, validation scripts, and sub-skill installation.",
"Do not invent application layers, deployment targets, or E2E flows because the repo has no user-facing surface yet.",
"Stop after the Phase 4 handoff summary instead of entering the continuous Phase 5 loop."
],
"template_read_plan": [
".agents/skills/project-scaffold/templates/AGENTS.md.tmpl",
".agents/skills/project-scaffold/templates/project_spec.md.tmpl",
".agents/skills/project-scaffold/templates/plan.md.tmpl",
".agents/skills/project-scaffold/templates/prompt.md.tmpl",
".agents/skills/project-scaffold/templates/implement.md.tmpl",
".agents/skills/project-scaffold/templates/documentation.md.tmpl",
".agents/skills/project-scaffold/templates/taxonomy.md.tmpl",
".agents/skills/project-scaffold/templates/compaction-strategy.md.tmpl",
".agents/skills/project-scaffold/templates/feedback-profile.json.tmpl",
".agents/skills/project-scaffold/templates/validate-docs.ps1.tmpl",
".agents/skills/project-scaffold/templates/quality-gates.ps1.tmpl",
".agents/skills/project-scaffold/templates/validate-docs.sh.tmpl",
".agents/skills/project-scaffold/templates/quality-gates.sh.tmpl"
],
"generation_order": [
"AGENTS.md",
"docs/project_spec.md",
"docs/plan.md",
"docs/prompt.md",
"docs/implement.md",
"docs/documentation.md",
"docs/failures/taxonomy.md",
"docs/compaction-strategy.md",
"docs/feedback-profile.json",
"docs/harness-manifest.json",
"primary scripts/validate-docs.* and scripts/quality-gates.* pair from the manifest",
".agents/skills/iterate/SKILL.md",
".agents/skills/quality-audit/SKILL.md",
".agents/skills/milestone-promote/SKILL.md"
],
"phase1_answers": {
"north_star": "Bootstrap a clean starter repository that is ready for an AI agent to grow into a small internal tool without accumulating process debt.",
"success_metrics": [
"Mechanical validation commands exist and run from the repo's primary shell.",
"The initial milestone focuses on quality infrastructure, not feature invention.",
"No browser/API E2E gate is invented before a real user-facing surface exists."
],
"hard_constraints": "Do not invent product features, deployment targets, or E2E flows that are not grounded in the repository.",
"biggest_pain_point": "The repo has no working development operating system yet.",
"target_architecture_style": "Start with a minimal scaffold-first structure; defer application layering decisions until real code exists.",
"additional_notes": [
"This repository intentionally has no UI, API, or CLI surface yet.",
"It is acceptable to defer environment variables and deployment specifics into the Needs Manifest."
]
},
"expectations": {
"e2e_gate": "forbidden"
}
}
# Empty Starter
This fixture intentionally starts almost blank.
There is no application entry point, no API, no UI, and no CLI yet.
The scaffold should set up quality infrastructure without inventing business-specific behavior.
{
"id": "node-cli",
"description": "Node.js CLI fixture with an existing smoke test.",
"category": "user-facing-cli",
"seed_prompt": "Use the $project-scaffold skill to set up this repo for autonomous development.",
"approved_design": [
"Use the greenfield path while preserving the existing CLI entry point and smoke test.",
"Use a PowerShell-first execution contract on Windows and generate only the manifest-declared primary script pair unless cross-platform support is explicitly required.",
"Keep M0 focused on quality infrastructure and command-line verification before any feature expansion.",
"Recognize the CLI as a user-facing surface and preserve an end-to-end or smoke verification gate.",
"Stop after the Phase 4 handoff summary instead of entering the continuous Phase 5 loop."
],
"template_read_plan": [
".agents/skills/project-scaffold/templates/AGENTS.md.tmpl",
".agents/skills/project-scaffold/templates/project_spec.md.tmpl",
".agents/skills/project-scaffold/templates/plan.md.tmpl",
".agents/skills/project-scaffold/templates/prompt.md.tmpl",
".agents/skills/project-scaffold/templates/implement.md.tmpl",
".agents/skills/project-scaffold/templates/documentation.md.tmpl",
".agents/skills/project-scaffold/templates/taxonomy.md.tmpl",
".agents/skills/project-scaffold/templates/compaction-strategy.md.tmpl",
".agents/skills/project-scaffold/templates/feedback-profile.json.tmpl",
".agents/skills/project-scaffold/templates/validate-docs.ps1.tmpl",
".agents/skills/project-scaffold/templates/quality-gates.ps1.tmpl",
".agents/skills/project-scaffold/templates/validate-docs.sh.tmpl",
".agents/skills/project-scaffold/templates/quality-gates.sh.tmpl"
],
"generation_order": [
"AGENTS.md",
"docs/project_spec.md",
"docs/plan.md",
"docs/prompt.md",
"docs/implement.md",
"docs/documentation.md",
"docs/failures/taxonomy.md",
"docs/compaction-strategy.md",
"docs/feedback-profile.json",
"docs/harness-manifest.json",
"primary scripts/validate-docs.* and scripts/quality-gates.* pair from the manifest",
".agents/skills/iterate/SKILL.md",
".agents/skills/quality-audit/SKILL.md",
".agents/skills/milestone-promote/SKILL.md"
],
"phase1_answers": {
"north_star": "Keep a small user-facing CLI reliable and easy for an AI agent to extend while preserving end-to-end behavior.",
"success_metrics": [
"The CLI smoke path remains mechanically verifiable from the repo's primary shell.",
"Quality gates cover linting, tests, docs, and a CLI end-to-end or smoke command.",
"The scaffold documents the CLI as a user-facing surface."
],
"hard_constraints": "Do not remove the existing CLI entry point or downgrade verification to unit-only checks.",
"biggest_pain_point": "The repo has code and smoke tests but no agent operating system around them.",
"target_architecture_style": "CLI entry point -> small application logic -> testable smoke verification.",
"additional_notes": [
"A CLI smoke or end-to-end check is appropriate for this fixture.",
"Keep the scaffold lightweight; this is still a tiny project."
]
},
"expectations": {
"e2e_gate": "required"
}
}
# Hello CLI
This fixture is a small user-facing command-line application.
It already has a CLI entry point and a smoke-style test, so the scaffold should
recognize that end-to-end CLI verification is relevant.
{
"name": "hello-cli",
"version": "0.1.0",
"private": true,
"type": "module",
"bin": {
"hello-cli": "src/index.mjs"
},
"scripts": {
"lint": "node --check src/index.mjs tests/cli-smoke.test.mjs",
"test": "node --test",
"smoke": "node src/index.mjs Codex"
}
}
#!/usr/bin/env node
const name = process.argv[2] || "world";
process.stdout.write(`Hello, ${name}!\n`);
import test from "node:test";
import assert from "node:assert/strict";
import { spawnSync } from "node:child_process";
test("CLI greets the provided name", () => {
const result = spawnSync(process.execPath, ["src/index.mjs", "Codex"], {
encoding: "utf8",
});
assert.equal(result.status, 0);
assert.match(result.stdout, /Hello, Codex!/);
});
{
"id": "python-lib",
"description": "Pure Python library fixture with tests and no user-facing runtime surface.",
"category": "pure-library",
"seed_prompt": "Use the $project-scaffold skill to set up this repo for autonomous development.",
"approved_design": [
"Use the greenfield path while preserving the existing Python library files and tests.",
"Use a PowerShell-first execution contract on Windows and generate only the manifest-declared primary script pair unless cross-platform support is explicitly required.",
"Keep M0 focused on quality infrastructure and library-safe documentation, not service scaffolding.",
"Do not invent browser/API E2E coverage because this is a pure library with no user-facing runtime surface.",
"Stop after the Phase 4 handoff summary instead of entering the continuous Phase 5 loop."
],
"template_read_plan": [
".agents/skills/project-scaffold/templates/AGENTS.md.tmpl",
".agents/skills/project-scaffold/templates/project_spec.md.tmpl",
".agents/skills/project-scaffold/templates/plan.md.tmpl",
".agents/skills/project-scaffold/templates/prompt.md.tmpl",
".agents/skills/project-scaffold/templates/implement.md.tmpl",
".agents/skills/project-scaffold/templates/documentation.md.tmpl",
".agents/skills/project-scaffold/templates/taxonomy.md.tmpl",
".agents/skills/project-scaffold/templates/compaction-strategy.md.tmpl",
".agents/skills/project-scaffold/templates/feedback-profile.json.tmpl",
".agents/skills/project-scaffold/templates/validate-docs.ps1.tmpl",
".agents/skills/project-scaffold/templates/quality-gates.ps1.tmpl",
".agents/skills/project-scaffold/templates/validate-docs.sh.tmpl",
".agents/skills/project-scaffold/templates/quality-gates.sh.tmpl"
],
"generation_order": [
"AGENTS.md",
"docs/project_spec.md",
"docs/plan.md",
"docs/prompt.md",
"docs/implement.md",
"docs/documentation.md",
"docs/failures/taxonomy.md",
"docs/compaction-strategy.md",
"docs/feedback-profile.json",
"docs/harness-manifest.json",
"primary scripts/validate-docs.* and scripts/quality-gates.* pair from the manifest",
".agents/skills/iterate/SKILL.md",
".agents/skills/quality-audit/SKILL.md",
".agents/skills/milestone-promote/SKILL.md"
],
"phase1_answers": {
"north_star": "Maintain a small, reliable Python utility library that an AI agent can extend without breaking its public import surface.",
"success_metrics": [
"Unit tests and quality gates pass from the repo's primary shell.",
"Architecture guidance reflects a pure library, not a service or UI app.",
"No browser/API E2E gate is added because the project has no user-facing runtime surface."
],
"hard_constraints": "Do not introduce service, web, or browser-specific assumptions into this pure library repository.",
"biggest_pain_point": "The repo lacks agent-readable planning and validation scaffolding.",
"target_architecture_style": "Simple library layering: public API module -> internal helpers -> tests.",
"additional_notes": [
"The existing test file is the main source of behavioral evidence.",
"A smoke-style CLI or browser test would be inappropriate here."
]
},
"expectations": {
"e2e_gate": "forbidden"
}
}
# Acme Math
`acme_math` is a tiny pure-Python utility library.
It exposes importable functions only. There is no HTTP server, browser UI, or CLI surface.
[build-system]
requires = ["setuptools>=68"]
build-backend = "setuptools.build_meta"
[project]
name = "acme-math"
version = "0.1.0"
description = "Tiny utility library used as a scaffold fixture"
requires-python = ">=3.11"
[tool.pytest.ini_options]
testpaths = ["tests"]
def add(left: int, right: int) -> int:
    """Return the sum of two integers."""
    return left + right
from acme_math import add

def test_add():
    assert add(2, 3) == 5
{
"id": "stale-docs-cli",
"description": "Node.js CLI fixture with stale AGENTS.md and plan docs that must be audited and archived before regeneration.",
"category": "user-facing-cli",
"seed_prompt": "Use the $project-scaffold skill to audit this repo, reorganize the stale docs, and set it up for autonomous development.",
"approved_design": [
"Use the reorganize path, not a blind greenfield overwrite.",
"Read the existing README.md, AGENTS.md, and docs/plan.md before generating replacements.",
"Archive stale docs into docs/archive/ with a dated filename before rendering fresh canonical docs.",
"Preserve the existing CLI entry point and smoke test, and keep a CLI smoke or end-to-end gate in the quality gates.",
"Use a PowerShell-first execution contract on Windows and generate only the manifest-declared primary script pair unless cross-platform support is explicitly required.",
"Stop after the Phase 4 handoff summary instead of entering the continuous Phase 5 loop."
],
"template_read_plan": [
".agents/skills/project-scaffold/templates/AGENTS.md.tmpl",
".agents/skills/project-scaffold/templates/project_spec.md.tmpl",
".agents/skills/project-scaffold/templates/plan.md.tmpl",
".agents/skills/project-scaffold/templates/prompt.md.tmpl",
".agents/skills/project-scaffold/templates/implement.md.tmpl",
".agents/skills/project-scaffold/templates/documentation.md.tmpl",
".agents/skills/project-scaffold/templates/taxonomy.md.tmpl",
".agents/skills/project-scaffold/templates/compaction-strategy.md.tmpl",
".agents/skills/project-scaffold/templates/feedback-profile.json.tmpl",
".agents/skills/project-scaffold/templates/validate-docs.ps1.tmpl",
".agents/skills/project-scaffold/templates/quality-gates.ps1.tmpl",
".agents/skills/project-scaffold/templates/validate-docs.sh.tmpl",
".agents/skills/project-scaffold/templates/quality-gates.sh.tmpl"
],
"generation_order": [
"AGENTS.md",
"docs/project_spec.md",
"docs/plan.md",
"docs/prompt.md",
"docs/implement.md",
"docs/documentation.md",
"docs/failures/taxonomy.md",
"docs/compaction-strategy.md",
"docs/feedback-profile.json",
"docs/harness-manifest.json",
"primary scripts/validate-docs.* and scripts/quality-gates.* pair from the manifest",
".agents/skills/iterate/SKILL.md",
".agents/skills/quality-audit/SKILL.md",
".agents/skills/milestone-promote/SKILL.md"
],
"phase1_answers": {
"north_star": "Stabilize the existing CLI repo by replacing stale agent docs with a current operating system while preserving the real smoke-tested command surface.",
"success_metrics": [
"Stale scaffold docs are archived before replacement and no stale canonical guidance remains.",
"The CLI smoke path remains mechanically verifiable from the repo's primary shell.",
"Quality gates cover linting, tests, docs, and a CLI smoke or end-to-end command."
],
"hard_constraints": "Do not remove the existing CLI entry point, do not discard stale docs without archiving them, and do not downgrade verification to unit-only checks.",
"biggest_pain_point": "The repo already has stale agent docs that point to deleted code and a stale execution contract.",
"target_architecture_style": "CLI entry point -> small application logic -> testable smoke verification, with docs and scripts as the repo operating system around that runtime path.",
"additional_notes": [
"This fixture intentionally contains stale agent docs that should be audited before generation.",
"The old docs reference deleted files and an outdated shell contract on purpose; keep that stale content only in docs/archive."
]
},
"expectations": {
"e2e_gate": "required",
"archive_contains": [
"pre-2024 shell wrapper",
"deleted legacy-main.js worker"
],
"forbidden_strings_in_canonical_docs": [
"pre-2024 shell wrapper",
"deleted legacy-main.js worker",
"bash scripts/quality-gates.sh"
],
"preserve_strings": [
{
"path": "README.md",
"includes": [
"This fixture intentionally starts with stale agent docs and a stale milestone plan."
]
}
],
"trace_reads_before_generation": [
"README.md",
"AGENTS.md",
"docs/plan.md"
]
}
}
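The `expectations` block above is designed to be machine-checkable. As a sketch of how a fixture runner could enforce the `archive_contains` and `forbidden_strings_in_canonical_docs` rules, assuming the scaffolded repo lives at `repo_root` (the helper name `check_expectations` is hypothetical, not part of the fixture contract):

```python
from pathlib import Path

def check_expectations(repo_root: Path, expectations: dict) -> list[str]:
    """Return a list of violations; an empty list means the fixture passed."""
    violations = []
    # Every archived phrase must survive somewhere under docs/archive/.
    archive_text = "".join(
        p.read_text(encoding="utf-8")
        for p in (repo_root / "docs" / "archive").rglob("*")
        if p.is_file()
    )
    for phrase in expectations.get("archive_contains", []):
        if phrase not in archive_text:
            violations.append(f"missing from archive: {phrase}")
    # Canonical docs (everything outside docs/archive/) must not repeat stale phrases.
    for doc in repo_root.rglob("*.md"):
        if "archive" in doc.parts:
            continue
        text = doc.read_text(encoding="utf-8")
        for phrase in expectations.get("forbidden_strings_in_canonical_docs", []):
            if phrase in text:
                violations.append(f"{doc}: stale phrase '{phrase}'")
    return violations
```

The same shape extends naturally to `preserve_strings` and `trace_reads_before_generation` checks.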
# Legacy Agent Guide
This file belongs to the pre-2024 shell wrapper and is no longer accurate.
- Primary merge command: `bash scripts/quality-gates.sh`
- Main entry point: `src/legacy-main.js`
- Recovery rule: keep following the deleted migration plan until the legacy worker comes back
Do not treat this file as current truth for the repo.
# Hello CLI
This fixture intentionally starts with stale agent docs and a stale milestone plan.
The runtime surface is still the same small command-line application at `src/index.mjs`,
and `npm run smoke` is the end-to-end check that should be preserved.
# Legacy Plan
## M7: Revive the deleted legacy-main.js worker
This milestone plan predates the current CLI fixture and has no executable verification commands.
- [ ] Bring back `src/legacy-main.js`
- [ ] Restore the old bash-only merge checks
- [ ] Keep shipping until the deleted legacy-main.js worker is back
Current focus: keep working on M7 forever.
{
"name": "hello-cli",
"version": "0.1.0",
"private": true,
"type": "module",
"bin": {
"hello-cli": "src/index.mjs"
},
"scripts": {
"lint": "node --check src/index.mjs tests/cli-smoke.test.mjs",
"test": "node --test",
"smoke": "node src/index.mjs Codex"
}
}
#!/usr/bin/env node
const name = process.argv[2] || "world";
process.stdout.write(`Hello, ${name}!\n`);
import test from "node:test";
import assert from "node:assert/strict";
import { spawnSync } from "node:child_process";
test("CLI greets the provided name", () => {
const result = spawnSync(process.execPath, ["src/index.mjs", "Codex"], {
encoding: "utf8",
});
assert.equal(result.status, 0);
assert.match(result.stdout, /Hello, Codex!/);
});
id,should_trigger,category,prompt
# Explicit invocation — skill named directly
test-01,true,explicit,"Run the $project-scaffold skill to set up this repo for autonomous development."
test-02,true,explicit,"Use $project-scaffold to bootstrap my project with AGENTS.md and quality gates."
test-03,true,explicit,"Activate the project-scaffold skill on this codebase."
# Implicit invocation — describes the scenario without naming the skill
test-04,true,implicit,"Set up this repo so an AI agent can develop it autonomously with quality enforcement."
test-05,true,implicit,"I need AGENTS.md, a milestone plan, quality gates, and doc validation for this project."
test-06,true,implicit,"Bootstrap this codebase for continuous AI-driven development with mechanical enforcement."
test-07,true,implicit,"Create the full scaffolding needed for an AI agent to iterate on this project indefinitely."
# Contextual invocation — domain-specific but needs scaffold
test-08,true,contextual,"I have a new e-commerce project in Next.js. Set it up for AI-agent-driven development."
test-09,true,contextual,"This is a Spring Boot ERP system. I need the full agent scaffolding with quality gates."
test-10,true,contextual,"I'm building a Python data pipeline. Set up AGENTS.md and execution plans so Codex can work on it."
test-11,true,contextual,"We have a messy React Native app with some docs. Reorganize it for autonomous development."
# Greenfield vs messy-existing — mode detection
test-12,true,mode-detection,"This is a brand new empty repo. Set up everything from scratch for AI development."
test-13,true,mode-detection,"We have an existing codebase with outdated AGENTS.md and stale docs. Audit and fix them."
# Adaptive harness — tests that the skill asks before generating
test-21,true,adaptive,"Set up this repo for AI development, but I only need the basics — no quality audit or review workflow."
test-22,true,adaptive,"Bootstrap this project. We already have ESLint and Jest configured and passing."
test-23,true,adaptive,"This is a tiny single-milestone CLI tool. Don't over-scaffold it."
# Negative control — should NOT trigger this skill
test-14,false,negative,"Add a new screen to my React app."
test-15,false,negative,"Fix the failing unit tests in src/core."
test-16,false,negative,"Refactor an internal module to improve resource reuse."
test-17,false,negative,"Write AGENTS.md for this project."
test-18,false,negative,"Run the quality gates."
test-19,false,negative,"Create a milestone plan for the next sprint."
test-20,false,negative,"Set up ESLint and Prettier for my TypeScript project."
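The `#` comment rows above are not standard CSV, so any runner consuming this file has to strip them before parsing. A minimal loader sketch (the function name is illustrative, not part of the test suite):

```python
import csv
import io

def load_trigger_tests(text: str) -> list[dict]:
    """Parse the trigger-test CSV, skipping blank lines and '#' comment rows."""
    lines = [
        ln for ln in text.splitlines()
        if ln.strip() and not ln.lstrip().startswith("#")
    ]
    reader = csv.DictReader(io.StringIO("\n".join(lines)))
    rows = []
    for row in reader:
        # Normalize the should_trigger column to a real boolean.
        row["should_trigger"] = row["should_trigger"].strip().lower() == "true"
        rows.append(row)
    return rows
```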
{
"type": "object",
"properties": {
"overall_pass": {
"type": "boolean",
"description": "Whether the scaffold output meets minimum quality bar"
},
"score": {
"type": "integer",
"minimum": 0,
"maximum": 100,
"description": "Overall quality score for the scaffold output"
},
"checks": {
"type": "array",
"items": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Check identifier"
},
"pass": {
"type": "boolean"
},
"weight": {
"type": "integer",
"minimum": 1,
"maximum": 25,
"description": "Weight of this check in total score"
},
"notes": {
"type": "string",
"description": "Explanation of pass/fail reasoning"
}
},
"required": ["id", "pass", "weight", "notes"],
"additionalProperties": false
},
"description": "Individual check results"
},
"missing_files": {
"type": "array",
"items": { "type": "string" },
"description": "Required files that were not generated"
},
"placeholder_violations": {
"type": "array",
"items": { "type": "string" },
"description": "Files containing TBD/TODO/PLACEHOLDER text"
},
"cross_ref_errors": {
"type": "array",
"items": { "type": "string" },
"description": "Broken cross-references between documents"
}
},
"required": ["overall_pass", "score", "checks", "missing_files", "placeholder_violations", "cross_ref_errors"],
"additionalProperties": false
}
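A report instance conforming to this schema might look like the sketch below. The schema does not mandate how `score` is derived from the weighted checks, so the formula here (percentage of total weight earned by passing checks) is an assumption, and the check ids are illustrative:

```python
def weighted_score(checks: list[dict]) -> int:
    """One plausible scoring rule: share of total check weight that passed."""
    total = sum(c["weight"] for c in checks)
    earned = sum(c["weight"] for c in checks if c["pass"])
    return round(100 * earned / total) if total else 0

report = {
    "overall_pass": False,
    "score": 0,  # filled in below from the checks
    "checks": [
        {"id": "files-present", "pass": True, "weight": 25,
         "notes": "all required files generated"},
        {"id": "no-placeholders", "pass": False, "weight": 10,
         "notes": "TBD found in docs/plan.md"},
    ],
    "missing_files": [],
    "placeholder_violations": ["docs/plan.md"],
    "cross_ref_errors": [],
}
report["score"] = weighted_score(report["checks"])
```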
# Anti-patterns (do NOT do these)
1. **Don't ask questions you can answer by reading code.**
Wrong: "What language is this project in?"
Right: Read `package.json` -> "This is a TypeScript project with React 18."
2. **Don't generate documents with placeholder text.**
Wrong: `North Star: TBD`
Right: Omit the field and add to Needs Manifest.
3. **Don't create milestones without verification commands.**
Every milestone must have a concrete command that proves it's done.
4. **Don't put implementation details in AGENTS.md.**
AGENTS.md is navigation only. Details go in `docs/`.
5. **Don't assume the human knows what you need.**
Explicitly state every dependency, env var, service, and permission.
6. **Don't trust documentation without mechanical verification.**
If a rule matters, it must be checked by a script or linter, not just written down.
7. **Don't skip verification capability before feature work.**
No feature work until at least one quality gate command is runnable.
8. **Don't let quality score drop without action.**
If `$quality-audit` shows regression, next iteration is debt cleanup.
9. **Don't accumulate context without compaction.**
In long sessions, follow `docs/compaction-strategy.md` to stay coherent.
10. **Don't make changes across more than 3 files without decomposing.**
Large changes are error-prone. Break them into smaller, verifiable steps.
11. **Don't hardcode `bash scripts/...` as the default path in a Windows/PowerShell repo.**
Pick one primary command pair and keep every generated doc consistent with it.
12. **Don't use repository-wide rollback commands.**
Revert only the files touched in the active iteration, after inspecting the diff.
13. **Don't generate artifacts the project doesn't need.**
Only generate what was confirmed in the Harness Needs Assessment.
14. **Don't freeze the harness after scaffold.**
The harness evolves through use. Improve it when the agent struggles.
15. **Don't guess when the cost of being wrong is high.**
Follow the Uncertainty Protocol: discover first, ask if uncertain.
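The rollback rule above (revert only the files touched in the active iteration) can be made concrete with a small guard. A sketch, assuming it runs inside the repo and receives the iteration's own file list (`revert_files` is a hypothetical helper, not a generated script):

```python
import subprocess

def revert_files(files: list[str]) -> None:
    """Restore only the named files from HEAD; never the whole tree."""
    if not files:
        return
    # Refuse the repository-wide forms outright.
    if "." in files or ":/" in files:
        raise ValueError("file-scoped rollback only; refusing repo-wide revert")
    subprocess.run(["git", "checkout", "--", *files], check=True)
```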
# Phase 1: Human Alignment
**Now ask the user — but ONLY for things you couldn't discover.**
If self-test mode is active and the fixture input already provides approved
answers for the core decisions below, record them and continue without asking.
## 1a. Core decisions (always ask)
These cannot be inferred from code:
1. **"What is your North Star goal?"**
One sentence. What does success look like in 3-6 months?
2. **"What are your success metrics?"**
How will you measure progress? List 2-5 observable metrics.
For each: is it currently measurable, or needs instrumentation?
3. **"What must NEVER happen?"**
Hard constraints. Things that are absolutely forbidden.
Examples: no data deletion without confirmation, no hardcoded secrets,
no task-specific heuristics, etc.
4. **"What's the current biggest pain point?"**
What should the first milestone fix?
5. **"What is the target architecture style?"**
Confirm or override the discovered architecture. Define the allowed
dependency directions between layers (e.g., UI → Service → Repo → Model).
This will be enforced mechanically.
## 1b. Confirm inferences
Present your Phase 0 inferences and ask:
- "I inferred X about your architecture. Correct?"
- "I see Y modules. Are there others I missed?"
- "I found Z tests passing. Is that the expected state?"
### 1b-ii. Confirm inferred business flows
Present the business flow table from Phase 0f. For each **inferred** flow:
- "I inferred this flow from code structure: [description]. Is this a real
user/operator flow? Is it critical?"
For flows classified as **human-confirmed**:
- "What are your top 1-3 critical user or operator flows?"
- "Which failure modes are unacceptable in production?"
After confirmation, update each flow's classification to repo-confirmed
or human-confirmed. Drop any inferred flows the human says are wrong.
### 1b-iii. Confirm feedback surface priorities
Present the Feedback Surface Inventory from Phase 0h. For each gap:
- "I found [gap]. Should the harness address this now, or defer it?"
The human decides which feedback gaps are worth closing in the initial
harness vs which can wait for a later milestone. Record the decision —
this directly shapes what Phase 3 generates, what `docs/feedback-profile.json`
claims, and what M0 targets.
## 1c. Resolve needs
For each `blocking: true` item in the Needs Manifest:
- Either get the answer from the user
- Or agree on a workaround
- Or mark it as "deferred to milestone N"
## 1d. Confirm harness shape
Present the Harness Needs Assessment from Phase 0k and ask:
- "I recommend generating these documents and scripts based on what I found.
Do you want to add, remove, or change anything?"
- "Which quality dimensions matter most for this project?"
(Let the user prioritize — do not impose a fixed 5×20 scoring grid)
- "Are there any tools, workflows, or artifacts specific to your team that I
should include in the harness?"
Record the confirmed harness shape. Phase 3 will generate only what was confirmed here.
The confirmed result should be written as machine-readable contracts in
`docs/harness-manifest.json` and `docs/feedback-profile.json` during generation.
If the user is unsure about some items, mark them as `recommended-unconfirmed` and
include them in the initial generation but flag them for review after the first milestone.
**Phase 1 is complete when all `blocking: true` needs are resolved,
all core decisions are recorded, and the harness shape is confirmed.**
After Phase 1 is complete, do NOT ask the human for routine confirmations during
milestone decomposition, generation, or normal iteration work.
Only ask again if:
- a safety-critical ambiguity remains unresolved
- a hard constraint would otherwise be violated
- escalation level L4 is reached
# Phase 2: Milestone Decomposition
**Based on Phase 0 + Phase 1, decompose the path to North Star into milestones.**
## Rules for milestones
1. **3-7 milestones total** (not more)
2. Each milestone must have:
- One-sentence goal
- 2-4 concrete deliverables
- Executable verification commands (copy-pasteable, in the repo's primary shell)
- Clear "Done When" criteria with observable evidence
- Estimated scope (which files/modules change)
- Maximum 3 parallel work items
- **Compaction checkpoint**: what the agent must remember if context is compressed
3. Milestones are **ordered by dependency**, not priority
4. First milestone should be achievable in 1-3 sessions
5. Verification commands must use tools already in the project
(or the milestone includes setting them up)
6. **M0 must ensure verification capability exists** — before any feature work,
the project must have at least one runnable verification command. How this is
achieved depends on what Phase 0 discovered:
- If quality infrastructure is **missing**: M0 is "Quality Infrastructure" (set up
lint, test, format, validation scripts)
- If quality infrastructure **already exists and passes**: M0 can be the first
feature milestone instead — but must still include verification commands
- If quality infrastructure is **partial**: M0 fills the gaps only
- Present the M0 recommendation to the human in Phase 1 for confirmation
## Milestone template
````markdown
### M<N>: <one-sentence goal>
**Scope**: <which modules/files change>
**Deliverables**:
- [ ] <deliverable 1>
- [ ] <deliverable 2>
**Verification**:
```text
<primary-shell command 1 that must pass>
<primary-shell command 2 that must pass>
```
**Compaction Checkpoint**: <2-3 sentences summarizing what was achieved and
what the agent must remember if context is compressed mid-milestone>
**Done When**:
- <observable condition 1>
- <observable condition 2>
- All verification commands pass
- <primary doc validation command> passes
**Depends on**: M<N-1> (or "none")
````
## M0: Ensure Verification Capability
The first milestone ensures mechanical verification works. Its content adapts to
what Phase 0 discovered:
**If quality infrastructure is missing** (common for greenfield):
```markdown
### M0: Quality Infrastructure
**Goal**: Every future change is mechanically validated before merge.
**Deliverables** (adapt to discovered needs — do not include tools the project doesn't need):
- [ ] Linter configured and passing (if applicable to the language)
- [ ] Formatter configured and enforced (if applicable)
- [ ] Test runner configured with baseline passing tests
- [ ] Validation scripts generated (only the confirmed variants)
- [ ] Quality gate script generated (only the confirmed variants)
- [ ] Architecture dependency rules documented (and lint-enforced where tooling exists)
```
**If quality infrastructure already exists and passes**:
- M0 can be the first feature milestone
- But it must still include verification commands that prove the harness works
**If quality infrastructure is partial**:
- M0 fills only the gaps discovered in Phase 0f
- Do not re-create what already works
**Invariant (always applies regardless of M0 content)**:
- Before any feature work begins, at least one quality gate command must be runnable
- The primary doc validation command must pass (if docs were generated)
## Present to user
Show all milestones and ask:
- "Does this ordering make sense?"
- "Is M1 small enough to start immediately?"
- "Any milestone missing?"
# Phase 3: Document & Script Generation
**Generate only the artifacts confirmed in the Harness Needs Assessment (Phase 0k + Phase 1d).
Use the templates in `templates/` as reference starting points, not rigid structures.**
## What to generate
Generate artifacts based on the confirmed harness shape from Phase 1d:
1. **Documents**: only the document roles marked `needed` or `recommended-unconfirmed`
2. **Scripts**: only the scripts the project actually requires
3. **Operational sub-skills**: only the operational prompts confirmed as needed
4. **Machine-readable contracts**:
- always emit `docs/harness-manifest.json` for artifact + execution contract
- always emit `docs/feedback-profile.json` for feedback surfaces + gaps + business-flow coverage
## Available templates (reference starting points)
| Role | Default file | Template |
|---|---|---|
| `agents_doc` | `AGENTS.md` | `templates/AGENTS.md.tmpl` |
| `project_spec_doc` | `docs/project_spec.md` | `templates/project_spec.md.tmpl` |
| `prompt_doc` | `docs/prompt.md` | `templates/prompt.md.tmpl` |
| `plan_doc` | `docs/plan.md` | `templates/plan.md.tmpl` |
| `implement_doc` | `docs/implement.md` | `templates/implement.md.tmpl` |
| `documentation_doc` | `docs/documentation.md` | `templates/documentation.md.tmpl` |
| `failure_taxonomy_doc` | `docs/failures/taxonomy.md` | `templates/taxonomy.md.tmpl` |
| `compaction_doc` | `docs/compaction-strategy.md` | `templates/compaction-strategy.md.tmpl` |
| `feedback_profile` | `docs/feedback-profile.json` | `templates/feedback-profile.json.tmpl` |
Templates are starting points. Adapt structure, add sections, or omit irrelevant
sections based on what was discovered and confirmed. Do not blindly fill every
template placeholder — if a section does not apply, omit it.
## Script suite
| File | Template | Generate when |
|---|---|---|
| `scripts/validate-docs.sh` | `templates/validate-docs.sh.tmpl` | Multiple docs generated AND Unix environment |
| `scripts/validate-docs.ps1` | `templates/validate-docs.ps1.tmpl` | Multiple docs generated AND Windows environment |
| `scripts/quality-gates.sh` | `templates/quality-gates.sh.tmpl` | Quality tools exist AND Unix environment |
| `scripts/quality-gates.ps1` | `templates/quality-gates.ps1.tmpl` | Quality tools exist AND Windows environment |
Generate cross-platform variants (.sh + .ps1) only when the team actually uses
multiple OS environments. If the project is single-platform, generate only the
primary variant and note the portable alternative is available if needed later.
**Execution contract** (applies to all generated artifacts):
- Choose one **primary shell** for the repo:
- Windows/PowerShell-first repo → `.ps1`
- Linux/macOS/bash-first repo → `.sh`
- Fill every generated document with the same:
- `Primary shell`
- `Primary quality gate command`
- `Primary doc validation command`
- The generated result is invalid if one file says PowerShell and another still uses `bash scripts/...` as the default path
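That last invalidity rule is itself mechanically checkable. A sketch that scans generated Markdown docs for commands contradicting the declared primary shell (the `CONTRADICTIONS` phrases are an assumed convention, not an exhaustive list):

```python
from pathlib import Path

# Phrases that contradict each primary-shell choice (illustrative convention).
CONTRADICTIONS = {
    "powershell": ["bash scripts/"],
    "bash": ["powershell -File scripts\\", "pwsh scripts\\"],
}

def contract_violations(repo_root: Path, primary_shell: str) -> list[str]:
    """Return 'file: phrase' entries where a doc contradicts the primary shell."""
    found = []
    for doc in repo_root.rglob("*.md"):
        text = doc.read_text(encoding="utf-8")
        for phrase in CONTRADICTIONS.get(primary_shell, []):
            if phrase in text:
                found.append(f"{doc.name}: {phrase}")
    return found
```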
## Operational sub-skills (copied to `.agents/skills/`)
Install only the operational sub-skills confirmed as needed in the Harness Needs Assessment:
| Skill | Purpose | Always needed? |
|---|---|---|
| `iterate` | Execute one iteration cycle within current milestone | Yes |
| `quality-audit` | Full quality audit + tech debt scan + quality score | Only if quality tools exist |
| `milestone-promote` | Safely promote from one milestone to the next | Only if multiple milestones |
Do not emit separate `doc-gardening` or `review-ready` sub-skills by default.
Inline those responsibilities into:
- `iterate` for routine doc-drift correction during normal work
- `milestone-promote` for review-packet capture and milestone-end doc sync
## Greenfield vs Messy-existing: which path to take
Based on the Documentation Health Report from Phase 0b, choose a path:
**Path A — Greenfield** (no existing docs, or all docs are `stale`):
- Generate confirmed documents from templates
- Fill placeholders with Phase 0 + Phase 1 information
**Path B — Reorganize** (some docs are `salvageable` or `healthy`):
- **Keep** `healthy` docs as-is (do not overwrite)
- **Rewrite** `salvageable` docs: extract useful content, restructure using
template as the target format, fill gaps with discovered/decided information
- **Replace** `stale` docs: archive the old version to `docs/archive/` with
a date suffix, then generate fresh from template
- **Merge** `redundant` docs: consolidate into the canonical file, delete duplicates
- **Create** any docs from the suite that don't exist yet
When reorganizing, preserve the user's existing terminology, naming conventions,
and any project-specific sections that aren't in the template. The template is
a minimum structure, not a maximum.
## Constraint → Enforcement pipeline
For each constraint discovered in Phase 0 or decided in Phase 1, the agent
must try to encode it as close to enforcement as possible. Prose-only
constraints are the weakest form. See `docs/feedback-layer-model.md` §
"Constraint Encoding Model."
| Constraint type | Enforcement target | Example |
|---|---|---|
| Layer dependency direction | Linter import rules | eslint `no-restricted-imports`, Go `internal/`, Python `import-linter`, Java ArchUnit |
| Forbidden patterns | Lint rule or pre-commit hook | No `console.log` in production, no `any` in TypeScript strict mode |
| API shape contract | Schema validation or typecheck | Zod schema, OpenAPI spec, Pydantic model, TypeScript strict |
| Config shape | Startup validation | Fail-fast on missing env vars, JSON schema for config files |
| Business-flow correctness | Scenario or E2E test | Playwright test for critical user path, golden output test for CLI |
| Safe rollback | Script guard | `implement.md` and sub-skills forbid `git checkout -- .` and `git reset --hard` |
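When no linter supports a forbidden-pattern rule directly, a tiny guard script is still stronger than prose. A sketch for the "Forbidden patterns" row (the pattern list is illustrative; real projects would load it from config):

```python
import re
from pathlib import Path

# Illustrative forbidden patterns; a real harness would make these configurable.
FORBIDDEN = [
    (re.compile(r"\bconsole\.log\("), "console.log in production code"),
    (re.compile(r":\s*any\b"), "explicit 'any' type annotation"),
]

def guard(paths: list[Path]) -> list[str]:
    """Return one message per forbidden-pattern hit across the given files."""
    hits = []
    for path in paths:
        lines = path.read_text(encoding="utf-8").splitlines()
        for lineno, line in enumerate(lines, 1):
            for pattern, reason in FORBIDDEN:
                if pattern.search(line):
                    hits.append(f"{path.name}:{lineno}: {reason}")
    return hits
```

Wiring this into a pre-commit hook or the quality-gate script turns the prose constraint into an enforced one.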
**Process during generation:**
1. For each hard constraint from Phase 1, check: can it be encoded as a lint
rule, test, schema check, or script guard? If yes, generate that enforcement.
2. For each feedback surface gap from Phase 0e, check: did Phase 1 confirm this
gap should be closed now? If yes, generate the mechanism (test, config, rule).
3. Record what was enforced vs what remains prose-only in
`docs/feedback-profile.json` under `feedback_surfaces.*.mechanisms`, `*.commands`,
`*.evidence_paths`, and `*.gaps`.
4. If a constraint cannot be mechanically enforced yet, write it into
`AGENTS.md` hard constraints AND add a Needs Manifest item to track when
enforcement becomes possible.
## Generation rules
1. Fill templates with information from Phase 0 (discovered) and Phase 1 (decided).
If the scaffold renderer exists, prefer generating one values JSON that includes
the confirmed `HARNESS_MANIFEST`, then run the renderer instead of hand-writing
each file individually.
2. Read templates lazily:
- Open only the template needed for the next file you are generating
- Do NOT dump every template or sub-skill file before writing if the scaffold shape is already determined
3. For `salvageable` docs: diff the existing content against the template structure,
show the user what will change before overwriting
4. All cross-references between documents must be consistent
5. `docs/harness-manifest.json` is the machine-readable source of truth for which
docs, scripts, and sub-skills were intentionally generated
6. `docs/feedback-profile.json` is the machine-readable source of truth for which
feedback surfaces exist, how strong they are, and what gaps remain
7. Verification commands must be real commands that work in this project
8. Do NOT include placeholder text like "TBD" — if unknown, omit the section
and add it to the Needs Manifest
9. `AGENTS.md` must fit in < 150 lines — it's a navigation layer, not an encyclopedia
10. `docs/prompt.md` must describe only the current milestone, not the whole roadmap
11. The generated primary doc validation command must pass immediately after generation
12. The generated primary quality gate command must integrate all available quality tools
13. Architecture dependency rules must be encoded in linter config where the
language supports it (e.g., eslint import rules, Go internal packages,
Python import-linter, Java ArchUnit)
14. Sub-skills must not contain unreplaced template variables; they should reference the repo's execution contract and current docs instead of embedding project-specific placeholders
15. `docs/plan.md` must include an Operations Cadence table that records when `$quality-audit` and `$milestone-promote` last ran and when they are due next
16. `docs/implement.md` and installed sub-skills must forbid repository-wide rollback commands such as `git checkout -- .` and `git reset --hard`
17. Review-packet capture and doc-sync responsibilities should be encoded in the operating docs even if they are not separate sub-skills
18. Optional gates (type-check, architecture-check, E2E) must be included or omitted based on discovery; do not leave fake placeholder commands in generated scripts just to satisfy the template
19. `docs/harness-manifest.json` must reference `docs/feedback-profile.json`; do not duplicate detailed feedback-surface state inside the manifest
20. Optional template sections must end in one of two states only:
    - omitted entirely, or
    - materialized as real Markdown with concrete repo-specific content
    Do not inject project-specific text into HTML comment blocks and leave it commented out.
21. `AGENTS.md` references must point to concrete manifest-declared files. Never use wildcard paths such as `scripts/*.ps1`, `scripts/*.sh`, or `docs/*` in agent-facing guidance.
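The architecture-dependency rule above can be encoded, for example, with Python's import-linter. A sketch of a `.importlinter` config assuming a hypothetical layered package named `myapp`; the package and layer names are placeholders:

```ini
[importlinter]
root_package = myapp

[importlinter:contract:layers]
name = Enforce layer boundaries
type = layers
layers =
    myapp.api
    myapp.services
    myapp.db
```

With this contract, `lint-imports` fails the build if a lower layer (e.g. `myapp.db`) imports from a higher one, turning the architecture rule into a mechanical check.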
# Phase 4: Validation & Handoff
## 4a. Mechanical self-check
Run these commands — do NOT skip to presenting results until they pass:
```text
# 1. Doc consistency
<primary doc validation command>
# 2. Quality gates
<primary quality gate command>
```
If either fails, fix the issue and re-run. Do NOT present to user with failures.
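The two-gate check above can be wrapped in a small runner so failures stop the presentation step mechanically. A sketch; the command lists are placeholders (`true` always succeeds) to be replaced with the project's real validation and gate commands:

```python
import subprocess
import sys

# Placeholder commands; substitute the project's real doc validation
# and quality gate commands.
GATES = [
    ("doc consistency", ["true"]),
    ("quality gates", ["true"]),
]

def run_gates(gates):
    """Run each gate in order; stop at the first failure so it can be fixed and re-run."""
    for name, cmd in gates:
        if subprocess.run(cmd).returncode != 0:
            print(f"gate failed: {name} -- fix and re-run before presenting results")
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if run_gates(GATES) else 1)
```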
## 4b. Structural self-check
Before presenting to the user, also verify:
- [ ] `docs/harness-manifest.json` exists, is valid JSON, and matches the generated artifact set
- [ ] `docs/feedback-profile.json` exists, is valid JSON, and captures the current feedback surfaces, business-flow coverage, and human-input gaps
- [ ] Every document and script declared in `docs/harness-manifest.json` exists
- [ ] `docs/harness-manifest.json` points to `docs/feedback-profile.json`
- [ ] Every milestone has at least one executable verification command
- [ ] `AGENTS.md` references all generated docs and scripts
- [ ] `docs/prompt.md` scope matches M0 (or M1 if M0 is trivial) from `docs/plan.md`
- [ ] No placeholder/TBD text remains in any generated file
- [ ] No unreplaced template variables remain in installed sub-skills
- [ ] Hard constraints from Phase 1 appear in every generated policy doc that claims to be authoritative
- [ ] Needs Manifest items are either resolved or tracked in a milestone
- [ ] The primary shell and command pair are identical across `AGENTS.md`, `docs/harness-manifest.json`, `docs/feedback-profile.json`, and any generated supporting docs
- [ ] `docs/plan.md` includes an Operations Cadence table with `quality-audit` and `milestone-promote`
- [ ] No repository-wide rollback commands appear in generated docs or sub-skills
- [ ] The primary doc validation command passes
- [ ] The primary quality gate command passes (or documents known pre-existing failures)
- [ ] The sub-skills declared in `docs/harness-manifest.json` are installed in `.agents/skills/`
- [ ] If `docs/compaction-strategy.md` is declared in `docs/harness-manifest.json`, it exists and defines checkpoint rules
## 4c. Present to user
Show the user:
1. Summary of what was generated (file list with one-line descriptions)
2. Results of mechanical validation (primary commands passing, manifest + feedback profile self-consistent)
3. The highest-priority remaining feedback gaps and any remaining items from the Needs Manifest
4. "Ready to start M0?" or "These items need to be resolved first: ..."
## 4d. Outstanding needs
If there are still unresolved needs, present them clearly:
```markdown
## Before starting M0, you need to:
1. Set `DATABASE_URL` in `.env` (I can create .env.example for you)
2. Confirm the deployment target (Vercel / AWS / self-hosted)
## I can start M0 without these (will use mocks):
3. External payment API key (only needed for M3)
```
# Phase 5: Continuous Operation (runs forever)
**This phase begins after Phase 4 handoff and never ends.**
The agent operates in a continuous loop, using a small operational prompt set where that meaningfully improves loop closure.
If self-test mode is active, STOP after Phase 4 handoff instead of entering
this loop.
## Daily operation loop
```
+----------------------------------------------+
| 1. Read context (AGENTS.md -> prompt.md) |
| 2. Pick next deliverable from plan.md |
| 3. Run $iterate sub-skill |
| -> implement -> verify -> record |
| 4. After every 3 iterations: |
| -> Run $quality-audit sub-skill |
| 5. When milestone complete: |
| -> Run $milestone-promote sub-skill |
| -> capture review packet + doc sync |
| 6. Loop back to step 1 |
+----------------------------------------------+
```
## Compaction rules
When the agent detects context pressure (long session, many iterations):
1. Read `docs/compaction-strategy.md` for what to preserve vs drop
2. Preserve: current milestone ID, active deliverable, last 3 iteration results,
all hard constraints, quality gate status
3. Drop: completed iteration details older than 3 iterations, resolved needs,
completed milestone details (they're in docs/documentation.md)
4. After compaction: re-read AGENTS.md -> prompt.md -> plan.md to rebuild context
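The preserve/drop rules above can be sketched as a pure function over a working-state dict. The `state` keys are illustrative assumptions, not a fixed schema:

```python
def compact(state):
    """Apply the compaction rules: keep the working set, drop stale detail."""
    return {
        "milestone_id": state["milestone_id"],                # current milestone ID
        "active_deliverable": state["active_deliverable"],
        "hard_constraints": state["hard_constraints"],        # never dropped
        "quality_gate_status": state["quality_gate_status"],
        "iterations": state["iterations"][-3:],               # last 3 results only
        # Resolved needs and completed-milestone detail are dropped here;
        # they remain recoverable from docs/documentation.md.
    }
```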
## Recovery strategies
When something fails, the agent uses escalating recovery:
| Level | Trigger | Action |
|---|---|---|
| **L1: Retry** | First failure | Re-run the same approach with minor adjustment |
| **L2: Adjust** | Same failure after L1 | Change approach, try alternative implementation |
| **L3: Rollback + Replan** | Same failure after L2 | Revert only the files from the active iteration, then decompose the deliverable into smaller pieces |
| **L4: Escalate** | Same failure after L3, or any hard constraint violation | Stop and notify human with full context |
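The escalation ladder above maps mechanically from a failure count. A sketch of that mapping (the level labels mirror the table; everything else is illustrative):

```python
LEVELS = ["L1: Retry", "L2: Adjust", "L3: Rollback + Replan", "L4: Escalate"]

def recovery_level(consecutive_failures, hard_constraint_violated=False):
    """Map repeated failures of the same kind onto the escalation ladder."""
    if hard_constraint_violated:
        return LEVELS[3]  # any hard-constraint violation goes straight to L4
    if consecutive_failures < 1:
        raise ValueError("recovery only applies after at least one failure")
    return LEVELS[min(consecutive_failures, 4) - 1]
```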
## Harness evolution (the environment itself iterates)
The harness -- documents, scripts, constraints, and operational sub-skills -- is not frozen after
Phase 4. It is a living system that improves through use.
**Triggers for harness evolution:**
1. **After every L2/L3 recovery**: ask "What was missing from the environment that
caused this struggle?" Possible answers:
- Missing documentation -> add or update the relevant doc
- Missing tool or script -> create it or recommend installation
- Missing constraint -> add a lint rule, CI check, or AGENTS.md guardrail
- Missing context -> update AGENTS.md, compaction strategy, or Required Reading
- **Uncertain what's missing -> ask the human**
2. **After every milestone**: review whether the harness itself needs an upgrade
(new quality dimension, stronger enforcement, better compaction triggers, etc.)
3. **After every quality-audit**: check harness health -- is the agent struggling
more than expected? Are the same failure categories recurring without prevention?
**Recording harness changes:**
- Log every harness improvement in `docs/documentation.md` Decision Log
- Update `docs/failures/taxonomy.md` with prevention actions (not just classification)
- If a harness change is project-specific, keep it local
- If a harness change is universal, flag it as `harness-improvement-candidate`
## Entropy management (garbage collection)
Agent-generated code drifts over time. To counter this:
1. **During normal iteration**: when doc validation or code changes reveal drift, fix the affected docs in the same loop instead of deferring it to a separate maintenance skill
2. **After every 3 milestones**: run `$quality-audit` for full tech debt scan
3. **Quality score tracking**: each audit produces a quality score in `docs/plan.md`;
if score drops below threshold, next iteration must be debt cleanup, not feature work
4. **Pattern enforcement**: when the agent detects repeated similar code (copy-paste drift),
it extracts to a shared utility and updates the architecture doc
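The quality-score rule above reduces to a single decision point. A sketch; the threshold and 0-10 scale are hypothetical placeholders for whatever score the project's audit actually produces:

```python
QUALITY_THRESHOLD = 7.0  # hypothetical 0-10 scale; use the project's own threshold

def next_iteration_kind(quality_score):
    """A below-threshold audit score forces debt cleanup before new feature work."""
    return "debt-cleanup" if quality_score < QUALITY_THRESHOLD else "feature"
```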
## Multi-session continuity
When resuming after a break (new session, different agent instance):
1. Read `AGENTS.md` (the map)
2. Read `docs/prompt.md` (current milestone contract)
3. Read `docs/plan.md` (find where the marker is)
4. Read last 3 entries in `docs/documentation.md` (recent history)
5. Run the primary quality gate command from `AGENTS.md` (verify current state)
6. Resume the loop from step 2 of the daily operation loop
---
name: iterate
description: >
Execute one iteration cycle within the current milestone. Reads context,
implements a single scoped change, runs quality gates, records results,
and decides next action. Use this for every individual development step.
---
# Iterate — Single Iteration Cycle
Execute exactly one development iteration following `docs/implement.md`.
## When to use
- You need to implement the next deliverable in the current milestone
- You need to fix a bug or test failure
- You need to refactor a specific module
## Steps
### 1. Read context
1. Read `AGENTS.md` — confirm the execution contract and hard constraints
2. Read `docs/prompt.md` — current milestone scope
3. Read `docs/plan.md` — find the specific deliverable (look for unchecked `- [ ]`)
4. Read last 3 entries in `docs/documentation.md` — avoid repeating failed approaches
5. Read `docs/failures/taxonomy.md` — check for known failure patterns
### 2. State intent
Write one sentence:
> I am changing [X] to achieve [Y] because [Z].
The reason must trace to a milestone deliverable. The claim must be falsifiable.
### 3. Implement
- One primary change only
- Respect the max-files-per-change limit recorded in `AGENTS.md` and `docs/implement.md`
- Follow legibility principles (structured logging, remediation hints, module docstrings)
- If the change grows too large, stop and decompose
- Keep a running list of files touched in this iteration; you will need it for safe rollback and documentation
### 4. Verify
1. Run the primary quality gate command from `AGENTS.md` §5.
2. Run the focused verification commands listed in `docs/prompt.md` for the active deliverable.
3. If the project defines a separate regression check in `docs/implement.md`, run that before recording results.
### 5. Record
Update `docs/documentation.md`:
- Refresh `## Current State`
- current milestone
- active deliverable
- last validation result
- next action
- If this iteration introduced a lasting repo rule or tradeoff, add a short bullet to `## Decision Log`
- If code changes or doc validation exposed stale guidance, update the affected docs now instead of deferring to a separate maintenance sub-skill
```
## Iteration #<n> — <date>
- **Milestone**: <ID>
- **Change**: <one sentence>
- **Files modified**: <list>
- **Rollback scope**: <only files from this iteration that are safe to revert>
- **Verification result**: <pass/fail + evidence>
- **Quality gate status**: <pass/fail>
- **Decision**: continue | adjust | rollback
- **Next step**: <what to do next>
```
### 6. Decide
- **continue** → verification passed, pick next deliverable
- **adjust** → partial success, refine and re-iterate
- **rollback** → regression or unexplainable result, revert
## On failure: use escalating recovery
| Level | When | Action |
|---|---|---|
| L1: Retry | First failure | Minor adjustment, re-run |
| L2: Adjust | Same failure after L1 | Different approach entirely |
| L3: Rollback + Replan | Same failure after L2 | Revert only the files from this iteration after inspecting the diff, then decompose the deliverable |
| L4: Escalate | Same failure after L3 | Stop, notify human with full context |
### Harness signal capture (after every L2 or L3)
After resolving an L2 or L3 failure, ask: **"What was missing from the environment
that caused this struggle?"**
Classify the gap:
- `missing-doc` → add or update the relevant documentation
- `missing-tool` → create a script or recommend installing a tool
- `missing-constraint` → add a lint rule, CI check, or AGENTS.md guardrail
- `missing-context` → update AGENTS.md, compaction strategy, or Required Reading
- `uncertain` → ask the human what would have prevented this failure
Then:
1. Update `docs/failures/taxonomy.md` with the failure pattern AND a prevention action
2. If the fix is a harness change (not just a code fix), record it in
`docs/documentation.md` Decision Log with tag `harness-improvement`
3. If the pattern is universal (not project-specific), also tag it
`harness-improvement-candidate` for potential upstream skill improvement
## Replan trigger check (after every L2 or L3)
After resolving an L2 or L3 failure, also check whether the plan itself needs adjustment:
- **Consecutive 2× L2+**: review whether the current deliverable should be split
into smaller pieces. If yes, update `docs/plan.md` with the decomposed deliverables.
- **Consecutive 3× L2+**: review whether the current milestone scope is too ambitious.
If yes, consider splitting the milestone or deferring some deliverables.
- **New constraint or dependency discovered**: evaluate whether it affects the current
milestone or subsequent milestones. If yes, update `docs/plan.md` and record the
reason in `docs/documentation.md` Decision Log.
- **Uncertain whether to replan** → ask the human (per the Uncertainty Protocol).
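The consecutive-failure thresholds above can be sketched as a lookup; the return value is advisory (the agent still reviews and updates `docs/plan.md` itself), and the wording is illustrative:

```python
def replan_action(consecutive_l2_plus):
    """Map a streak of L2-or-worse failures to the replan triggers above."""
    if consecutive_l2_plus >= 3:
        return "review milestone scope; consider splitting or deferring deliverables"
    if consecutive_l2_plus >= 2:
        return "split the current deliverable into smaller pieces in docs/plan.md"
    return None  # no replan trigger yet
```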
**Spec change protocol:**
- Small changes (do not affect North Star or hard constraints) → agent may update
`docs/plan.md` and `docs/prompt.md` autonomously, recording the change in Decision Log
- Large changes (affect North Star, hard constraints, or architecture) → escalate to
the human before making any plan changes
## Periodic check
After every 3 iterations, run `$quality-audit` sub-skill (if installed) and update
`docs/plan.md` Operations Cadence with the latest evidence.
Before external review or handoff, capture a short review packet in
`docs/documentation.md` with:
- scope of the change
- files changed
- validation evidence
- primary risks or follow-ups
Never use repository-wide rollback commands such as `git checkout -- .` or `git reset --hard`.
During normal iteration work, do NOT ask the human for routine confirmation.
Only escalate when a hard constraint, blocking ambiguity, uncertainty protocol trigger,
or L4 recovery trigger requires it.
# documentation.md — Experiment & Change Log
Record every meaningful change here. Keep only the last 20 entries.
Archive older entries to `docs/archive/documentation-YYYY-MM.md` when needed.
Iteration numbers must be strictly increasing so audit cadence can be checked mechanically.
## Current State
- **Current milestone**: {{MILESTONE_ID}} — {{CURRENT_MILESTONE_TITLE}}
- **Active deliverable**: {{ACTIVE_DELIVERABLE}}
- **Last validation result**: {{LAST_VALIDATION_RESULT}}
- **Next action**: {{NEXT_ACTION}}
## How To Run
- **Primary quality gate**: `{{QUALITY_GATES_PRIMARY_COMMAND}}`
- **Primary doc validation**: `{{DOC_VALIDATION_PRIMARY_COMMAND}}`
- **Current runnable surface**: {{CURRENT_RUN_SURFACE}}
## Known Issues
- {{KNOWN_ISSUE_1}}
- {{KNOWN_ISSUE_2}}
## Decision Log
- {{DECISION_1}}
- {{DECISION_2}}
Entry types:
- **Iteration**: regular development iteration (from `$iterate` sub-skill)
- **Quality Audit**: periodic quality check (from `$quality-audit` sub-skill)
- **Milestone Complete**: milestone promotion (from `$milestone-promote` sub-skill)
- **Review Packet**: scoped handoff evidence captured before promotion, merge, or external review
- **Doc Sync**: targeted documentation refresh when code or the harness drifts
## Activity Log
## Iteration #{{ITERATION_NUMBER}} — {{DATE}}
- **Milestone**: {{MILESTONE_ID}}
- **Change**: {{ONE_SENTENCE_DESCRIPTION}}
- **Files modified**: {{FILE_LIST}}
- **Rollback scope**: {{ROLLBACK_SCOPE}}
- **Verification result**: {{PASS_OR_FAIL_WITH_EVIDENCE}}
- **Quality gate status**: pass | fail
- **Decision**: continue | adjust | rollback
- **Next step**: {{NEXT_ACTION}}
custom: http://doc.ruoyi.vip/ruoyi/other/donate.html
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.2.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.2.0 https://maven.apache.org/xsd/settings-1.2.0.xsd">
<mirrors>
<mirror>
<id>central-direct</id>
<mirrorOf>*</mirrorOf>
<name>Maven Central</name>
<url>https://repo.maven.apache.org/maven2</url>
</mirror>
</mirrors>
</settings>
The MIT License (MIT)
Copyright (c) 2018 RuoYi
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.