# Template testing
Pulp Engine ships a fixture-driven test harness that lets you verify a template renders correctly against known inputs — dry-run expression checks, HTML snapshot comparison, and (opt-in) PDF visual regression. The harness is designed to run in CI as the gate before promoting a template to a production label.
Why out-of-process? The server does not verify test reports. A client-supplied “tests passed” flag is trivially forgeable, so the promotion gate lives in your CI pipeline — not in the API. See the staging workflow below for the standard shape.
## Install the CLI
```shell
npm i -g @pulp-engine/cli

# or per-project:
npm i --save-dev @pulp-engine/cli
```
Optional dependencies (only needed when you declare `pdfSnapshot` in a fixture):

```shell
npm i --save-dev pdf-to-png-converter pixelmatch pngjs
```
## Fixture format
Fixtures are YAML files under `./tests` by default, one file per template. Each file pins a template key and version, and declares one or more named test cases:
```yaml
# tests/invoice.test.yaml
template: invoice
version: "1.0.4"   # required — pin the version under test
label: staging     # optional — informational only (docs/diagnostics)
tests:
  - name: "renders with line items"
    input:                       # inline JSON data
      customer: Acme
      lineItems:
        - description: Widget
          amount: 100
    expect:
      dryRun: ok                 # default — must pass validation + expressions
      htmlSnapshot: __snapshots__/basic.html
  - name: "renders a blank invoice"
    input:
      file: ./data/blank.json    # load from a sibling file
    expect:
      htmlSnapshot: __snapshots__/blank.html
      pdfSnapshot: __snapshots__/blank.pdf.png
      tolerancePixels: 50        # max differing pixels for PDF diff
  - name: "rejects missing customer"
    input: {}
    expect:
      dryRun: fail               # negative test — dry-run must NOT pass
```
### `expect` options
| Field | Default | Meaning |
|---|---|---|
| `dryRun` | `ok` | `ok` requires a passing dry-run; `fail` requires a failing dry-run (negative tests); `skip` bypasses the dry-run step entirely |
| `htmlSnapshot` | — | Path to an HTML baseline file, resolved relative to the fixture. Omit to skip HTML snapshot checking |
| `pdfSnapshot` | — | Path to a PDF-page PNG baseline, resolved relative to the fixture. Requires the optional PDF deps |
| `tolerancePixels` | `0` | Max differing pixels allowed for the PDF snapshot diff. Strict by default |
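To make `tolerancePixels` concrete: the check counts pixels that differ between the baseline PNG and the freshly rendered page, and passes when that count is at or below the threshold. A minimal stdlib-only sketch of that gate (the real harness diffs through `pixelmatch`, which also accounts for anti-aliasing; names here are illustrative):

```javascript
// Count differing pixels between two same-sized RGBA byte buffers
// (the layout pngjs exposes as `png.data`).
function countDifferingPixels(a, b) {
  let differing = 0;
  for (let i = 0; i < a.length; i += 4) {
    // A pixel differs if any of its R, G, B, or A bytes differ.
    if (a[i] !== b[i] || a[i + 1] !== b[i + 1] ||
        a[i + 2] !== b[i + 2] || a[i + 3] !== b[i + 3]) {
      differing++;
    }
  }
  return differing;
}

function pdfSnapshotPasses(baseline, rendered, tolerancePixels = 0) {
  return countDifferingPixels(baseline, rendered) <= tolerancePixels;
}

// Two 2-pixel images that differ in exactly one pixel:
const base = new Uint8Array([255, 0, 0, 255,  0, 0, 0, 255]);
const next = new Uint8Array([255, 0, 0, 255,  9, 0, 0, 255]);
console.log(pdfSnapshotPasses(base, next, 0));  // false — strict default
console.log(pdfSnapshotPasses(base, next, 50)); // true — within tolerance
```

The strict `0` default means any rendering drift fails the test; raise `tolerancePixels` per fixture only where a page legitimately jitters (e.g. font rasterization differences across environments).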
## HTML snapshot normalization
Before diffing, HTML is normalized deterministically so irrelevant formatting changes don’t cause false drift:
- Line endings normalized to `\n`
- Tag/attribute names lowercased
- Attributes sorted alphabetically within each tag
- Inter-tag whitespace collapsed
- Runs of whitespace inside text nodes collapsed to a single space
- `data-x-render-*` tracing attributes stripped
This means a re-pretty-printed HTML output with identical content passes a snapshot check without updating the baseline.
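As an illustration, several of those steps can be sketched with plain string transforms. This is not the harness's implementation — in particular, attribute sorting needs a real parser and is omitted here:

```javascript
// Partial sketch of the normalization pass: line endings, tag lowercasing,
// tracing-attribute stripping, and whitespace collapsing.
function normalizeHtml(html) {
  return html
    .replace(/\r\n?/g, "\n")                        // line endings -> \n
    .replace(/<(\/?)([A-Za-z][\w-]*)/g,             // lowercase tag names
      (_, slash, tag) => "<" + slash + tag.toLowerCase())
    .replace(/\sdata-x-render-[\w-]+="[^"]*"/g, "") // strip tracing attributes
    .replace(/>\s+</g, "><")                        // collapse inter-tag whitespace
    .replace(/\s+/g, " ")                           // collapse whitespace runs
    .trim();
}

console.log(normalizeHtml('<DIV data-x-render-id="7">\r\n  Hello   world\r\n</DIV>'));
// "<div> Hello world </div>"
```

Both sides of a snapshot comparison pass through the same normalizer, so only differences that survive all of these transforms count as drift.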
## Running tests
```shell
# Run every fixture under ./tests
pulp-engine test

# Single fixture
pulp-engine test tests/invoice.test.yaml

# Custom directory
pulp-engine test ./my-fixtures

# Update snapshots in place (create missing baselines, overwrite drifted ones)
pulp-engine test --update-snapshots

# Emit JUnit XML instead of the console reporter
pulp-engine test --reporter junit --output test-results.xml
```
Configuration is resolved in the usual CLI order:
- Command-line flags (`--api-url`, `--api-key`)
- Environment variables (`PULP_ENGINE_API_URL`, `PULP_ENGINE_API_KEY`)
- `.pulp-engine.json`, searched up from the current directory
- Built-in defaults (`http://localhost:3000`)
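That precedence chain behaves like a nullish-coalescing fallthrough: each source wins only where every higher-priority source is absent. A sketch (field names are illustrative, not the CLI's internals):

```javascript
// Resolve config from highest to lowest priority.
function resolveConfig(flags = {}, env = {}, fileConfig = {}) {
  return {
    apiUrl: flags.apiUrl ?? env.PULP_ENGINE_API_URL ?? fileConfig.apiUrl ??
      "http://localhost:3000", // built-in default
    apiKey: flags.apiKey ?? env.PULP_ENGINE_API_KEY ?? fileConfig.apiKey,
  };
}

console.log(resolveConfig().apiUrl);
// "http://localhost:3000"
console.log(resolveConfig({}, { PULP_ENGINE_API_URL: "https://pulp.internal" }).apiUrl);
// "https://pulp.internal"
```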
The harness calls production render routes (`POST /render`, `POST /render/html`) — never preview routes. Preview routes can be disabled in production via `PREVIEW_ROUTES_ENABLED=false`, and they bypass the `{key, version, label}` resolution path the harness is meant to test. This means the CI user needs an API key with the `render` (or `admin`) scope.
## Promoting a tested version
After a clean `pulp-engine test` run, promote the version to a label:
```shell
pulp-engine promote \
  --template invoice \
  --label staging \
  --version 1.0.4 \
  --test-run-id "$GITHUB_RUN_ID"
```
`--test-run-id` sends an optional `X-PulpEngine-Test-Run-Id` header that Pulp Engine records in the audit event for forensic correlation with the CI run. The server never validates this value — it is a breadcrumb, not a gate. Enforcement (only promote on a passing test run) is the CI pipeline's job.
Use `--if-match` to guard against concurrent promotions:
```shell
pulp-engine promote \
  --template invoice \
  --label prod \
  --version 1.0.4 \
  --if-match 1.0.3
```
The promote call fails with `412 Precondition Failed` if another promoter moved the label since you read it.
## CI workflow
The standard three-step gate:
1. Run `pulp-engine test` against the pinned version.
2. On a clean run, promote to `staging`.
3. On manual approval, promote to `prod`.
A sample GitHub Actions workflow lives at `.github/workflows/template-ci.sample.yml`. Copy it into your own repository and set the secrets (`PULP_ENGINE_API_URL`, `PULP_ENGINE_ADMIN_KEY`) to wire it up. The admin key should be scoped to CI and rotated independently of any human credentials.
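For orientation, the gate takes roughly the following shape in Actions syntax. This is an abbreviated, illustrative sketch — job names, the pinned version, and the setup steps are placeholders; the sample file in the repository is authoritative:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx pulp-engine test
        env:
          PULP_ENGINE_API_URL: ${{ secrets.PULP_ENGINE_API_URL }}
          PULP_ENGINE_API_KEY: ${{ secrets.PULP_ENGINE_ADMIN_KEY }}
  promote-staging:
    needs: test          # the gate: only runs after a clean test job
    runs-on: ubuntu-latest
    steps:
      - run: >
          npx pulp-engine promote --template invoice --label staging
          --version 1.0.4 --test-run-id "$GITHUB_RUN_ID"
        env:
          PULP_ENGINE_API_URL: ${{ secrets.PULP_ENGINE_API_URL }}
          PULP_ENGINE_API_KEY: ${{ secrets.PULP_ENGINE_ADMIN_KEY }}
```

The `needs: test` edge is the enforcement mentioned above: promotion cannot run unless the test job succeeded. The `prod` promotion would be a third job behind a manual-approval environment.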
## Secret hygiene
- The test harness hits production render routes with an API key. Use a dedicated CI key with the minimum scope required (`render` works for `pulp-engine test`; `admin` is required for `pulp-engine promote`).
- Baseline files are plain text / PNG — commit them alongside your fixtures in version control. They contain no secrets.
- Never commit your API key. The CLI reads from `PULP_ENGINE_API_KEY`, so CI secret stores work out of the box.
## Limitations (v1)
- PDF snapshots compare the first page only. Multi-page diffs are a follow-up.
- Snapshot diff output shows a short unified-diff-like preview, not a structured HTML tree diff. If that becomes painful, the roadmap has `parse5`-backed tree diffing as a follow-up.
- Test reports are not uploaded anywhere automatically — the JUnit reporter is the interop point, and CI providers handle ingestion.