KA Evaluation View — Explorer Documentation
This guide explains how the KA Candidate Detail page is assembled, what each metric represents, and how to trace entries/exits back to the stored evaluation payload.
1. Page Purpose
The view is a diagnostic tool for pattern evaluations produced by `ka_evaluation.py`. Every card summarises a slice of the `simulation_json` payload stored in `ka_pattern_candidates`. Use it to:
- Verify the candle window that was sampled and the resulting short/long/price returns.
- Inspect the synthetic entry/exit rationale that the evaluator now emits.
- Compare the pattern’s edge against baseline projections across future windows.
- Cross-check trade sizing assumptions and reference prices logged with the candidate.
2. Data Flow Overview
- Evaluation loop (`ka_evaluation.py::runner`) picks a trading pair, pulls candles, and generates synthetic pattern candidates.
- Simulation payload (`_simulate_four_candle_strategy`) scans the dataset for qualifying windows, builds trade history, and assembles `entry_context`/`exit_context` summaries.
- Storage (`KaEvaluationStorage.persist_results`) serialises the dataclasses via `dataclasses.asdict` into the `simulation_json` column (sketched below).
- View (`ka_view.php`) retrieves the row by `id` or `candidate_id`, decodes JSON fields, and renders the cards and charts.
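The storage step is worth seeing in miniature: it is essentially a dataclass-to-JSON conversion. A minimal sketch, assuming `PatternSimulation` is a plain dataclass; only the keys documented in section 3 are taken from this guide, the defaults and everything else are placeholders, not the evaluator's actual schema:

```python
import json
from dataclasses import asdict, dataclass, field


# Illustrative stand-in for the evaluator's dataclass; field names match the
# documented JSON keys, defaults are placeholders.
@dataclass
class PatternSimulation:
    short_term_return: float = 0.0
    long_return: float = 0.0
    short_return: float = 0.0
    window_candles: list = field(default_factory=list)
    entry_context: dict = field(default_factory=dict)
    exit_context: dict = field(default_factory=dict)
    comparison: list = field(default_factory=list)


def to_simulation_json(sim: PatternSimulation) -> str:
    # Mirrors the persist step described above: dataclasses.asdict produces a
    # plain dict, which is dumped into the simulation_json longtext column.
    return json.dumps(asdict(sim))


print(to_simulation_json(PatternSimulation(short_term_return=1.8)))
```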
Prototype pattern logic
- Window size: four consecutive candles (`pattern_window_size`).
- Long trigger: each close is lower than the previous one, the total drop exceeds `drop_threshold_pct`, and the last candle closes green. Opens a long trade at the close of the fourth candle.
- Short trigger: mirrored rules (four higher closes, total rise above `rise_threshold_pct`, last candle closes red). Both triggers are sketched in code after this list.
- Exit rule: wait for the opposite signal or, if configured, cap exposure at `max_hold_bars` candles. Any open position is force-closed at dataset end.
- Returns: each trade subtracts two commissions (entry + exit). Aggregate performance is `short_term_return`.
- Persistence filter: simulations with no trades or a non-positive total return are discarded and never stored, so every candidate in the UI represents at least one profitable pattern sequence.
- Context candles: the detail view shows both `context_candles_before` and `context_candles_after`, so you can inspect price action leading into the pattern and the immediate aftermath after the last trade closes.
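The trigger rules translate almost line-for-line into code. A sketch assuming each candle is a dict with `open` and `close` keys and that the drop/rise is measured close-to-close across the window; the function names are hypothetical, not the evaluator's:

```python
def long_trigger(window: list[dict], drop_threshold_pct: float) -> bool:
    """Four consecutive lower closes, a total drop beyond the threshold,
    and the final candle closing green (close above open)."""
    closes = [c["close"] for c in window]
    if len(closes) != 4:
        return False
    lower_each_bar = all(closes[i] < closes[i - 1] for i in range(1, 4))
    total_drop_pct = (closes[0] - closes[-1]) / closes[0] * 100  # close-to-close drop
    last_green = window[-1]["close"] > window[-1]["open"]
    return lower_each_bar and total_drop_pct > drop_threshold_pct and last_green


def short_trigger(window: list[dict], rise_threshold_pct: float) -> bool:
    """Mirrored rules: four consecutive higher closes, a total rise beyond the
    threshold, and the final candle closing red (close below open)."""
    closes = [c["close"] for c in window]
    if len(closes) != 4:
        return False
    higher_each_bar = all(closes[i] > closes[i - 1] for i in range(1, 4))
    total_rise_pct = (closes[-1] - closes[0]) / closes[0] * 100  # close-to-close rise
    last_red = window[-1]["close"] < window[-1]["open"]
    return higher_each_bar and total_rise_pct > rise_threshold_pct and last_red
```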
Because the entry/exit contexts were only recently added to `simulation_json`, historical rows generated before April 2025 will show “—” for the entry/exit explanation cards. Re-run the evaluator if you want the richer context for older candidates.
3. Key Payload Fields
The table below maps the main JSON keys to the UI components:
| JSON path | Type | Description & UI usage |
| --- | --- | --- |
| `simulation_json.short_term_return` | float | Displayed as “Pattern return” in the Overview stat cards and multiplied by `trade_size` for P&L (see the sketch below the table). |
| `simulation_json.long_return`, `short_return` | float | Baseline comparisons, shown both in the Overview cards and in the Scenario table. |
| `simulation_json.window_candles` | array | High/low/open/close data for the highlighted window; drawn as the candlestick chart with context bars. |
| `simulation_json.entry_context` | object | New in this revision. Carries the entry `summary`, `notes`, orientation bias, and baseline edges. Rendered as “Entry rationale”. |
| `simulation_json.exit_context` | object | Lists why the synthetic trade closed (currently always “window_end”) and how future windows behaved. Rendered as “Exit rationale”. |
| `simulation_json.comparison` | array | One element per forward projection window, each containing `long_return`, `short_return`, `price_return`, and timestamps. Drives the Forward Projections table and chart series. |
| `metadata_json.window_size` | int | Window length in candles, surfaced in the Overview → “Window length” fact. |
| `reasoning_json.short_term_return` | float | Legacy reasoning bundle kept for compatibility. Not required for the new rationale blocks. |
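To make the first row concrete, here is a hedged sketch of the decode-and-multiply step behind the P&L figure. `ka_view.php` does the equivalent in PHP; whether `short_term_return` is stored as a percentage or a fraction is an assumption here:

```python
import json


def pattern_pnl(simulation_json: str, trade_size: float) -> float:
    # Reproduce the Overview card math: "Pattern return" x trade_size.
    # Assumption: short_term_return is stored as a percentage; drop the /100
    # if the evaluator stores it as a fraction.
    payload = json.loads(simulation_json)
    return trade_size * payload["short_term_return"] / 100.0
```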
4. Entry & Exit Contexts
Entry context (`entry_context`)
- trigger: currently `window_start`, because the stub enters at the first candle. Reserved for future triggers.
- orientation: `long`, `short`, or `flat`, based on the sign of `pattern_return`.
- summary: Sentence combining the entry rule with pattern vs baseline returns.
- notes: Bullet list including edges vs the long/short baselines and the synthetic trade count.
- baselines: Raw numeric snapshot (pattern/long/short) kept for downstream use. (A sketch of how these fields are assembled follows at the end of this section.)
Exit context (`exit_context`)
- trigger: `window_end`; the evaluation always closes at the last candle.
- summary: Explains the close timing, projection positivity counts, and the window outcome.
- notes: Underlying price move and averaged projection returns when available.
- projections: Aggregated counts and averages used to populate the Forward Projections fact cards.
Once real pattern logic replaces the stub, populate the `notes` with the actual rules that caused trades to fire.
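For orientation, a minimal Python sketch of what a `_build_entry_context`-style helper could produce from the baseline returns. It uses only the fields documented above; the real helper lives in `ka_evaluation.py` and may differ in wording and detail:

```python
def build_entry_context(pattern_return: float, long_return: float,
                        short_return: float, trade_count: int) -> dict:
    # Orientation follows the sign of pattern_return, as documented above.
    if pattern_return > 0:
        orientation = "long"
    elif pattern_return < 0:
        orientation = "short"
    else:
        orientation = "flat"
    return {
        "trigger": "window_start",  # the stub always enters at the first candle
        "orientation": orientation,
        "summary": (
            f"Entered at the window start; pattern returned {pattern_return:.2f}% "
            f"vs long {long_return:.2f}% / short {short_return:.2f}% baselines."
        ),
        "notes": [
            f"Edge vs long baseline: {pattern_return - long_return:.2f} pp",
            f"Edge vs short baseline: {pattern_return - short_return:.2f} pp",
            f"Synthetic trades in window: {trade_count}",
        ],
        "baselines": {"pattern": pattern_return, "long": long_return, "short": short_return},
    }
```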
5. Manual Trade Replay Guide
You do not need local Python access to repeat a candidate’s trade. Everything required is on the KA view. Follow this checklist with the trading/charting software you prefer:
- Capture the inputs. Note the Trading pair, Interval, Entry/Exit time, Entry/Exit price, and trade size shown under “Window & Trade”. Download the raw payload if you want exact candles.
- Set the timeframe. In your platform, pick the same interval (e.g., 1m, 5m). Convert the timestamps from the view (UTC) into your platform’s timezone before marking the candles.
- Mark the window. Highlight the candle range using `window_size` and the recorded start time. This ensures indicators or overlays use the same slice as the evaluator.
- Recreate the trade. Use the orientation from the Entry rationale: open a synthetic position at the Entry price on the first candle of the window, and exit at the Exit price when the final candle closes. The baseline returns/cards let you compare to buy-and-hold benchmarks.
- Match overlays. On the chart, the shaded green/red bands highlight each executed trade while the circles with callout bubbles explain the entry/exit rationale; mirror these candles in your platform to verify the sequence.
- Apply position sizing. Multiply the percentage return by the documented `trade_size` (or your own notional) to compute cash P&L; the Scenario table shows the calculations the UI performs (see the sketch after this checklist).
- Review projections. Use the Forward Projections table to see how price behaved after the exit. You can optionally run paper trades on those future windows using the same methodology.
- Document findings. Record whether the recreated trade matches the evaluator’s percentages. If you adjusted rules, jot them down for repeatability.
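Two of the fiddlier steps, timezone conversion and cash P&L, fit in a few lines of Python. A sketch assuming the view prints ISO-8601 UTC timestamps without a trailing “Z” and percentage (not fractional) returns; the `Europe/Berlin` zone is just a placeholder:

```python
from datetime import datetime
from zoneinfo import ZoneInfo


def to_local(utc_timestamp: str, tz: str = "Europe/Berlin") -> datetime:
    # Convert a UTC timestamp from the KA view into your platform's timezone.
    dt = datetime.fromisoformat(utc_timestamp).replace(tzinfo=ZoneInfo("UTC"))
    return dt.astimezone(ZoneInfo(tz))


def cash_pnl(return_pct: float, trade_size: float) -> float:
    # Multiply the percentage return by the notional, as the Scenario table does.
    return trade_size * return_pct / 100.0


print(to_local("2025-04-01T12:00:00"))  # 2025-04-01 14:00:00+02:00 for Europe/Berlin
print(cash_pnl(1.8, 5_000))             # 90.0
```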
6. Chart & Returns Panel
Performance chart
- The highlighted rectangle equals `window_candles`. Candles before/after show contextual ranges.
- Blue line traces close prices; dashed lines mark the pattern window used for detection.
- Green/red shaded zones show each executed trade; callout bubbles summarise the entry and exit rationale directly on the chart.
- The Plotly toolbar (top-right) lets you zoom, pan, and reset the view; scroll zoom is enabled for quick inspection.
- Legend strings come from `ka_view.php`’s JavaScript block; adjust there for localisation.
Returns projection (dotted chart)
- X-axis: forward window end times (`comparison[*].end_time`).
- Y-axis: percentage return since the evaluation exit. The pattern line is only plotted at the first point.
- Zero baseline corresponds to “no change versus exit price” (see the sketch below for extracting the series).
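If you want to rebuild the dotted series outside the UI, the data points come straight from the comparison entries. A minimal sketch using the field names from section 3; the live chart itself is assembled in `ka_view.php`’s JavaScript, not in Python:

```python
def projection_series(comparison: list[dict]) -> tuple[list, list, list]:
    # (end_time, price_return) pairs for the dotted returns chart, plus a flat
    # zero baseline meaning "no change versus exit price".
    x = [entry["end_time"] for entry in comparison]
    y = [entry["price_return"] for entry in comparison]
    return x, y, [0.0] * len(x)
```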
7. Interpreting the Fact Grids
Overview
Shows headline returns and metadata. Values are formatted with the helper functions `format_percent`, `format_money`, and `trend_class`.
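As a rough guide to how values render, here are Python approximations of that behaviour; the real `format_percent`, `format_money`, and `trend_class` are PHP functions in `ka_view.php`, and details such as decimal places and class names are assumptions:

```python
def format_percent(value) -> str:
    # Non-numeric or missing values render as the "—" placeholder seen in the UI.
    if not isinstance(value, (int, float)):
        return "—"
    return f"{value:+.2f}%"


def format_money(value) -> str:
    if not isinstance(value, (int, float)):
        return "—"
    return f"{value:,.2f}"


def trend_class(value) -> str:
    # CSS class hint used to colour a stat: positive, negative, or neutral.
    if not isinstance(value, (int, float)) or value == 0:
        return "neutral"
    return "positive" if value > 0 else "negative"
```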
Window & Trade
Aggregates entry/exit timestamps, prices, duration, trade size, and triggers. Rationale blocks render only when summaries are present.
Forward Projections
The fact grid uses `projectionFacts`, derived from `exit_context.projections` plus raw counts for the long/short/price windows.
Metadata Highlights
Displays score and timestamps directly from the table row. If `metadata_json` contains extra keys, they are listed in a table beneath.
8. Adding Your Own Signals
- Extend `PatternSimulation` with new fields (e.g., `indicator_snapshots`); a sketch follows this list.
- Populate them in `_simulate_pattern` and ensure they serialise cleanly.
- Expose the data in `ka_view.php` by updating the decoded arrays/fact grids.
- Document the meaning in this file so analysts can interpret the new metrics quickly.
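A sketch of the first two steps, assuming `PatternSimulation` is an ordinary dataclass; only the documented field names are real, and `indicator_snapshots` is the hypothetical example from the list above:

```python
from dataclasses import dataclass, field


@dataclass
class PatternSimulation:
    # ...existing fields such as short_term_return, window_candles, entry_context, ...
    short_term_return: float = 0.0
    # New field: keep it JSON-serialisable (plain lists/dicts/numbers) so that
    # dataclasses.asdict + json.dumps in persist_results keep working without
    # custom encoders.
    indicator_snapshots: list[dict] = field(default_factory=list)
```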
Because storage uses a longtext column holding JSON, you can evolve the schema without database migrations, but keep an eye on payload size (the MySQL longtext limit is about 4 GB; practically, keep each row below a few megabytes).
9. FAQ
Why is the candidate ID a four-digit number?
The stub generator uses `PATTERN-{random.randint(1000, 9999)}` purely as a label. It does not encode significance or rank.
Where do the entry/exit notes come from?
They are currently derived from baseline returns inside `_build_entry_context` / `_build_exit_context`. Replace those helpers when real pattern logic is available.
Do I need to migrate the database?
No. Additional context lives inside the JSON payload, so the existing schema is fine.
How can I export the raw JSON?
Use the “Raw Payloads” accordions at the bottom of the detail page or query the MySQL table directly.
10. Troubleshooting Checklist
- If the chart is blank, ensure `window_candles` decoded into a non-empty array (a quick check appears after this list).
- If documentation links 404, verify the files exist under `web/docs/` and the web server has read access.
- If percentages show “—”, confirm the evaluator wrote numeric values and not strings/NaN.
- Re-run evaluations with `ka_evaluation.py` after changing window logic to refresh the persisted JSON.
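When the chart stays blank, a quick way to confirm whether the payload itself is the problem is to pull the row and check that `window_candles` decodes to a non-empty list. The connection parameters below are placeholders; only the table and column names documented in this guide are assumed:

```python
import json

import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="explorer", password="...", database="explorer",  # placeholders
)
cur = conn.cursor(dictionary=True)
cur.execute(
    "SELECT simulation_json FROM ka_pattern_candidates WHERE candidate_id = %s",
    ("PATTERN-1234",),  # placeholder candidate_id
)
row = cur.fetchone()
payload = json.loads(row["simulation_json"]) if row else {}
candles = payload.get("window_candles") or []
print(f"window_candles entries: {len(candles)}")  # should be > 0 for a drawable chart
```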