fabric_slice
Source: packages/fabric-mcp/src/tools.ts
Build an LLM-ready context window for a target symbol, bounded by a real token budget measured with the cl100k_base BPE tokenizer (the GPT-4 tokenizer). The tool BFS-walks outward from the symbol for up to 2 hops, sorts candidates by tier, then greedily emits HOT nodes (full source inline), WARM nodes (condensed to 200 characters), and COLD nodes (a one-line reference with the node id) until the next candidate would exceed max_tokens. The response returns the context string, per-tier counts, and honest tokens_used/max_tokens fields so callers can compute accurate utilisation.
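The tier-ordered greedy loop can be sketched as follows. This is a minimal illustration, not the real implementation: the names (`SliceNode`, `renderNode`, `buildSlice`) are hypothetical, and the chars/4 heuristic stands in for the actual cl100k_base BPE count.

```typescript
type Tier = "HOT" | "WARM" | "COLD";

interface SliceNode {
  id: string;
  tier: Tier;
  source: string;
}

// Stand-in for the cl100k_base BPE count: ~4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function renderNode(n: SliceNode): string {
  switch (n.tier) {
    case "HOT":
      return n.source; // full source inline
    case "WARM":
      return n.source.slice(0, 200); // condensed to 200 chars
    case "COLD":
      return `// see ${n.id}`; // one-line reference by node id
  }
}

function buildSlice(nodes: SliceNode[], maxTokens: number) {
  const tierOrder: Tier[] = ["HOT", "WARM", "COLD"];
  // Sort by tier, then greedily emit until the next candidate would
  // exceed the budget: the slice is strictly bounded, never overrun.
  const sorted = [...nodes].sort(
    (a, b) => tierOrder.indexOf(a.tier) - tierOrder.indexOf(b.tier),
  );
  const parts: string[] = [];
  let used = 0;
  for (const n of sorted) {
    const rendered = renderNode(n);
    const cost = estimateTokens(rendered);
    if (used + cost > maxTokens) break; // stop before exceeding the budget
    parts.push(rendered);
    used += cost;
  }
  return { context: parts.join("\n"), tokens_used: used, max_tokens: maxTokens };
}
```

Because the loop checks the budget before emitting, `tokens_used` reported to the caller can never exceed `max_tokens`, which is what makes the utilisation figures honest.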
Input schema
| Field | Type | Required | Description |
|---|---|---|---|
| `target` | string | yes | Symbol name to generate context for. |
| `max_tokens` | number | no | Hard token budget (default: 8000). The slice is strictly bounded: overrun is measured and reported, never silently exceeded. |
| `task` | string | no | Optional task header prepended to the slice (`# Task: …`). |
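A call matching this schema might look like the following MCP tool invocation. The argument values are illustrative, not taken from the source.

```json
{
  "name": "fabric_slice",
  "arguments": {
    "target": "buildContextWindow",
    "max_tokens": 4000,
    "task": "Explain how the token budget is enforced"
  }
}
```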
