Source: packages/fabric-mcp/src/tools.ts (mcp-generated)

fabric_slice

Build an LLM-ready context window for a target symbol, bounded by a real token budget (cl100k_base BPE, the same tokenizer used by GPT-4 and Claude). The tool BFSes outward from the symbol for up to 2 hops, sorts candidates by tier, then greedily emits HOT nodes (full source inline), WARM nodes (condensed to 200 chars), and COLD nodes (a reference line with the node id) until the next candidate would exceed max_tokens. The response returns the context string, per-tier counts, and honest tokens_used/max_tokens fields so callers can compute accurate utilisation.
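The tiered greedy assembly described above can be sketched as follows. This is an illustrative reconstruction, not the actual implementation in tools.ts: the names (`buildSlice`, `Node`, `approxTokens`) are hypothetical, and the real tool counts tokens with the cl100k_base BPE tokenizer, which is approximated here as roughly 4 characters per token.

```typescript
// Illustrative sketch of fabric_slice's tier-ordered greedy emission.
// Assumption: ~4 chars per token stands in for a real cl100k_base count.

type Tier = "HOT" | "WARM" | "COLD";

interface Node {
  id: string;
  tier: Tier;
  source: string;
}

const approxTokens = (s: string): number => Math.ceil(s.length / 4);

function buildSlice(nodes: Node[], maxTokens: number) {
  // Sort by tier so HOT nodes are considered before WARM, then COLD.
  const tierRank: Record<Tier, number> = { HOT: 0, WARM: 1, COLD: 2 };
  const sorted = [...nodes].sort((a, b) => tierRank[a.tier] - tierRank[b.tier]);

  const parts: string[] = [];
  const counts: Record<Tier, number> = { HOT: 0, WARM: 0, COLD: 0 };
  let used = 0;

  for (const n of sorted) {
    // HOT: full source; WARM: condensed to 200 chars; COLD: reference line.
    const text =
      n.tier === "HOT" ? n.source :
      n.tier === "WARM" ? n.source.slice(0, 200) :
      `// see node ${n.id}`;
    const cost = approxTokens(text);
    // Strictly bounded: stop before the budget would be exceeded.
    if (used + cost > maxTokens) break;
    parts.push(text);
    counts[n.tier]++;
    used += cost;
  }

  return { context: parts.join("\n"), counts, tokensUsed: used, maxTokens };
}
```

Note the design point the description calls out: the budget is a hard bound, so the loop stops before overrunning rather than emitting a final oversized node and reporting the overshoot.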

Input schema

| Field      | Type   | Required | Description |
|------------|--------|----------|-------------|
| target     | string | yes      | Symbol name to generate context for. |
| max_tokens | number | no       | Hard token budget. The slice is strictly bounded — overrun is measured and reported, not silently exceeded. (default: 8000) |
| task       | string | no       | Optional task header prepended to the slice ("# Task: …"). |
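A call using this schema might look like the following. The field values are purely illustrative (there is no `renderContext` symbol in the source); only the field names and types come from the schema above.

```json
{
  "target": "renderContext",
  "max_tokens": 4000,
  "task": "Explain how the token budget is enforced"
}
```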
