Context: Studying Claude Code source code (backup branch)

Feature Gate System and tengu_chair_sermon

How feature gates work

checkStatsigFeatureGate_CACHED_MAY_BE_STALE(gate) in growthbook.ts:804 checks the following sources in order:

  1. Env var overrides (getEnvOverrides()) for eval harnesses
  2. Config overrides (getConfigOverrides()) from settings
  3. GrowthBook cached features (cachedGrowthBookFeatures in global config, persisted to disk)
  4. Statsig cached gates (legacy fallback during migration)
  5. Default: false
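The lookup order above can be pictured as a simple fallback chain. This is a hypothetical sketch; the names GateCaches and checkGateSketch, and the cache shapes, are assumptions, not the real growthbook.ts API:

```typescript
// Hypothetical sketch of the gate-resolution order described above;
// names and shapes are illustrative, not the real growthbook.ts types.
type GateSource = Record<string, boolean | undefined>;

interface GateCaches {
  envOverrides: GateSource;     // 1. env var overrides (eval harnesses)
  configOverrides: GateSource;  // 2. overrides from settings
  growthBookCached: GateSource; // 3. cachedGrowthBookFeatures (disk-persisted)
  statsigCached: GateSource;    // 4. legacy Statsig cache (migration fallback)
}

function checkGateSketch(gate: string, caches: GateCaches): boolean {
  for (const source of [
    caches.envOverrides,
    caches.configOverrides,
    caches.growthBookCached,
    caches.statsigCached,
  ]) {
    const value = source[gate];
    if (value !== undefined) return value; // first source with a value wins
  }
  return false; // 5. default when no source knows the gate
}
```

Note the override sources short-circuit the caches, which is why env/config overrides let eval harnesses pin a gate regardless of what the server rolled out.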

GrowthBook flags are fetched from Anthropic’s servers and cached locally. Gate values are determined server-side from user attributes as part of A/B experiment rollouts. Users can’t toggle these unless they set env var or config overrides (intended for internal eval harnesses).

tengu_chair_sermon: system-reminder wrapping gate

Controls whether ALL attachment messages get universal <system-reminder> wrapping, plus a “smoosh” optimization.

When ON (true) (messages.ts):

  • ensureSystemReminderWrap is applied to all attachment messages (line 2276), guaranteeing every attachment gets <system-reminder> tags regardless of whether its normalizeAttachmentForAPI case remembered to wrap it
  • smooshSystemReminderSiblings runs (line 2337): it folds <system-reminder>-prefixed text blocks INTO the adjacent tool_result.content instead of leaving them as separate sibling blocks
  • This eliminated a problem where separate text siblings after a tool_result taught models to emit the stop sequence (A/B result: the rate dropped from 92% to 0%, per the comment at lines 2605–2608)
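A minimal sketch of the smoosh step. The block shapes and the name smooshSketch are assumptions for illustration; the real messages.ts logic handles more cases:

```typescript
// Illustrative sketch of folding <system-reminder> text siblings into the
// preceding tool_result's content; block shapes are assumptions, not the
// real Claude Code types.
type Block =
  | { type: "tool_result"; tool_use_id: string; content: Block[] }
  | { type: "text"; text: string };

function smooshSketch(blocks: Block[]): Block[] {
  const out: Block[] = [];
  for (const block of blocks) {
    const prev = out[out.length - 1];
    if (
      block.type === "text" &&
      block.text.startsWith("<system-reminder>") &&
      prev?.type === "tool_result"
    ) {
      // Fold the reminder into the tool_result instead of leaving it
      // as a separate sibling text block.
      prev.content.push(block);
    } else {
      out.push(block);
    }
  }
  return out;
}
```

The effect is that the reminder text travels inside the tool_result block rather than as a trailing sibling the model could learn spurious patterns from.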

When OFF (false):

  • Only attachment types that explicitly call wrapInSystemReminder in their normalizeAttachmentForAPI case get wrapped
  • The legacy, narrower smoosh runs instead (lines 2616–2625): it applies only to string-content tool_results with all-text siblings
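The universal wrap in the ON path can be pictured as an idempotent helper, so attachments already wrapped by their normalizeAttachmentForAPI case are not double-wrapped. This is a hypothetical sketch, not the real ensureSystemReminderWrap in messages.ts:

```typescript
// Hypothetical idempotent wrapper: guarantees <system-reminder> tags
// without double-wrapping text that a normalizeAttachmentForAPI case
// already wrapped itself.
function ensureSystemReminderWrapSketch(text: string): string {
  const trimmed = text.trim();
  if (
    trimmed.startsWith("<system-reminder>") &&
    trimmed.endsWith("</system-reminder>")
  ) {
    return text; // already wrapped; leave untouched
  }
  return `<system-reminder>\n${text}\n</system-reminder>`;
}
```

Idempotence is what makes the universal pass at line 2276 safe to run over every attachment, wrapped or not.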

Implication for hook/skill content

When the gate is ON, hook additionalContext and other attachment content is smooshed directly into tool_result.content rather than sitting as a separate user-role text block. The model therefore sees it as part of the tool result itself, not as a separate message, which changes the effective positioning of injected context.
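Concretely, the positioning difference looks roughly like this. The shapes are illustrative data only (not the real Claude Code message types), contrasting where the hook's reminder ends up in each mode:

```typescript
// Gate OFF: hook additionalContext sits as a separate sibling text block
// after the tool_result (illustrative shapes, not real types).
const gateOff = [
  { type: "tool_result", tool_use_id: "t1", content: "file contents" },
  { type: "text", text: "<system-reminder>hook additionalContext</system-reminder>" },
];

// Gate ON: the reminder is smooshed into the tool_result's content array,
// so the model sees it as part of the tool result itself.
const gateOn = [
  {
    type: "tool_result",
    tool_use_id: "t1",
    content: [
      { type: "text", text: "file contents" },
      { type: "text", text: "<system-reminder>hook additionalContext</system-reminder>" },
    ],
  },
];
```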