
Configuration

aide-memory ships with defaults you can tune. If something doesn’t fit (chatty per-file recall, hard-block on first read, periodic Stop nudges), flip a knob.

aide-memory init seeds .aide/config.json with every public setting so you can see and edit them in one place.

Config file location

<project-root>/.aide/config.json

Created at init. Source of truth for the entire knob surface. If missing or malformed, defaults from scripts/hooks/defaults.json are used automatically.

You can also cat .aide/config-reference.md for an auto-generated reference of every setting, its default, and a description, regenerated on every init.

All public config keys

Hook behavior

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| hooks.read.maxBlocks | number | 1 | Max pre-read hard-blocks per file path per session. 0 disables the pre-read hook entirely (no block, no soft nudge, no tracking). 1 blocks on the first read of each file with scoped memories; later reads are silent or soft-nudged. |
| hooks.edit.maxBlocks | number | 1 | Same as hooks.read.maxBlocks, for pre-edit. |
| hooks.search.mode | "off" \| "soft" \| "block" | "soft" | Pre-search hook behavior. "off" runs Grep silently. "soft" (default) lets Grep proceed but injects “N memories match; call aide_search for structured results.” "block" hard-blocks Grep until the agent calls aide_search. |
| hooks.correction.enabled | boolean | true | Detect correction phrasings (“no, use X instead”, etc.) in user messages and prompt aide_remember. |
| hooks.precompact.mode | "cleanup" \| "off" | "cleanup" | "cleanup" (default) clears recalled-paths, stop-count, and correction-pending tracking files at /compact so the post-compact turn re-prompts cleanly. "off" preserves the tracking across compaction (useful if you compact often and don’t want re-prompts each time). |
| hooks.stop.schedule | array | [{until:9,every:3},{until:29,every:5},{every:10}] | Phased interval for the Stop hook reflection nudge. Default ramps every 3 turns through turn 9, every 5 through turn 29, every 10 afterwards. |
| hooks.visible | boolean | true | Surface user-facing aide-memory · ... systemMessage lines when hooks fire (soft recalls, correction detected, session-start injection, Stop checkpoints). Set false to hide all aide-memory systemMessage lines; hooks still function (context injection + block enforcement unchanged). Does not affect what the agent sees. |
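Put together, a .aide/config.json that quiets the hook surface might look like the fragment below. The nesting (dotted keys mapped to nested objects) is an assumption for illustration; the file seeded by aide-memory init shows the exact shape.

```json
{
  "hooks": {
    "read": { "maxBlocks": 0 },
    "search": { "mode": "off" },
    "stop": { "schedule": [{ "every": 5 }] },
    "visible": false
  }
}
```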

Recall + scope matching

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| recall.limit | number | 20 | Max memories per aide_recall call before layer-diversity balancing. |
| recall.ensureLayerDiversity | boolean | true | Swap under-represented layers up into results when the total is below recall.layerDiversityMinLimit. |
| recall.layerDiversityMinLimit | number | 5 | Threshold below which the diversity swap applies. |
| recall.minScopeDepth | number | 1 | Minimum fixed-prefix segment count for a scope to be eligible for per-file recall. The default 1 is permissive: any scope with at least 1 segment qualifies. Bump to 2+ for stricter scoping when you have many broad scopes. Broad scopes below the threshold are NOT excluded from memory entirely; they just surface via SessionStart injection instead of per-file. See the visualized breakdown below. |

SessionStart injection

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| injection.enabled | boolean | true | Master switch for SessionStart dynamic injection. When false, the rules file still ships (static content) but carries no memory-derived content. Granular per-layer knobs only apply when this is true. |
| injection.preferences | number \| "all" \| false | 15 | Max preferences injected at SessionStart. 0 / false disables; a number is a hard cap. Top N by recalled_count desc, updated_at desc, so most-used preferences surface first. |
| injection.excludeScopedPreferences | boolean | false | If true, scoped preferences skip SessionStart and surface only via path hooks. |
| injection.technical | number \| boolean | false | Inject technical-layer memories at SessionStart? Default off (technical surfaces via path hooks). A number caps; true = unlimited. |
| injection.area_context | number \| boolean | false | Same for area_context. Default off. |
| injection.guidelines | "all" \| number \| false | "all" | Inject guidelines. Default "all" since guidelines apply broadly. A number caps; false disables. |
| injection.priorityAlwaysOverride | boolean | true | Include any memory with priority: "always" regardless of layer gating. Rendered first (in the ## Always section) so priority memories survive the char cap. |
| injection.maxChars | number | 1200 | Overall character cap for the concatenated injection. Truncates with ...truncated. |

Memory storage

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| memories.hideFromGrep | boolean | true | Add .aide/memories/ to an aide-memory-managed block in .ignore so grep / ripgrep skip it. Live-synced on config change. Forces structured access via aide_recall / aide_search instead of grep dumps. |
| memories.softening.threshold | number | 10 | Below this total-memory count, pre-read/pre-edit hard blocks become soft nudges. Keeps small projects friendly while you’re seeding. Bump higher for “stay gentle longer”, drop to 0 for “hard-block from memory #1”. |
| memories.defaultShared | boolean | true | Default shared value for new preferences memories when the caller doesn’t pass one explicitly. true (default) writes to preferences/shared/ (committed). false writes to preferences/personal/ (gitignored). A per-call shared: true \| false always overrides. Other layers (technical, area_context, guidelines) ignore this; they live in their layer folder regardless. |
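The defaultShared resolution above can be sketched as a small routing function. The function name and the .aide/memories/ path layout here are illustrative assumptions drawn from the table, not aide-memory's actual internals.

```javascript
// Where a new memory lands on disk, per the memories.defaultShared rules.
// Hypothetical helper; folder layout follows the table above.
function memoryDir(layer, shared, config) {
  // Only the preferences layer honors shared; others use their layer folder.
  if (layer !== "preferences") return `.aide/memories/${layer}/`;
  const effective = shared !== undefined ? shared : config.memories.defaultShared;
  return effective
    ? ".aide/memories/preferences/shared/"    // committed
    : ".aide/memories/preferences/personal/"; // gitignored
}
```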

Integration + embeddings

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| version | number | 1 | Config schema version. |
| tags.presets | string[] | ["architecture", "testing", "security", "style", "integration", "config", "migration", "performance", "api-contract"] | Available tag presets surfaced by aide_remember. |
| contributor | string | "auto" | Contributor name attached to new memories. Default "auto" reads git config user.name at memory-creation time. Override with a team handle if you want all stored memories under a single name. |
| embeddings.backend | "auto" \| "transformers" \| "ollama" \| "none" | "auto" | Semantic-search backend. "auto" tries transformers (if installed), then ollama, then keyword-only. "none" disables the semantic supplement. |
| embeddings.model | string | "auto" | Model name. "auto" uses backend defaults (Xenova/bge-small-en-v1.5 for transformers, nomic-embed-text for ollama). |
| telemetry.enabled | boolean | true | Controls both local SQLite analytics and PostHog remote telemetry. Set false to disable all telemetry. The AIDE_TELEMETRY=off env var also works and takes precedence if set. |
| updates.check | boolean | true | Check for new npm versions after each command (non-blocking). |

Visualized: scope-matching dial (recall.minScopeDepth)

When you open a file, aide-memory decides which scoped memories are “specific enough” to surface per-file vs which belong at SessionStart-only. recall.minScopeDepth is the dial that controls this.

Think of scopes ranked by how specific they are; more path segments means more specific:

most specific  ↑   src/api/routes/**     (3 segments)
               │   src/api/**            (2 segments)
 less specific ↓   src/**                (1 segment)

The dial is a BAR: scopes above it surface per-file, scopes below it go to SessionStart instead.

Default (minScopeDepth: 1), permissive, works across project shapes:

  src/api/routes/**     ← per-file recall ✓
  src/api/**            ← per-file recall ✓
  src/**                ← per-file recall ✓
 ─────── BAR ───────
  (nothing below)

Every scoped memory surfaces when you open a matching file. Works for src/-prefixed projects, flat Next.js / SvelteKit (pages/**, components/**), and monorepos (packages/foo/**).

minScopeDepth: 2, quieter, recommended when you have many broad scopes:

  src/api/routes/**     ← per-file recall ✓
  src/api/**            ← per-file recall ✓
 ─────── BAR ───────
  src/**                ← SessionStart only

minScopeDepth: 3, strict edge case: only very-narrow scopes surface per-file.
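The eligibility test above boils down to counting fixed (non-glob) path segments in a scope and comparing against the dial. A minimal sketch, assuming segments containing a wildcard don't count toward depth (the helper name is hypothetical, not aide-memory's actual matcher):

```javascript
// Is this scope "specific enough" for per-file recall?
// Counts fixed-prefix segments and compares to recall.minScopeDepth.
function eligibleForPerFileRecall(scope, minScopeDepth) {
  const fixedSegments = scope
    .split("/")
    .filter(seg => seg.length > 0 && !seg.includes("*")); // drop "**" etc.
  return fixedSegments.length >= minScopeDepth;
}
```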


Visualized: Stop-hook rhythm (hooks.stop.schedule)

Turn:   1   2   3   4   5   6   7   8   9  10 11 12 13 14 15 16 17 18 19 20 ...
              ▲           ▲           ▲              ▲              ▲
            first       second       third          every-5         every-5
             fire        fire         fire

Phase 1 (turns 1-9):   every 3 turns   ← dense, fresh decisions
Phase 2 (turns 10-29): every 5 turns   ← mid-session
Phase 3 (turns 30+):   every 10 turns  ← long session, rare

Custom schedules:

aide-memory config hooks.stop.schedule '[{"every":5}]'                           # every 5, forever
aide-memory config hooks.stop.schedule '[{"until":20,"every":3},{"every":999999}]'  # dense first 20, off after
aide-memory config hooks.stop.schedule '[{"every":999999}]'                      # effectively off
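My reading of the phased cadence, as a function from schedule to fire turns: each phase applies while the turn is at or below its until (the last phase is open-ended), and a fire happens once the gap since the last fire reaches that phase's every. A sketch under those assumptions; the real counter may differ in off-by-one details.

```javascript
// Which turns does the Stop hook fire on, given a phased schedule?
function stopFireTurns(schedule, maxTurn) {
  const fires = [];
  let lastFire = 0;
  for (let turn = 1; turn <= maxTurn; turn++) {
    // First phase whose `until` covers this turn; a phase with no
    // `until` is open-ended and catches everything after.
    const phase = schedule.find(p => p.until === undefined || turn <= p.until);
    if (phase && turn - lastFire >= phase.every) {
      fires.push(turn);
      lastFire = turn;
    }
  }
  return fires;
}
```

With the default schedule this reproduces the diagram above: fires at 3, 6, 9, then 14, 19, 24, 29, then every 10.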

Visualized: softening threshold (memories.softening.threshold)

Memories in store:
   0 ──── 5 ──── 9       10 ───── 20 ────── 50+
   │               │     │                     │
   └── SOFT ───────┘     └────── HARD ─────────┘
       (small project)        (full blocking)
  • 0 memories: pre-read/pre-edit silent
  • 1 to 9 memories: soft nudge only, never hard-blocks
  • 10+ memories: hard-blocks on first touch of each file with scoped memories

Bump higher (e.g. 25) for a longer “soft only” ramp; bump to 0 for “hard-block from memory #1” (aggressive use).
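The gating above reduces to a three-way decision on total memory count (hypothetical helper name, written only to mirror the diagram):

```javascript
// Pre-read/pre-edit behavior as a function of store size and
// memories.softening.threshold.
function blockMode(totalMemories, threshold) {
  if (totalMemories === 0) return "silent"; // nothing to recall
  return totalMemories < threshold ? "soft" : "hard";
}
```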


Visualized: SessionStart budget (injection.maxChars)

Sections are concatenated in this order and clipped at the cap:

  ┌──────────────────────────────────────────────────┐
  │ ## Always             ← priority:"always" mems   │  (renders FIRST, survives clip)
  ├──────────────────────────────────────────────────┤
  │ ## Session Preferences                           │
  ├──────────────────────────────────────────────────┤
  │ ## Technical Context                             │  (only if injection.technical)
  ├──────────────────────────────────────────────────┤
  │ ## Area Context                                  │  (only if injection.area_context)
  ├──────────────────────────────────────────────────┤
  │ ## Guidelines                                    │
  └──────────────────────────────────────────────────┘
  ↑ total concatenated length ≤ injection.maxChars
    anything over budget gets "...truncated"
  • 1200 (default): typical session-start context, ~300 tokens.
  • 600: aggressive clip; only top-N preferences + always-memories fit.
  • 2000 to 3000: lets richer context through, useful with injection.technical=true or injection.area_context=N.
  • Very large (10000+): essentially no clip.
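The clip itself is a simple length cut over the concatenated sections. A sketch of the behavior described above; the exact join separator and whether the marker counts against the budget are assumptions.

```javascript
// Concatenate rendered sections in order, then clip at injection.maxChars
// with a "...truncated" marker appended past the cut.
function clipInjection(sections, maxChars) {
  const text = sections.join("\n\n");
  if (text.length <= maxChars) return text;
  return text.slice(0, maxChars) + "...truncated";
}
```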

Reading config

aide-memory config hooks.read.maxBlocks         # → 1
aide-memory config recall.minScopeDepth         # → 1
aide-memory config tags.presets                 # → ["architecture",...]

Setting config

aide-memory config memories.defaultShared false
aide-memory config contributor "Ahmed Meky"
aide-memory config recall.minScopeDepth 1
aide-memory config injection.preferences 30

Or just edit .aide/config.json directly. The JSON file is the source of truth; aide-memory’s hooks re-read it on every fire, so changes propagate without restarting anything. For an in-flight Claude Code session, reconnect the MCP server (/mcp → reconnect) so cached derived artifacts resync immediately.

Values are auto-parsed from strings:

  • "true" / "false" become booleans
  • Numeric strings become numbers
  • JSON arrays / objects ({...} / [...]) are parsed as JSON
  • Everything else stays a string
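One way to implement those parsing rules (an illustrative sketch, not aide-memory's actual code):

```javascript
// Auto-parse a CLI string value per the rules above.
function parseValue(raw) {
  if (raw === "true") return true;
  if (raw === "false") return false;
  // Non-empty numeric strings become numbers ("" coerces to 0, so guard it).
  if (raw.trim() !== "" && !Number.isNaN(Number(raw))) return Number(raw);
  if (raw.startsWith("{") || raw.startsWith("[")) {
    try { return JSON.parse(raw); } catch { /* fall through to string */ }
  }
  return raw; // everything else stays a string
}
```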

Embedding configuration

aide-memory supports optional semantic search. When enabled, memories are embedded at creation time and search supplements keyword results with semantic matches when fewer than 3 keyword hits are found.

# Default: auto. Try transformers (if installed), then ollama, then keyword-only
aide-memory config embeddings.backend auto
 
# Force local Transformers.js
aide-memory config embeddings.backend transformers
aide-memory config embeddings.model Xenova/bge-small-en-v1.5
 
# Force Ollama (requires Ollama running at localhost:11434)
aide-memory config embeddings.backend ollama
aide-memory config embeddings.model nomic-embed-text
 
# Disable semantic search entirely (keyword-only)
aide-memory config embeddings.backend none

@huggingface/transformers is in optionalDependencies; npm install -g aide-memory will attempt to install it (npm continues if it fails). Without either backend available, search uses FTS5 keyword (BM25 ranking) with a LIKE fallback.
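The "auto" fallback chain can be expressed as a pure function over availability flags. This is a sketch of the resolution order only; the real probe presumably imports the package and pings Ollama rather than taking flags.

```javascript
// Resolve the effective embeddings backend from the configured setting
// and what is actually available on this machine.
function resolveBackend(setting, available) {
  if (setting !== "auto") return setting; // explicit choice wins ("none" = keyword-only)
  if (available.transformers) return "transformers";
  if (available.ollama) return "ollama";
  return "none";
}
```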

Telemetry

aide-memory has two distinct analytics surfaces. Don’t conflate them.

1. Local SQLite analytics (always on, never transmitted). Tool-call counts and recall events are written to your local SQLite cache at ~/.aide/projects/<hash>/memory.db. This drives aide-memory stats. Purely local; nothing leaves your machine. There is no remote endpoint involved.

2. Anonymized event tallies to PostHog (on by default, env-var to disable). aide-memory sends anonymized event tallies (just counts, no content) to PostHog so we can see which features are used. Disable any time by setting AIDE_TELEMETRY=off in your environment.

What’s sent:

  • Event type (remember, recall, search, etc.)
  • A SHA256-hashed hostname:username for deduplication
  • Platform (e.g. darwin, linux)
  • Node version

What’s never sent: memory content, code, file paths, scope strings, project names, contributor names, query strings, search keywords, recall result content, the number of memories you have, or any other user-identifying data. The events are counts only (“a recall happened on this machine”), no payload.

To disable:

export AIDE_TELEMETRY=off

A note on the bundled key. The PostHog write key ships in the published bundle. This is industry standard for client-side analytics SDKs (Sentry, Mixpanel, PostHog itself all ship the project key in the client). Write keys identify a project so events route to it; they don’t authorize sensitive operations.

Update checks

aide-memory checks for new versions after each command (non-blocking, runs in the background). To disable:

aide-memory config updates.check false

Resetting config

rm .aide/config.json
aide-memory init

Memories are not affected. aide-memory init writes a fresh .aide/config.json seeded with all public defaults but doesn’t touch .aide/memories/. To reset memories AND config, rm -rf .aide/ and re-init (this destroys all stored memories; only do it for a clean start).