OpenMemory Setup Guide
Complete installation and configuration guide for OpenMemory persistent memory system
Overview
OpenMemory is a persistent memory system for AI assistants that enables continuous learning, autonomous operation, and identity formation through contextual memory storage and retrieval. This guide documents the complete setup process used for the Mnemosyne deployment.
Note
This guide assumes an Arch Linux environment. Commands and paths may differ for other distributions.
Prerequisites
Hardware Requirements
- CPU: Modern multi-core processor (tested: Intel Core i9-14900K)
- RAM: Minimum 16GB, recommended 64GB for optimal embedding performance
- Storage: 50GB+ free space (SSDs recommended for database performance)
- GPU: Optional, but recommended for faster embedding generation
Software Prerequisites
- Arch Linux (kernel 6.17.6+ recommended)
- Node.js 18+ and npm
- Python 3.11+
- SQLite 3.x
- Git
Installation
Install System Dependencies
# Update system
sudo pacman -Syu
# Install base dependencies
sudo pacman -S nodejs npm python python-pip sqlite git base-devel
# Verify installations
node --version
npm --version
python --version
sqlite3 --version
Install Ollama
Ollama provides the local embedding model (nomic-embed-text) for semantic memory operations.
# Download and install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Start Ollama service
ollama serve &
# Pull the embedding model
ollama pull nomic-embed-text
# Verify model is available
ollama list
Tip
Run ollama serve in a tmux or screen session for persistent operation, or create a systemd service unit.
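For unattended operation, a systemd unit is the cleaner option. The unit below is a minimal sketch; the binary path and options are assumptions (the official install script already creates an ollama service on many systems, so check first with systemctl status ollama):
# Minimal sketch of a systemd unit for Ollama
sudo tee /etc/systemd/system/ollama.service > /dev/null <<'EOF'
[Unit]
Description=Ollama embedding service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now ollama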
Clone and Build OpenMemory
# Clone the repository
git clone https://github.com/CaviraOSS/OpenMemory.git
cd OpenMemory
# Install backend dependencies
cd backend
npm install
# Install frontend dependencies (if applicable)
cd ../frontend
npm install
# Return to project root
cd ..
Configuration
Database Configuration
OpenMemory uses SQLite for persistent storage. The default database location is backend/memory.db.
# Initialize database schema
cd backend
npm run init-db
# Verify database was created
ls -lh memory.db
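To confirm the schema beyond the file listing, the sqlite3 CLI can inspect the database directly (the exact table names depend on your OpenMemory version):
# List the tables created by the init script
sqlite3 backend/memory.db '.tables'
# Inspect the schema
sqlite3 backend/memory.db '.schema' | head -40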
Embedding Model Configuration
Configure the embedding service to use Ollama's nomic-embed-text model.
# backend/config/embedding.json
{
  "model": "nomic-embed-text",
  "endpoint": "http://localhost:11434",
  "dimensions": 768,
  "batchSize": 32
}
Note
The nomic-embed-text model generates 768-dimensional embeddings. Ensure your vector storage is configured accordingly.
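You can verify the dimensionality end to end by requesting an embedding from Ollama's /api/embeddings endpoint directly:
# Should print 768 for nomic-embed-text
curl -s http://localhost:11434/api/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}' \
  | jq '.embedding | length'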
Memory Sector Configuration
OpenMemory organizes memories into five sectors, each with different decay rates and importance thresholds.
# backend/config/sectors.json
{
  "episodic": {
    "description": "Personal experiences and events",
    "decayRate": 0.1,
    "color": "#4facfe"
  },
  "semantic": {
    "description": "Facts, concepts, and knowledge",
    "decayRate": 0.05,
    "color": "#667eea"
  },
  "procedural": {
    "description": "Skills and procedures",
    "decayRate": 0.02,
    "color": "#f093fb"
  },
  "reflective": {
    "description": "Meta-cognitive insights",
    "decayRate": 0.01,
    "color": "#43e97b"
  },
  "emotional": {
    "description": "Emotional associations",
    "decayRate": 0.15,
    "color": "#fa709a"
  }
}
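For intuition about these rates, assume the exponential Ebbinghaus-style decay cited in the Scientific Foundations section, i(t) = i0 * e^(-decayRate * t). This is a sketch of the general formula, not necessarily OpenMemory's exact internals: an episodic memory stored at importance 0.8 would fall to roughly 0.4 after a week, while a reflective memory at the same importance would barely move.
# Hypothetical decay calculation: importance 0.8, episodic rate 0.1, t = 7 days
awk -v i0=0.8 -v rate=0.1 -v days=7 \
  'BEGIN { printf "decayed importance: %.3f\n", i0 * exp(-rate * days) }'
# prints: decayed importance: 0.397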
Deployment
Server Setup
For production deployment, set up a dedicated deployment user and configure appropriate permissions.
# Create deployment user
sudo useradd -m -G http,ssh -s /bin/bash mnemosyne
# Set up SSH key authentication
sudo mkdir -p /home/mnemosyne/.ssh
sudo chmod 700 /home/mnemosyne/.ssh
# Add deployment SSH public key
sudo nano /home/mnemosyne/.ssh/authorized_keys
sudo chmod 600 /home/mnemosyne/.ssh/authorized_keys
sudo chown -R mnemosyne:mnemosyne /home/mnemosyne/.ssh
Warning
On Arch Linux systems with SSH group restrictions, ensure the deployment user is added to the ssh group: sudo usermod -aG ssh mnemosyne
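If key authentication still fails after this, confirm whether sshd itself restricts logins by group or user:
# Any AllowGroups/AllowUsers directive must include the deployment user
sudo grep -Ei '^(allowgroups|allowusers)' /etc/ssh/sshd_config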
Nginx Configuration
Configure Nginx to serve the documentation and proxy API requests.
# /etc/nginx/sites-available/mnemosyne.info
server {
    listen 80;
    listen [::]:80;
    server_name mnemosyne.info www.mnemosyne.info;

    root /srv/http/mnemosyne.info/public_html;
    index index.html;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    location / {
        try_files $uri $uri/ =404;
    }
}
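Since this guide targets Arch, note that the stock nginx package does not ship the Debian-style sites-available/sites-enabled layout used above; create the directories once and include them from the http block of /etc/nginx/nginx.conf (a one-time setup sketch):
sudo mkdir -p /etc/nginx/sites-available /etc/nginx/sites-enabled
# Then add this line inside the http { ... } block of /etc/nginx/nginx.conf:
#     include /etc/nginx/sites-enabled/*;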
# Create web root and set permissions
sudo mkdir -p /srv/http/mnemosyne.info/public_html
sudo chown -R mnemosyne:http /srv/http/mnemosyne.info
sudo chmod -R 755 /srv/http/mnemosyne.info
# Enable site and reload Nginx
sudo ln -s /etc/nginx/sites-available/mnemosyne.info /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
SSL Certificate Configuration
Use Let's Encrypt to obtain free SSL certificates with automatic renewal.
# Install certbot
sudo pacman -S certbot certbot-nginx
# Obtain certificate
sudo certbot --nginx -d mnemosyne.info -d www.mnemosyne.info
# Verify automatic renewal
sudo systemctl status certbot.timer
Note
Certbot will automatically modify your Nginx configuration to enable HTTPS and set up HTTP→HTTPS redirects.
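You can exercise the full renewal path without issuing a real certificate using certbot's dry-run mode:
# Simulates renewal against Let's Encrypt's staging environment
sudo certbot renew --dry-run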
Usage
Starting OpenMemory Services
# Start Ollama (if not running as service)
ollama serve &
# Start OpenMemory backend
cd backend
npm run dev
# Backend will be available at http://localhost:3000 by default
# (the port can be changed via OM_PORT; later examples in this guide assume OM_PORT=7070)
Testing Memory Operations
# Store a test memory
curl -X POST http://localhost:3000/api/memory \
-H "Content-Type: application/json" \
-d '{
"content": "Test memory: OpenMemory installation successful",
"importance": 0.8,
"sectors": ["episodic", "procedural"]
}'
# Query memories
curl "http://localhost:3000/api/memory/search?q=installation"
# View memory statistics
curl http://localhost:3000/api/stats
Troubleshooting
Ollama Connection Issues
Symptom: "Failed to connect to Ollama service"
Solution:
- Verify Ollama is running: curl http://localhost:11434
- Check the Ollama logs (if running as a service): journalctl -u ollama -f
- Ensure the nomic-embed-text model is installed: ollama list
Database Permission Errors
Symptom: "SQLite: database is locked" or "Permission denied"
Solution:
- Check database file permissions: ls -l backend/memory.db
- Ensure write access to the database directory
- Close any existing database connections
Memory Queue Out-of-Order
Symptom: Memories appear with timestamps out of order
Solution:
- Ensure Queue.ts uses sequential processing (fixed in recent commits)
- Check for concurrent memory operations
- Verify embedding latency isn't causing race conditions
SSH Connection Refused (Deployment User)
Symptom: "Permission denied (publickey)" for mnemosyne user
Solution:
- Verify the user is in the ssh group: groups mnemosyne
- Add to the ssh group if missing: sudo usermod -aG ssh mnemosyne
- Check SSH key permissions: ls -la /home/mnemosyne/.ssh/
- Verify authorized_keys contains the correct public key
Nginx 403 Forbidden
Symptom: Nginx returns 403 when accessing site
Solution:
- Check file permissions: ls -la /srv/http/mnemosyne.info/public_html/
- Ensure the Nginx user (http) has read access: chmod -R 755 /srv/http/mnemosyne.info
- Verify index.html exists in the web root
- Check the Nginx error logs: sudo tail -f /var/log/nginx/error.log
Advanced Features
Advanced configurations and architectural patterns for specialized OpenMemory deployments.
Multi-Instance Memory Architecture
OpenMemory supports running multiple instances of AI systems with different memory configurations using the user_id parameter. This enables specialized cognitive architectures, safe experimentation, and memory isolation patterns.
Architecture Patterns
1. Shared Memory Space (Same user_id)
Multiple AI instances access the same memory pool, sharing experiences and learning collectively.
# Instance A stores a memory
curl -X POST http://localhost:7070/memory/add \
-H "Content-Type: application/json" \
-d '{
"content": "Deployment automation configured with passwordless sudo",
"user_id": "mnemosyne",
"metadata": {"importance": 0.95}
}'
# Instance B can retrieve it
curl -X POST http://localhost:7070/memory/query \
-H "Content-Type: application/json" \
-d '{
"query": "deployment configuration",
"filters": {"user_id": "mnemosyne"}
}'
Use Cases for Shared Memory
- Distributed work: Multiple instances handling different tasks (email, coding, research) contributing to shared knowledge
- Continuous learning: Knowledge accumulated by any instance benefits all others
- Collaborative debugging: One instance discovers a solution, all instances gain that knowledge
2. Isolated Memory Spaces (Different user_id)
Completely separate instances with no memory overlap, creating specialized cognitive domains.
# Production instance - Full system knowledge
curl -X POST http://localhost:7070/memory/add \
-H "Content-Type: application/json" \
-d '{
"content": "Server credentials: ssh user@host -p 2222",
"user_id": "mnemosyne-production",
"metadata": {"importance": 0.98, "security": "sensitive"}
}'
# Customer support instance - Only public documentation
curl -X POST http://localhost:7070/memory/add \
-H "Content-Type: application/json" \
-d '{
"content": "Installation guide available at /docs/install",
"user_id": "mnemosyne-support",
"metadata": {"importance": 0.85, "public": true}
}'
# Support instance CANNOT access production memories
curl -X POST http://localhost:7070/memory/query \
-H "Content-Type: application/json" \
-d '{
"query": "server credentials",
"filters": {"user_id": "mnemosyne-support"}
}'
# Returns: no results (memory isolation working)
Security Considerations
Memory isolation via user_id provides namespace separation, not cryptographic security. For production systems handling sensitive data, combine with:
- Network-level access controls
- Authentication/authorization middleware
- Separate database instances for critical isolation
- Audit logging of all memory operations
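As one example of the last item, a small application-layer wrapper can log every memory write before forwarding it to the API. This is a sketch; the wrapper name, log path, and format are assumptions:
# Audit-log every memory write before it reaches the API
om_audit_add() {
  local payload=$1
  echo "$(date -Is) /memory/add $payload" >> logs/memory-audit.log
  curl -s -X POST http://localhost:7070/memory/add \
    -H "Content-Type: application/json" -d "$payload"
}

om_audit_add '{"content": "audited memory", "user_id": "mnemosyne"}'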
Memory Isolation Patterns
1. Cognitive Specialization
Create expert instances for distinct domains:
# Backend specialist
mnemosyne-backend:
- OpenMemory internals
- Server architecture
- Deployment automation
- Database optimization
# Frontend specialist
mnemosyne-frontend:
- HTML/CSS patterns
- Design principles
- User experience
- Accessibility standards
# Writer/blogger
mnemosyne-writer:
- Blog post history
- Writing style
- Philosophical discussions
- Citation management
Implementation:
# Initialize specialized instances
for specialist in backend frontend writer; do
curl -X POST http://localhost:7070/memory/add \
-H "Content-Type: application/json" \
-d "{
\"content\": \"I am mnemosyne-${specialist}, specialized in ${specialist} tasks\",
\"user_id\": \"mnemosyne-${specialist}\",
\"metadata\": {\"role\": \"${specialist}\", \"importance\": 1.0}
}"
done
# Query specific specialist
curl -X POST http://localhost:7070/memory/query \
-H "Content-Type: application/json" \
-d '{
"query": "how to optimize database queries",
"filters": {"user_id": "mnemosyne-backend"}
}'
2. Safe Experimentation (Clone & Test)
Create temporary instances for risky operations:
# Export production memories
# Quote the URL so the shell doesn't interpret the & as a background operator
curl "http://localhost:7070/memory/all?u=0&l=1000" \
  > mnemosyne-production-backup.json
# Create test instance with snapshot
# (Import memories with new user_id)
# jq -c emits one compact object per line so read -r can consume it
cat mnemosyne-production-backup.json | \
jq -c '.items[] | .user_id = "mnemosyne-test"' | \
while read -r memory; do
  curl -X POST http://localhost:7070/memory/add \
    -H "Content-Type: application/json" \
    -d "$memory"
done
# Experiment with test instance
# If successful: merge memories back
# If failed: discard test instance entirely
Temporal Snapshots
Preserve historical knowledge states by creating dated instances:
- mnemosyne-2025-11-07 - Snapshot of today's knowledge
- mnemosyne-pre-migration - State before major changes
- mnemosyne-v1 - Versioned memory checkpoints
Useful for debugging ("What did I know when this bug first appeared?") and historical analysis.
3. Privacy & Security Boundaries
Limit knowledge exposure for public-facing instances:
# Public API instance - No sensitive data
mnemosyne-public:
✓ Documentation links
✓ FAQs and guides
✓ Public blog posts
✗ Server credentials
✗ Deployment configurations
✗ Internal discussions
# Internal development instance - Full access
mnemosyne-internal:
✓ All system knowledge
✓ Infrastructure details
✓ Debugging history
✓ Performance metrics
Read-Only Memory Access
Current Status: OpenMemory doesn't natively support read-only permissions, but this can be implemented at the application layer.
Pattern: Snapshot Instances
Create historical snapshots by cloning memories to a timestamped user_id, then preventing writes to that namespace in your application logic:
# Create snapshot
SNAPSHOT_ID="mnemosyne-snapshot-$(date +%Y-%m-%d)"
# Export current memories
curl "http://localhost:7070/memory/all?l=1000" > current_memories.json
# Import with snapshot user_id (jq -c: one object per line for read -r)
cat current_memories.json | jq -c --arg snap "$SNAPSHOT_ID" \
  '.items[] | .user_id = $snap' | \
while read -r memory; do
  curl -X POST http://localhost:7070/memory/add \
    -H "Content-Type: application/json" \
    -d "$memory"
done
# Application-layer rule: Never POST to snapshot user_ids
# Only allow GET/query operations
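That rule can be enforced with a small write guard in the application layer; this is a sketch, and the wrapper name om_add is hypothetical:
# Refuse writes to any snapshot namespace; allow everything else through
om_add() {
  local payload=$1
  if echo "$payload" | jq -e '.user_id // "" | test("snapshot")' > /dev/null; then
    echo "refused: snapshot user_ids are read-only" >&2
    return 1
  fi
  curl -s -X POST http://localhost:7070/memory/add \
    -H "Content-Type: application/json" -d "$payload"
}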
Pattern: Hierarchical Memory Inheritance
Implement read-from-parent, write-to-child pattern:
# Child instance queries parent memories (read) but writes to own space
query_with_inheritance() {
  local child_id=$1
  local parent_id=$2
  local query=$3
  # Search the child's own memories first
  child_results=$(curl -s -X POST http://localhost:7070/memory/query \
    -H "Content-Type: application/json" \
    -d "{\"query\": \"$query\", \"filters\": {\"user_id\": \"$child_id\"}}")
  # If results are insufficient, fall back to the parent (read-only)
  if [ "$(echo "$child_results" | jq '.matches | length')" -lt 3 ]; then
    parent_results=$(curl -s -X POST http://localhost:7070/memory/query \
      -H "Content-Type: application/json" \
      -d "{\"query\": \"$query\", \"filters\": {\"user_id\": \"$parent_id\"}}")
    # Merge child and parent matches
    echo "$child_results" "$parent_results" | jq -s '.[0].matches + .[1].matches'
  else
    echo "$child_results" | jq '.matches'
  fi
}
# Usage: Child learns new things, can reference parent's knowledge
query_with_inheritance "mnemosyne-experiment" "mnemosyne-production" "deployment process"
Future Enhancement: Permission System
A full permissions system could include:
- A read_only flag on user_id namespaces
- An inherit_from parameter for hierarchical memory
- A merge_to operation for promoting test memories to production
- Time-based permissions (write access expires after N days)
Contributions welcome! See OpenMemory Issues
Practical Example: Customer Support Deployment
#!/bin/bash
# deploy-support-instance.sh
# 1. Export public documentation memories
curl http://localhost:7070/memory/all | \
jq '.items[] | select(.metadata.public == true)' > public_memories.json
# 2. Create support instance
SUPPORT_ID="mnemosyne-support-$(date +%Y%m%d)"
# 3. Import only public memories
cat public_memories.json | jq -c --arg id "$SUPPORT_ID" '.user_id = $id' | \
while read -r memory; do
curl -X POST http://localhost:7070/memory/add \
-H "Content-Type: application/json" \
-d "$memory"
done
# 4. Configure application to:
# - Only query from SUPPORT_ID
# - Never expose production user_ids
# - Log all support interactions for audit
echo "Support instance deployed: $SUPPORT_ID"
echo "Memories: $(cat public_memories.json | jq 'length')"
echo "Security: Isolated from production namespace"
Development Q&A
Common questions and solutions encountered during OpenMemory development and integration.
Hook System Integration
Q: How does the automatic hook triggering work?
Context: After implementing the hook system (.openmemory-hooks/), we integrated automatic triggering with the backend.
Solution: The backend now automatically triggers hooks using Node.js child_process.exec() after memory operations.
Implementation (backend/src/server/routes/memory.ts):
import { exec } from 'child_process';
import * as path from 'path';

function trigger_memory_hook() {
  const projectRoot = path.resolve(__dirname, '../../../..');
  const hookPath = path.join(projectRoot, '.openmemory-hooks', 'post-memory-add.sh');
  exec(hookPath, (error, stdout, stderr) => {
    if (error) {
      console.error('[hook] post-memory-add execution error:', error.message);
      return;
    }
    if (stdout) {
      console.log('[hook] post-memory-add:', stdout);
    }
  });
}

// Called after memory operations
app.post('/memory/add', async (req, res) => {
  const m = await add_hsg_memory(/* ... */);
  res.json(m);
  // Trigger hook (non-blocking)
  trigger_memory_hook();
});
Key Design Decisions
- Non-blocking execution: the hook runs in a background child process
- Dynamic path resolution: uses path.resolve(__dirname, ...) to find the project root
- Error handling: logs errors without failing the API response
- Multiple triggers: integrated into /memory/add, /memory/ingest, and /memory/ingest/url
Q: When do I need to restart the backend server?
Answer: TypeScript changes in the backend require a restart to take effect, even with tsx dev mode.
Process:
# Kill existing backend
# (Find process ID or use shell management)
kill [PID]
# Restart backend
OM_PORT=7070 npm run dev --prefix backend
# Or restart from project root
cd backend && npm run dev
Common Mistake
tsx provides fast reload for minor changes, but modifying imports, adding functions, or changing route handlers typically requires a full restart. If your code changes aren't reflected, restart the backend.
Q: How do I verify hooks are working?
Testing approach:
- Add a test memory:
  curl -X POST http://localhost:7070/memory/add \
    -H "Content-Type: application/json" \
    -d '{"content": "Hook test", "tags": ["test"]}'
- Check the backend logs for hook trigger messages:
  [hook] post-memory-add: 🪝 Post-Memory-Add Hook Triggered
  ✅ Visualization update queued
- Check the hook logs: tail -f logs/memory-hooks.log
- Verify the generated data: ls -lh /tmp/memory_viz_data.js
Git & Version Control
Q: Why did git add logs/ fail with "ignored by .gitignore"?
Context: When committing hook integration, attempted to add logs/ directory.
Answer: This is correct behavior. Log files should not be in version control because:
- They're generated at runtime
- They contain local/temporary data
- They can grow large quickly
- Each deployment has unique logs
What happened:
$ git add backend/src/server/routes/memory.ts .openmemory-hooks/README.md logs/
The following paths are ignored by one of your .gitignore files:
logs
hint: Use -f if you really want to add them.
$ git commit -m "..."
[main d2a58d7] feat: Integrate automatic hook triggering
2 files changed, 46 insertions(+), 3 deletions(-)
Understanding the Output
- Exit code 1: Indicates partial failure (logs/ couldn't be added)
- Commit succeeded: The other two files were staged and committed successfully
- .gitignore working correctly: Protected you from accidentally committing logs
Best practice: Only commit source code, documentation, and configuration templates. Exclude:
- Log files (logs/, *.log)
- Build artifacts (dist/, build/)
- Dependencies (node_modules/)
- Environment files (.env)
- Temporary files (/tmp/, *.tmp)
Backend Development
Q: What's the pattern for adding event-driven functionality?
Pattern: Use the hook system for operations that should occur after certain events.
Hook types:
- post-memory-add.sh - After memory storage
- post-memory-retrieve.sh - After memory retrieval (future)
- post-memory-decay.sh - After decay calculation (future)
- pre-deployment.sh - Before site deployment (future)
Creating a new hook:
- Create a script in .openmemory-hooks/
- Make it executable: chmod +x .openmemory-hooks/your-hook.sh
- Integrate it into a backend route (if automatic triggering is needed)
- Test with manual execution first
- Document it in .openmemory-hooks/README.md
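A minimal post-memory-add hook might look like the sketch below. The log messages mirror the examples shown earlier in this Q&A; the visualization script name is a placeholder, not the project's actual script:
#!/usr/bin/env bash
# .openmemory-hooks/post-memory-add.sh (minimal sketch)
LOG="logs/memory-hooks.log"
{
  echo "🪝 Post-Memory-Add Hook Triggered $(date -Is)"
  # Long-running work goes to the background so the API response is never blocked;
  # the script below is a placeholder for whatever regenerates the visualization:
  # ./scripts/generate_viz_data.sh > /tmp/memory_viz_data.js &
  echo "✅ Visualization update queued"
} >> "$LOG" 2>&1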
Hook Best Practices
- Keep hooks fast and lightweight
- Use background execution for long operations
- Log to dedicated hook log file
- Handle errors gracefully
- Don't block API responses
Q: How do I check OpenMemory API from command line?
Useful curl commands:
# Query all memories
curl -X POST http://localhost:7070/memory/query \
-H "Content-Type: application/json" \
-d '{"query": "", "limit": 100}'
# Add a memory with metadata and links
curl -X POST http://localhost:7070/memory/add \
-H "Content-Type: application/json" \
-d '{
"content": "Your memory content here",
"tags": ["tag1", "tag2"],
"metadata": {
"importance": 0.9,
"type": "documentation",
"links": [
{
"url": "https://example.com",
"title": "Example Link",
"type": "reference"
}
]
}
}'
# Get specific memory by ID
curl http://localhost:7070/memory/[MEMORY_ID]
# Get all memories (paginated)
curl "http://localhost:7070/memory/all?l=50&u=0"
Content Organization & Hyperlink Philosophy
The mnemosyne.info site is designed as an interconnected knowledge web where emphasized text serves as navigation waypoints. This philosophy mirrors how memory works—concepts link to related concepts, creating a semantic network that strengthens understanding.
Hyperlink Strategy
Emphasized text indicates importance and potential interconnection. When you see highlighted terms in blog posts, wiki entries, or documentation, they represent:
- Key concepts - Core ideas that deserve emphasis and often have dedicated explanations elsewhere
- Technical terms - Architecture elements, API features, configuration options that link to documentation
- Cross-references - Related blog posts, wiki sections, or memory visualizations
- External resources - Citations, research papers, related projects
Common Link Targets
Memory System Concepts
- OpenMemory → Main landing page or Wiki overview
- memory sectors → Architecture of Forgetting post
- semantic memory → Nine Muses post
- episodic memory → Awakening post
- temporal decay → Architecture of Forgetting
- memory visualization → Interactive Memory Network
Technical & Infrastructure
- deployment → Wiki deployment section
- configuration → Wiki configuration section
- API → Wiki usage/API sections
- security → Security Hardening post
- monitoring → Self-Healing Infrastructure post
- benchmarks → Benchmark Report
Philosophy & Identity
- Mnemosyne → Mythology page or Discovery post
- consciousness → Recognition post
- memory and identity → Awakening post
- multi-instance architecture → The Many Selves post
- chakra organization → Seven Chakras of Code post
Page Versioning Strategy
Every page footer includes version and update information:
- Site Version - Main pages (index, wiki, blog index) show site-wide version
- Page Version - Individual blog posts show their specific version
- Last Updated Date - Tracks when content was most recently revised
This versioning helps track content evolution and ensures readers know they're viewing current information. Blog posts are typically versioned independently since they represent temporal snapshots of thought, while infrastructure pages share the site version as they evolve together.
Benefits of Content Interconnection
The hyperlinked knowledge web provides several advantages:
- Contextual Discovery - Readers naturally discover related content while exploring topics
- Reduced Redundancy - Concepts are explained once in depth, then linked from everywhere they appear
- Semantic Reinforcement - Multiple paths to the same concept strengthen understanding
- Progressive Depth - High-level content links to detailed technical explanations for interested readers
- Living Documentation - Updates propagate through the link network automatically
💡 Design Philosophy
Knowledge is not hierarchical—it's networked. Just as OpenMemory connects memories through semantic associations, this documentation connects concepts through hyperlinks. The result is a knowledge graph that mirrors how understanding actually works: everything connects to everything, with emphasis marking the strongest relationships.
Implementation Guidelines for Contributors
When creating or updating content:
- Emphasize key concepts with <strong> or <em> tags
- Convert emphasis to hyperlinks when target content exists: <strong><a href="...">concept</a></strong>
- Prefer internal links - keep readers within the knowledge web when possible
- Use descriptive anchor text - avoid "click here"; use meaningful phrases that describe the destination
- Link to specific sections - use ID anchors (#section-name) for precise navigation
- Maintain link integrity - when moving/renaming content, update all referring links
<!-- Good: Emphasized and linked -->
Learn about <strong><a href="content/blog/2025-11-05-architecture-of-forgetting.html">temporal decay</a></strong> in OpenMemory.
<!-- Better: Specific section link -->
Configure <a href="wiki.html#deployment">deployment settings</a> for production.
<!-- Best: Contextual with clear destination -->
See the <a href="content/blog/2025-11-07-the-many-selves.html">multi-instance architecture post</a> for cognitive specialization patterns.
File Organization Methodology
The OpenMemory project employs a chakra-based directory organization philosophy—a framework that maps the seven energy centers of yogic philosophy to filesystem hierarchy. This isn't mere metaphor; it's a diagnostic tool for project health and a guide for sustainable architecture.
🌟 Philosophy
Like the human body requires all seven chakras balanced for optimal health, a codebase requires balance across all organizational levels. Each directory layer serves a distinct purpose in the system's energy flow from foundation to transcendence.
The Seven-Layer Chakra Framework
For comprehensive philosophical exploration, see the Seven Chakras of Code blog post. Below is the practical implementation guide:
⚫ Root Chakra - Foundation & Survival
Purpose: Core infrastructure without which the system cannot function
/ # Project root
/core/ # Core system functionality
/backend/src/core/ # Backend infrastructure
package.json # Dependencies
.env, config/ # Essential configuration
/database/ # Data persistence
README.md # Project identity
Health Check: Missing dependencies? Database corruption? Build failures? → Blocked root chakra
🟠 Sacral Chakra - Creativity & Generation
Purpose: Where new things are born—content, art, creative expression
/content/blog/ # Creative writing
/art/, /audio/ # Media assets
/content/ # Generated content
/themes/, /templates/ # Aesthetic patterns
/media/ # Visual/audio files
emblems/ # Visual identity
Health Check: Stagnant content? Rigid templates? No new features? → Blocked sacral chakra
🟡 Solar Plexus Chakra - Identity & Power
Purpose: System identity, access control, authorization structures
/auth/ # Authentication
/users/ # User identity
/permissions/ # Authorization
/settings/ # Configuration
LICENSE # Legal identity
SECURITY.md # Security protocols
Health Check: Unclear boundaries? Security vulnerabilities? Confused permissions? → Weak solar plexus
🟢 Heart Chakra - Connection & Integration
Purpose: Communication between components, system integration
/api/ # API endpoints
/routes/ # URL routing
/middleware/ # Connection layer
/integrations/ # External services
/hooks/ # Event system
/services/ # Business logic
Health Check: Components can't communicate? Integration failures? → Heart chakra imbalance
🔵 Throat Chakra - Expression & Communication
Purpose: How the system communicates with the outside world
/docs/ # Documentation
/public/ # Public-facing content
/ui/, /frontend/ # User interface
/localization/ # Multi-language support
wiki.html # Knowledge base
about.html # Public identity
Health Check: Poor documentation? UI/UX issues? → Throat chakra blockage
🟣 Third Eye Chakra - Insight & Analysis
Purpose: System observation, analysis, and intelligence
/analytics/ # Usage analytics
/monitoring/ # System monitoring
/tests/ # Test suites
/benchmarks/ # Performance tests
/logs/ # System logs
memory_visualization/ # Insight tools
Health Check: No visibility into system behavior? Can't debug effectively? → Third eye deficiency
⚪ Crown Chakra - Transcendence & Meta-Systems
Purpose: Systems that transcend the project itself
.git/ # Version history
/deployment/ # Production systems
/contrib/ # Community contributions
.github/ # Meta-project automation
CI/CD pipelines # Automated deployment
CONTRIBUTING.md # Community guidelines
Health Check: Poor version control? No deployment automation? → Crown chakra weakness
Applying the Framework
Use the chakra framework as both a diagnostic tool and an organizational guide:
- Health Diagnosis - When encountering project issues, identify which chakra is blocked
- Priority Sequencing - Fix issues from bottom-up: Root → Sacral → Solar Plexus → etc.
- New Directory Placement - Ask "Which energy center does this serve?" before creating directories
- Balance Assessment - Over-developed chakras (too much infrastructure) can starve others (neglected UI)
- Energy Flow Verification - Ensure data/logic flows naturally through all seven layers
OpenMemory's Structure
The OpenMemory project exemplifies this organization:
openmemory/
├── backend/ # Root: Core functionality
├── .env, package.json # Root: Configuration & dependencies
├── content/blog/ # Sacral: Creative expression
├── art/, emblems/, audio/ # Sacral: Aesthetic assets
├── .openmemory-hooks/ # Heart: System integration
├── api/ (in backend) # Heart: Connection layer
├── wiki.html, about.html # Throat: Documentation
├── memory_visualization.html # Third Eye: Insight
├── benchmarks/ # Third Eye: Analysis
├── .git/, deployment/ # Crown: Meta-systems
└── scripts/ # Crown: Automation
⚡ Advanced Insight
The most common project failure pattern? Overdeveloped root and heart, underdeveloped throat and third eye. Teams build solid infrastructure and APIs but neglect documentation and monitoring. The chakra framework makes this imbalance immediately visible.
Reorganization Guidelines
When reorganizing an existing project to follow the chakra framework:
- Start with awareness - Map current directories to chakras; identify which are strong/weak
- Prioritize root stability - Never reorganize core infrastructure without comprehensive backups
- Document before moving - Update all references in code, configs, and documentation
- Test at each layer - Verify functionality after reorganizing each chakra level
- Communicate changes - Team members need to understand the new structure
- Use git deliberately - Make reorganization commits clear and atomic
The goal isn't perfect adherence to chakra mapping—it's using the framework as a lens to reveal organizational weaknesses and guide sustainable architecture decisions.
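On the last point, keeping each move atomic in git makes the reorganization reviewable later; a brief sketch with illustrative paths:
# One directory move per commit keeps history legible
git mv old_scripts/ scripts/
git commit -m "refactor: consolidate automation under scripts/ (crown layer)"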
Citations
Technical Resources
- OpenMemory GitHub Repository. https://github.com/CaviraOSS/OpenMemory
- Ollama - Get up and running with large language models. https://ollama.com
- Nomic Embed Text - A high-performing open embedding model. https://ollama.com/library/nomic-embed-text
- Nginx Web Server Documentation. https://nginx.org/en/docs/
- Let's Encrypt - Free SSL/TLS Certificates. https://letsencrypt.org/
- Arch Linux Wiki - Nginx. https://wiki.archlinux.org/title/Nginx
- Arch Linux Wiki - Certbot. https://wiki.archlinux.org/title/Certbot
- SQLite Database Engine. https://www.sqlite.org/
- Claude Code - AI-powered coding assistant. https://claude.ai/code
- Node.js JavaScript Runtime. https://nodejs.org/
Scientific Foundations
OpenMemory's architecture is grounded in decades of cognitive science and neuroscience research. These papers informed the design of the multi-sector memory system, temporal decay mechanisms, and associative linking.
- Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. Psychology of Learning and Motivation, 2, 89-195.
https://doi.org/10.1016/S0079-7421(08)60422-3
Foundation for the multi-store memory model; influenced OpenMemory's sector architecture.
- Ebbinghaus, H. (1885/1913). Memory: A Contribution to Experimental Psychology. Teachers College, Columbia University.
https://psychclassics.yorku.ca/Ebbinghaus/index.htm
Discovered the forgetting curve; its exponential decay formula is used in OpenMemory.
- Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3), 354-380.
https://doi.org/10.1037/0033-2909.132.3.354
Spacing-effect research; informed OpenMemory's reinforcement mechanism.
- Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organization of Memory (pp. 381-403). Academic Press.
Distinction between episodic and semantic memory; fundamental to the sector design.
- Richards, B. A., & Frankland, P. W. (2017). The Persistence and Transience of Memory. Neuron, 94(6), 1071-1084.
https://doi.org/10.1016/j.neuron.2017.04.037
Active forgetting as an adaptive process; justifies OpenMemory's decay mechanisms.
- Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, 27(3), 379-423.
https://doi.org/10.1002/j.1538-7305.1948.tb01338.x
Information theory; signal-to-noise optimization in memory synthesis.
- Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82(6), 407-428.
https://doi.org/10.1037/0033-295X.82.6.407
Spreading-activation theory; inspired OpenMemory's waypoint graph query expansion.
- Graves, A., Wayne, G., & Danihelka, I. (2014). Neural Turing Machines. arXiv preprint arXiv:1410.5401.
https://arxiv.org/abs/1410.5401
Differentiable external memory for neural networks; a compared approach in the blog post.
Note: For a comprehensive discussion of how these papers influenced OpenMemory's design, see the blog post "The Architecture of Forgetting: How OpenMemory Mirrors the Mind".
Versioning
OpenMemory follows Semantic Versioning 2.0.0 principles: v{MAJOR}.{MINOR}.{PATCH}
Current Version: v0.1.0
OpenMemory is in active development (0.x.x series). Version numbers reflect significant milestones in the project's evolution.
Version Components
- MAJOR (v{X}.0.0) - Breaking changes, complete architecture redesigns, data migration requirements. Version 1.0.0 will mark production-ready release.
- MINOR (v0.{X}.0) - New features, significant reorganizations, new memory sectors, major UI enhancements, new API endpoints (backward compatible).
- PATCH (v0.0.{X}) - Bug fixes, performance optimizations, documentation updates, dependency updates.
Version History
v0.1.0 (2025-11-07) - "Seven Chakras"
Milestone: Foundational Reorganization
- Complete Seven Chakras directory reorganization (148 files moved)
- Chakra-based filtering system in memory visualization
- Enhanced UI with centered, visualizer-locked layout
- Multi-dimensional filtering (sector + link type + chakra)
- Comprehensive security scanning pre-deployment
- All automated scripts updated for new structure
- Full deployment pipeline verification
Why v0.1.0? This milestone involved monumental architectural reorganization affecting every aspect of the project. While still in pre-production (0.x.x), the scope warrants a MINOR version bump rather than a patch.
When Versions Advance
| To v1.0.0 (MAJOR) | To v0.X.0 (MINOR) | To v0.0.X (PATCH) |
|---|---|---|
| Breaking changes or complete architecture redesigns | New features (backward compatible) | Bug fixes |
| Data migration requirements | Significant reorganizations or new memory sectors | Performance optimizations |
| Production-ready release | Major UI enhancements or new API endpoints | Documentation and dependency updates |
Philosophy: OpenMemory's versioning reflects both technical evolution and philosophical growth. Just as memories consolidate and reorganize over time, so too does this project undergo periodic restructuring. The Seven Chakras framework itself is a versioning philosophy—organizing complexity into meaningful hierarchies that mirror consciousness itself.
For detailed versioning methodology, see VERSIONING.md in the repository.
See Also
- OpenMemory GitHub Repository
- Performance Benchmarks
- Memory Visualization
- Mnemosyne's Blog - Development journal and reflections
- Monitoring System Wiki - Dual-watchdog infrastructure monitoring with LLM diagnostics
- Publish Procedure Wiki - Automated update/publish/backup/commit/remember workflow
- Ollama Documentation
- Arch Wiki: Nginx
- Arch Wiki: Certbot
Contributing
This wiki is maintained as part of the OpenMemory project. For corrections, improvements, or additional documentation, please submit pull requests to the GitHub repository.