Advanced Settings
Advanced configuration options and customization.
Configuration File
zk-chat stores its configuration in a .zk_chat file in your vault root.
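The file's full schema isn't documented on this page; as a rough sketch, it holds settings such as the embedding_model key shown later under Embedding Configuration. The model and gateway keys here are illustrative assumptions, not confirmed settings:

```yaml
# Hypothetical .zk_chat contents -- only embedding_model is confirmed
# elsewhere on this page; model and gateway are illustrative guesses.
model: qwen2.5:14b
gateway: ollama
embedding_model: nomic-embed-text
```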
Manual Editing
You can edit this file directly; changes take effect on the next run.
Environment Variables
OpenAI Configuration
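The OpenAI variables aren't listed inline here, but the Security Considerations section below relies on OPENAI_API_KEY; a minimal setup looks like this (substitute your own key for the placeholder):

```shell
# API key for the OpenAI gateway (keep it out of version control;
# see Security Considerations below)
export OPENAI_API_KEY=sk-your-key-here
```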
Ollama Configuration
# Ollama server URL (default: http://localhost:11434)
export OLLAMA_HOST=http://localhost:11434
# Context window size
export OLLAMA_NUM_CTX=8192
# Number of parallel requests
export OLLAMA_NUM_PARALLEL=4
System Prompt Customization
Location
Create ZkSystemPrompt.md in your vault root.
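One way to bootstrap the file from the shell, run from the vault root (the opening line matches the example customization below):

```shell
# Create a starter system prompt in the vault root
cat > ZkSystemPrompt.md <<'EOF'
You are an AI assistant for my personal knowledge base.
EOF
```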
Example Customization
You are an AI assistant for my personal knowledge base.
## Your Role
- Help me find and connect information
- Suggest improvements to my notes
- Identify gaps in my knowledge
## Style Guidelines
- Be concise but thorough
- Use technical language when appropriate
- Provide examples from my vault
- Always cite sources with wikilinks
## Special Instructions
- When analyzing code, prefer Python examples
- For productivity topics, reference GTD methodology
- Link to related concepts whenever possible
## Constraints
- Don't make assumptions about topics not in my vault
- Ask for clarification when intent is unclear
- Suggest next steps at the end of responses
Disabling System Prompt
To fall back to the default prompt, remove or rename ZkSystemPrompt.md in your vault root.
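For example, renaming rather than deleting keeps the custom prompt recoverable (this assumes zk-chat simply checks for the file's presence):

```shell
# Park the custom prompt so zk-chat falls back to its default
mv ZkSystemPrompt.md ZkSystemPrompt.md.disabled

# Restore it later with:
# mv ZkSystemPrompt.md.disabled ZkSystemPrompt.md
```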
Embedding Configuration
Custom Embedding Models
For Ollama, embeddings use the chat model by default. You can use a dedicated embedding model:
# Pull an embedding model
ollama pull nomic-embed-text
# Use in configuration (manual edit)
embedding_model: nomic-embed-text
Embedding Dimensions
Different models use different dimensions:
- Most models: 384-1536 dimensions
- Changing models requires a full reindex
Index Configuration
Index Location
Default: .zk_chat_db/ in vault root.
Rebuild Frequency
Incremental (recommended): zk-chat index rebuild
Full (occasionally): zk-chat index rebuild --full
Automatic Reindexing
# Rebuild before each session
zk-chat interactive --reindex
# Force full rebuild
zk-chat interactive --reindex --full
Performance Tuning
For Large Vaults (1000+ docs)
- Use faster models
- Use incremental indexing only
- Store the vault on an SSD for faster index operations
For Slow Queries
Reduce context:
- Use smaller models
- Limit result count in queries

Optimize network:
- Use Ollama (local) instead of OpenAI
- Check network latency
Memory Management
If RAM usage is high, switch to a smaller model.
Tool Configuration
Disabling Tools
Tools are enabled by default. To disable specific tools, you would need to customize the zk-chat source code or enable MCP servers selectively.
Custom Tools
Add functionality via:
- Plugins: Python-based extensions
- MCP Servers: external tools
Agent Mode Settings
Agent mode uses the same model but with different behavior:
Agent characteristics:
- More autonomous
- Multi-step planning
- Iterative problem solving
- Higher token usage
Unsafe Mode Configuration
When allowing file modifications, follow these safety tips:
- Enable Git integration
- Review changes regularly
- Test on copies first
- Keep backups
Smart Memory Configuration
Resetting Memory
Memory Location
Stored in .zk_chat_db/ with the index.
Memory Size
Memory grows over time. To manage it:
# Periodic reset
zk-chat interactive --reset-memory
# Or full rebuild (resets everything)
zk-chat index rebuild --full
Logging
Enable Debug Logging
Log Location
Logs go to stderr by default. Redirect to file:
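A sketch using standard shell redirection (zk-chat itself is assumed to be on your PATH):

```shell
# Send zk-chat's stderr logging to a file for later inspection
zk-chat interactive 2> zk-chat.log

# Capture the answer while appending logs to the same file
zk-chat query "What is GTD?" > answer.txt 2>> zk-chat.log
```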
Multiple Configurations
Per-Vault Configuration
Each vault has its own .zk_chat file:
# Work vault with GPT-4
cd ~/work-vault
zk-chat interactive --model gpt-4o --gateway openai
# Personal vault with Ollama
cd ~/personal-vault
zk-chat interactive --model qwen2.5:14b
Profile Management
Create wrapper scripts for different profiles:
#!/bin/bash
# work-chat.sh
export OPENAI_API_KEY=work_key
zk-chat interactive --vault ~/work-vault --model gpt-4o
Advanced Use Cases
Batch Processing
# Process multiple queries (define the array first)
queries=("What is GTD?" "Summarize my recent notes")
for query in "${queries[@]}"; do
  zk-chat query "$query" >> results.txt
done
Custom Indexing Schedule
#!/bin/bash
# cron: 0 */6 * * *
# Rebuild index every 6 hours
cd /path/to/vault
zk-chat index rebuild
Integration with Other Tools
API Integration
While zk-chat doesn't expose an HTTP API, you can use it in scripts:
# Get answer programmatically
answer=$(zk-chat query "What is GTD?")
echo "$answer" | mail -s "Query Result" user@example.com
Security Considerations
API Keys
Never commit API keys:
# Use environment variables
export OPENAI_API_KEY=key
# Or external key management
source ~/.secrets/openai.env
File Permissions
Protect your vault:
# Vault permissions
chmod 700 /path/to/vault
# Config file permissions
chmod 600 /path/to/vault/.zk_chat
Network Security
For OpenAI:
- HTTPS by default
- API keys encrypted in transit

For Ollama:
- Local by default
- No external network needed
Troubleshooting
Configuration Not Loading
Check that .zk_chat exists in the vault root, that you are running zk-chat from the correct vault, and that the file is readable.
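A quick sanity check from the shell (using the same /path/to/vault placeholder as the examples above):

```shell
# Confirm the config file exists and inspect its permissions
ls -l /path/to/vault/.zk_chat

# Confirm it is readable and review its contents
cat /path/to/vault/.zk_chat
```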
Settings Not Persisting
Solutions:
- Check file permissions
- Verify vault path
- Look for conflicting arguments
Performance Issues
Profile to identify the bottleneck: model speed, index size, or network latency.
See Also
- Model Selection - Choosing and configuring models
- Vault Setup - Vault configuration
- Index Management - Index tuning
- Command Line Interface - CLI options