Eidetica Documentation
Welcome to the official documentation for Eidetica - a decentralized database built on Merkle-CRDT principles with built-in peer-to-peer synchronization.
Key Features
- Decentralized Architecture: No central server required - peers connect directly
- Conflict-Free Replication: Automatic merge resolution using CRDT principles
- Content-Addressable Storage: Immutable, hash-identified data entries
- Real-time Synchronization: Background sync with configurable batching and timing
- Multiple Transport Protocols: HTTP and Iroh P2P with NAT traversal
- Authentication & Security: Ed25519 signatures for all operations
- Flexible Data Models: Support for documents, key-value, and structured data
Project Structure
Eidetica is organized as a Cargo workspace:
- Library (`crates/lib/`): The core Eidetica library crate
- CLI Binary (`crates/bin/`): Command-line interface using the library
- Examples (`examples/`): Standalone applications demonstrating usage
Quick Links
Documentation Sections
- User Guide: Learn how to use the Eidetica library
- Getting Started: Set up your first Eidetica database
- Synchronization Guide: Enable peer-to-peer sync
- Internal Documentation: Understand the internal design and contribute
- Design Documents: Architectural documents used for development
Examples
- Chat Application - Complete multi-user chat with P2P sync
- Todo Application - Task management example
User Guide
Welcome to the Eidetica User Guide. This guide will help you understand and use Eidetica effectively in your applications.
What is Eidetica?
Eidetica is a Rust library for managing structured data with built-in history tracking. It combines concepts from distributed systems, Merkle-CRDTs, and traditional databases to provide a unique approach to data management:
- Efficient data storage with customizable Databases
- History tracking for all changes via immutable Entries forming a DAG
- Structured data types via named, typed Stores within logical Databases
- Atomic changes across multiple data structures using Transactions
- Designed for distribution (future capability)
How to Use This Guide
This user guide is structured to guide you from basic setup to advanced concepts:
- Getting Started: Installation, basic setup, and your first steps.
- Basic Usage Pattern: A quick look at the typical workflow.
- Core Concepts: Understand the fundamental building blocks:
- Entries & Databases: The core DAG structure.
- Database Storage: How data is stored.
- Stores: Where structured data lives (`DocStore`, `Table`, `YDoc`).
- Transactions: How atomic changes are made.
- Tutorial: Todo App: A step-by-step walkthrough using a simple application.
- Code Examples: Focused code snippets for common tasks.
Quick Overview: The Core Flow
Eidetica revolves around a few key components working together:
- Backend: You start by choosing or creating a storage backend (e.g., `InMemory`).
- Instance: You create an `Instance`, providing it the backend. This is your main database handle.
- User: You create and log in a `User`, which handles authentication and owns the keys used to sign changes.
- Database: Using the `User`, you create or load a `Database`, which acts as a logical container for related data and tracks its history.
- Transaction: To read or write data, you start a `Transaction` from the `Database`. This ensures atomicity and consistent views.
- Store: Within a `Transaction`, you get handles to named `Store`s (like `DocStore` or `Table<YourData>`). These provide methods (`set`, `get`, `insert`, `remove`, etc.) to interact with your structured data.
- Commit: Changes made via `Store` handles within the `Transaction` are staged. Calling `commit()` on the `Transaction` finalizes these changes atomically, creating a new historical `Entry` in the `Database`.
Basic Usage Pattern
Here's a quick example showing how to create a user and a database and write new data.
```rust
extern crate eidetica;
extern crate serde;
use eidetica::{backend::database::InMemory, Instance, crdt::Doc, store::{DocStore, Table}};
use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize, Clone, Debug)]
struct MyData {
    name: String,
}

fn main() -> eidetica::Result<()> {
    let backend = InMemory::new();
    let instance = Instance::open(Box::new(backend))?;

    // Create and login a passwordless user
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;

    // Create a database
    let mut settings = Doc::new();
    settings.set("name", "my_database");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // --- Writing Data ---
    // Start a Transaction
    let txn = database.new_transaction()?;
    let inserted_id = {
        // Scope for store handles
        // Get Store handles
        let config = txn.get_store::<DocStore>("config")?;
        let items = txn.get_store::<Table<MyData>>("items")?;

        // Use Store methods
        config.set("version", "1.0")?;
        items.insert(MyData { name: "example".to_string() })?
    }; // Handles drop, changes are staged in txn

    // Commit changes
    let new_entry_id = txn.commit()?;
    println!("Committed changes, new entry ID: {}", new_entry_id);

    // --- Reading Data ---
    // Use Database::get_store_viewer for a read-only view
    let items_viewer = database.get_store_viewer::<Table<MyData>>("items")?;
    if let Ok(item) = items_viewer.get(&inserted_id) {
        println!("Read item: {:?}", item);
    }

    Ok(())
}
```
See Transactions and Code Examples for more details.
Project Status
Eidetica is currently under active development. The core functionality is working, but APIs are considered experimental and may change in future releases. It is suitable for evaluation and prototyping, but not yet recommended for production systems requiring long-term API stability.
Getting Started
This guide will walk you through the basics of using Eidetica in your Rust applications. We'll cover the essential steps to set up and interact with the database.
For contributing to Eidetica itself, see the Contributing guide.
Installation
Add Eidetica to your project dependencies:
```toml
[dependencies]
eidetica = "0.1.0" # Update version as appropriate

# Or if using from a local workspace:
# eidetica = { path = "path/to/eidetica/crates/lib" }
```
Setting up the Database
To start using Eidetica, you need to:
- Choose and initialize a Backend (storage mechanism)
- Create an Instance (the infrastructure manager)
- Create and login a User (authentication and session)
- Create or access a Database through the User (logical container for data)
Here's a simple example:
```rust
extern crate eidetica;
use eidetica::{backend::database::InMemory, Instance, crdt::Doc};

fn main() -> eidetica::Result<()> {
    // Create a new in-memory backend
    let backend = InMemory::new();

    // Create the Instance
    let instance = Instance::open(Box::new(backend))?;

    // Create a passwordless user (perfect for embedded/single-user apps)
    instance.create_user("alice", None)?;

    // Login to get a User session
    let mut user = instance.login_user("alice", None)?;

    // Create a database in the user's context
    let mut settings = Doc::new();
    settings.set("name", "my_database");

    // Get the default key (earliest created key)
    let default_key = user.get_default_key()?;
    let _database = user.create_database(settings, &default_key)?;

    Ok(())
}
```
Note: This example uses a passwordless user (password is None) for simplicity, which is perfect for embedded applications and CLI tools. For multi-user scenarios, you can create password-protected users by passing Some("password") instead.
The backend determines how your data is stored. The example above uses InMemory, which keeps everything in memory but can save to a file:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc};
use std::path::PathBuf;

fn main() -> eidetica::Result<()> {
    // Create instance and user
    let backend = InMemory::new();
    let instance = Instance::open(Box::new(backend))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;

    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let _database = user.create_database(settings, &default_key)?;

    // Use a temporary file path for testing
    let temp_dir = std::env::temp_dir();
    let path = temp_dir.join("eidetica_test_save.json");

    // Save the backend to a file
    let backend_guard = instance.backend();
    if let Some(in_memory) = backend_guard.as_any().downcast_ref::<InMemory>() {
        in_memory.save_to_file(&path)?;
    }

    // Clean up the temporary file
    if path.exists() {
        std::fs::remove_file(&path).ok();
    }

    Ok(())
}
```
You can load a previously saved backend:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc};
use std::path::PathBuf;

fn main() -> eidetica::Result<()> {
    // First create and save a test backend
    let backend = InMemory::new();
    let instance = Instance::open(Box::new(backend))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;

    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let _database = user.create_database(settings, &default_key)?;

    // Use a temporary file path for testing
    let temp_dir = std::env::temp_dir();
    let path = temp_dir.join("eidetica_test_load.json");

    // Save the backend first
    let backend_guard = instance.backend();
    if let Some(in_memory) = backend_guard.as_any().downcast_ref::<InMemory>() {
        in_memory.save_to_file(&path)?;
    }

    // Load a previously saved backend
    let backend = InMemory::load_from_file(&path)?;

    // Load instance (automatically detects existing system state)
    let instance = Instance::open(Box::new(backend))?;

    // Login to existing user
    let user = instance.login_user("alice", None)?;

    // Clean up the temporary file
    if path.exists() {
        std::fs::remove_file(&path).ok();
    }

    Ok(())
}
```
User-Centric Architecture
Eidetica uses a user-centric architecture:
- Instance: Manages infrastructure (user accounts, backend, system databases)
- User: Handles all contextual operations (database creation, key management)
All database and key operations happen through a User session after login. This provides:
- Clear separation: Infrastructure management vs. contextual operations
- Strong isolation: Each user has separate keys and preferences
- Flexible authentication: Users can have passwords or not (passwordless mode)
Passwordless Users (embedded/single-user apps):
instance.create_user("alice", None)?;
let user = instance.login_user("alice", None)?;
Password-Protected Users (multi-user apps):
instance.create_user("bob", Some("password123"))?;
let user = instance.login_user("bob", Some("password123"))?;
The downside of password protection is slower logins: `instance.login_user` must verify the password and decrypt the user's keys, which is by design a relatively slow operation.
Working with Data
Eidetica uses Stores to organize data within a database. One common store type is Table, which maintains a collection of items with unique IDs.
Defining Your Data
Any data you store must be serializable with serde:
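For example, here is the `Person` type used by the examples below:

```rust
use serde::{Serialize, Deserialize};

// Deriving Serialize and Deserialize is all a store like Table<T>
// needs to persist this type.
#[derive(Clone, Debug, Serialize, Deserialize)]
struct Person {
    name: String,
    age: u32,
}
```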
Basic Operations
All operations in Eidetica happen within an atomic Transaction:
Inserting Data:
```rust
extern crate eidetica;
extern crate serde;
use eidetica::{backend::database::InMemory, Instance, crdt::Doc, store::Table, Database};
use serde::{Serialize, Deserialize};

#[derive(Clone, Debug, Serialize, Deserialize)]
struct Person {
    name: String,
    age: u32,
}

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Start an authenticated transaction
    let op = database.new_transaction()?;

    // Get or create a Table store
    let people = op.get_store::<Table<Person>>("people")?;

    // Insert a person and get their ID
    let person = Person { name: "Alice".to_string(), age: 30 };
    let _id = people.insert(person)?;

    // Commit the changes (automatically signed with the user's key)
    op.commit()?;

    Ok(())
}
```
Reading Data:
```rust
extern crate eidetica;
extern crate serde;
use eidetica::{backend::database::InMemory, Instance, crdt::Doc, store::Table, Database};
use serde::{Serialize, Deserialize};

#[derive(Clone, Debug, Serialize, Deserialize)]
struct Person {
    name: String,
    age: u32,
}

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Insert some test data
    let op = database.new_transaction()?;
    let people = op.get_store::<Table<Person>>("people")?;
    let test_id = people.insert(Person { name: "Alice".to_string(), age: 30 })?;
    op.commit()?;

    let id = &test_id;
    let op = database.new_transaction()?;
    let people = op.get_store::<Table<Person>>("people")?;

    // Get a single person by ID
    if let Ok(person) = people.get(id) {
        println!("Found: {} ({})", person.name, person.age);
    }

    // Search for all people (using a predicate that always returns true)
    let all_people = people.search(|_| true)?;
    for (id, person) in all_people {
        println!("ID: {}, Name: {}, Age: {}", id, person.name, person.age);
    }

    Ok(())
}
```
Updating Data:
```rust
extern crate eidetica;
extern crate serde;
use eidetica::{backend::database::InMemory, Instance, crdt::Doc, store::Table, Database};
use serde::{Serialize, Deserialize};

#[derive(Clone, Debug, Serialize, Deserialize)]
struct Person {
    name: String,
    age: u32,
}

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Insert some test data
    let op_setup = database.new_transaction()?;
    let people_setup = op_setup.get_store::<Table<Person>>("people")?;
    let test_id = people_setup.insert(Person { name: "Alice".to_string(), age: 30 })?;
    op_setup.commit()?;

    let id = &test_id;
    let op = database.new_transaction()?;
    let people = op.get_store::<Table<Person>>("people")?;

    // Get, modify, and update
    if let Ok(mut person) = people.get(id) {
        person.age += 1;
        people.set(id, person)?;
    }
    op.commit()?;

    Ok(())
}
```
Deleting Data:
```rust
extern crate eidetica;
extern crate serde;
use eidetica::{backend::database::InMemory, Instance, crdt::Doc, store::Table, Database};
use serde::{Serialize, Deserialize};

#[derive(Clone, Debug, Serialize, Deserialize)]
struct Person {
    name: String,
    age: u32,
}

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Insert a person to delete
    let op_setup = database.new_transaction()?;
    let people_setup = op_setup.get_store::<Table<Person>>("people")?;
    let id = people_setup.insert(Person { name: "Alice".to_string(), age: 30 })?;
    op_setup.commit()?;

    let op = database.new_transaction()?;
    let people = op.get_store::<Table<Person>>("people")?;

    // Delete the person by ID; returns whether an item was removed
    let was_deleted = people.delete(&id)?;
    assert!(was_deleted);
    op.commit()?;

    Ok(())
}
```
Complete Examples
For complete working examples, see:
- Chat Example - Multi-user chat application demonstrating:
  - User accounts and authentication
  - Real-time synchronization with HTTP and Iroh transports
  - Bootstrap protocol for joining rooms
  - TUI interface with Ratatui
- Todo Example - Task management application
Next Steps
After getting familiar with the basics, you might want to explore:
- Core Concepts to understand Eidetica's unique features
- Synchronization Guide to set up peer-to-peer data sync
- Authentication Guide for secure multi-user applications
- Advanced operations like querying and filtering
- Using different store types for various data patterns
Core Concepts
Understanding the fundamental ideas behind Eidetica will help you use it effectively and appreciate its unique capabilities.
Architectural Foundation
Eidetica builds on several powerful concepts from distributed systems and database design:
- Content-addressable storage: Data is identified by the hash of its content, similar to Git and IPFS
- Directed acyclic graphs (DAGs): Changes form a graph structure rather than a linear history
- Conflict-free replicated data types (CRDTs): Data structures that can merge concurrent changes automatically
- Immutable data structures: Once created, data is never modified, only new versions are added
These foundations enable Eidetica's key features: robust history tracking, efficient synchronization, and eventual consistency in distributed environments.
Merkle-CRDTs
Eidetica is inspired by the Merkle-CRDT concept from OrbitDB, which combines:
- Merkle DAGs: A data structure where each node contains a cryptographic hash of its children, creating a tamper-evident history
- CRDTs: Data types designed to resolve conflicts automatically when concurrent changes occur
In a Merkle-CRDT, each update creates a new node in the graph, containing:
- References to parent nodes (previous versions)
- The updated data
- Metadata for conflict resolution
This approach allows for:
- Strong auditability of all changes
- Automatic conflict resolution
- Efficient synchronization between replicas
Data Model Layers
Eidetica organizes data in a layered architecture:
+-----------------------+
| User Application |
+-----------------------+
| Instance |
+-----------------------+
| Databases |
+----------+------------+
| Stores | Operations |
+----------+------------+
| Entries (DAG) |
+-----------------------+
| Database Storage |
+-----------------------+
Each layer builds on the ones below, providing progressively higher-level abstractions:
- Database Storage: Physical storage of data (currently InMemory with file persistence)
- Entries: Immutable, content-addressed objects forming the database's history
- Databases & Stores: Logical organization and typed access to data
- Operations: Atomic transactions across multiple stores
- Instance: The top-level database container and API entry point
Entries and the DAG
At the core of Eidetica is a directed acyclic graph (DAG) of immutable Entry objects:
- Each Entry represents a point-in-time snapshot of data and has:
  - A unique ID derived from its content (making it content-addressable)
  - Links to parent entries (forming the graph structure)
  - Data payloads organized by store
  - Metadata for database and store relationships
- The DAG enables:
  - Full history tracking (nothing is ever deleted)
  - Efficient verification of data integrity
  - Conflict resolution when merging concurrent changes
IPFS Inspiration and Future Direction
While Eidetica draws inspiration from IPFS (InterPlanetary File System), it currently uses its own implementation patterns:
- IPFS is a content-addressed, distributed storage system where data is identified by cryptographic hashes
- OrbitDB (which inspired Eidetica) uses IPFS for backend storage and distribution
Eidetica's future plans include:
- Developing efficient internal APIs for transferring objects between Eidetica instances
- Potential IPFS-compatible addressing for distributed storage
- More efficient synchronization mechanisms than traditional IPFS
Stores: A Core Innovation
Eidetica extends the Merkle-CRDT concept with Stores, which partition data within each Entry:
- Each store is a named, typed data structure within a Database
- Stores can use different data models and conflict resolution strategies
- Stores maintain their own history tracking within the larger Database
This enables:
- Type-safe, structure-specific APIs for data access
- Efficient partial synchronization (only needed stores)
- Modular features through pluggable stores
- Atomic operations across different data structures
Planned future stores include:
- Object Storage: Efficiently handling large objects with content-addressable hashing
- Backup: Archiving database history for space efficiency
- Encrypted Store: Transparent encrypted data storage
Atomic Operations and Transactions
All changes in Eidetica happen through atomic Transactions:
- A Transaction is created from a Database
- Stores are accessed and modified through the Transaction
- When committed, all changes across all stores become a single new Entry
- If the Transaction fails, no changes are applied
This model ensures data consistency while allowing complex operations across multiple stores.
Settings as Stores
In Eidetica, even configuration is stored as a store:
- A Database's settings are stored internally in a special settings Store (the `_settings` subtree), hidden from regular usage
- This approach unifies the data model and allows settings to participate in history tracking
CRDT Properties and Eventual Consistency
Eidetica is designed with distributed systems in mind:
- All data structures have CRDT properties for automatic conflict resolution
- Different store types implement appropriate CRDT strategies:
- DocStore uses last-writer-wins (LWW) with implicit timestamps
- Table preserves all items, with LWW for updates to the same item
These properties ensure that when Eidetica instances synchronize, they eventually reach a consistent state regardless of the order in which updates are received.
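To see this in action, here is a hedged sketch built only from APIs shown elsewhere in this guide: two transactions are opened against the same state before either commits, so their entries diverge in the DAG, and a subsequent read merges them. Writes to different keys simply combine; concurrent writes to the same key would instead be resolved by the LWW rule described above.

```rust
extern crate eidetica;
use eidetica::{backend::database::InMemory, Instance, crdt::Doc, store::DocStore};

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "merge_demo");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Open two transactions against the same tips before committing either,
    // so the committed entries diverge in the DAG.
    let txn_a = database.new_transaction()?;
    let txn_b = database.new_transaction()?;
    txn_a.get_store::<DocStore>("config")?.set("host", "localhost")?;
    txn_b.get_store::<DocStore>("config")?.set("port", "8080")?;
    txn_a.commit()?;
    txn_b.commit()?;

    // A read merges the divergent tips: both keys are present.
    let viewer = database.get_store_viewer::<DocStore>("config")?;
    assert!(viewer.get("host").is_ok());
    assert!(viewer.get("port").is_ok());
    Ok(())
}
```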
History Tracking and Time Travel
One of Eidetica's most powerful features is comprehensive history tracking:
- All changes are preserved in the Entry DAG
- "Tips" represent the latest state of a Database or Store
- Historical states can be reconstructed by traversing the DAG
This design allows for future capabilities like:
- Point-in-time recovery
- Auditing and change tracking
- Historical queries and analysis
- Branching and versioning
Current Status and Roadmap
Eidetica is under active development, and some features mentioned in this documentation are still in planning or development stages. Here's a summary of the current status:
Implemented Features
- Core Entry and Database structure
- In-memory database with file persistence
- DocStore and Table store implementations
- CRDT functionality:
- Doc (hierarchical nested document structure with recursive merging and tombstone support for deletions)
- Atomic operations across stores
- Tombstone support for proper deletion handling in distributed environments
Planned Features
- Object Storage store for efficient handling of large objects
- Backup store for archiving database history
- Encrypted store for transparent encrypted data storage
- IPFS-compatible addressing for distributed storage
- Enhanced synchronization mechanisms
- Point-in-time recovery
This roadmap is subject to change as development progresses. Check the project repository for the most up-to-date information on feature availability.
Entries & Databases
The basic units of data and organization in Eidetica.
Entries
Entries are the fundamental building blocks in Eidetica. An Entry represents an atomic unit of data with the following characteristics:
- Content-addressable: Each entry has a unique ID derived from its content, similar to Git commits.
- Immutable: Once created, entries cannot be modified.
- Parent references: Entries maintain references to their parent entries, forming a directed acyclic graph (DAG).
- Database association: Each entry belongs to a database and can reference parent entries within both the main database and stores.
- Store data: Entries can contain data for one or more stores, representing different aspects or types of data.
Entries function similarly to commits in Git - they represent a point-in-time snapshot of data with links to previous states, enabling history tracking.
Databases
A Database in Eidetica is a logical container for related entries, conceptually similar to:
- A traditional database containing multiple tables
- A branch in a version control system
- A collection in a document database
Key characteristics of Databases:
- Root Entry: Each database has a root entry that serves as its starting point.
- Named Identity: Databases typically have a name stored in their settings store.
- History Tracking: Databases maintain the complete history of all changes as a linked graph of entries.
- Store Organization: Data within a database is organized into named stores, each potentially using different data structures.
- Atomic Operations: All changes to a database happen through transactions, which create new entries.
Database Transactions
You interact with Databases through Transactions:
```rust
extern crate eidetica;
use eidetica::{backend::database::InMemory, Instance, crdt::Doc, store::DocStore, Database};
use eidetica::Result;

fn example(database: Database) -> Result<()> {
    // Create a new transaction
    let op = database.new_transaction()?;

    // Access stores and perform actions
    let settings = op.get_store::<DocStore>("settings")?;
    settings.set("version", "1.2.0")?;

    // Commit the changes, creating a new Entry
    let new_entry_id = op.commit()?;

    Ok(())
}

fn main() -> Result<()> {
    let backend = InMemory::new();
    let instance = Instance::open(Box::new(backend))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;
    example(database)?;
    Ok(())
}
```
When you commit a transaction, Eidetica:
- Creates a new Entry containing all changes
- Links it to the appropriate parent entries
- Adds it to the database's history
- Returns the ID of the new entry
Database Settings
Each Database maintains its settings as a key-value store in the special `_settings` store:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::SettingsStore, Database};

fn main() -> eidetica::Result<()> {
    // Setup database for testing
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings_doc = Doc::new();
    settings_doc.set("name", "example_database");
    settings_doc.set("version", "1.0.0");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings_doc, &default_key)?;

    // Access database settings through a transaction
    let transaction = database.new_transaction()?;
    let settings_store = transaction.get_settings()?;

    // Access common settings
    let name = settings_store.get_name()?;
    println!("Database name: {}", name);

    // Access custom settings via the underlying DocStore
    let doc_store = settings_store.as_doc_store();
    if let Ok(version_value) = doc_store.get("version") {
        println!("Database version available");
    }

    transaction.commit()?;
    Ok(())
}
```
Common settings include:
- `name`: The identifier for the database (used by `User::find_database`). This is the primary standard setting currently used.
- Other application-specific settings can be stored here.
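As a hedged illustration of how the `name` setting is used (the exact signature and return type of `User::find_database` are assumptions here, not confirmed by this guide):

```rust
// Hypothetical usage sketch: the argument and return type of find_database
// are assumptions based on the description above.
let database = user.find_database("example_database")?;
```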
Tips and History
Databases in Eidetica maintain a concept of "tips" - the latest entries in the database's history. Tips represent the current state of the database and are managed automatically by the system.
When you create transactions and commit changes, Eidetica automatically:
- Updates the database tips to point to your new entries
- Maintains the complete history of all previous states
- Ensures efficient access to the current state through tip tracking
This historical information remains accessible, allowing you to:
- Track all changes to data over time
- Reconstruct the state at any point in history (requires manual traversal or specific backend support - see Backends)
Database vs. Store
While a Database is the logical container, the actual data is organized into Stores. This separation allows:
- Different types of data structures within a single Database
- Type-safe access to different parts of your data
- Fine-grained history tracking by store
- Efficient partial replication and synchronization
See Stores for more details on how data is structured within a Database.
Database Storage
Database storage implementations in Eidetica define how and where data is physically stored.
The Database Abstraction
The Database trait abstracts the underlying storage mechanism for Eidetica entries. This separation of concerns allows the core database logic to remain independent of the specific storage details.
Key responsibilities of a Database:
- Storing and retrieving entries by their unique IDs
- Tracking relationships between entries
- Calculating tips (latest entries) for databases and stores
- Managing the graph-like structure of entry history
Available Database Implementations
InMemory
The InMemory database is currently the primary storage implementation:
- Stores all entries in memory
- Can load from and save to a JSON file
- Well-suited for development, testing, and applications with moderate data volumes
- Simple to use and requires no external dependencies
Example usage:
```rust
use eidetica::{Instance, backend::database::InMemory};
use std::path::PathBuf;

// Create a new in-memory database
let database = InMemory::new();
let db = Instance::open(Box::new(database))?;

// ... use the database ...

// Save to a file (optional)
let path = PathBuf::from("my_database.json");
let database_guard = db.backend();
if let Some(in_memory) = database_guard.as_any().downcast_ref::<InMemory>() {
    in_memory.save_to_file(&path)?;
}

// Load from a file
let database = InMemory::load_from_file(&path)?;
let db = Instance::open(Box::new(database))?;
```
Note: The InMemory database is the only storage implementation currently provided with Eidetica.
Database Trait Responsibilities
The Database trait (eidetica::backend::Database) defines the core interface required for storage. Beyond simple get and put for entries, it includes methods crucial for navigating the database's history and structure:
- `get_tips(tree_id)`: Finds the latest entries in a specific `Database`.
- `get_subtree_tips(tree_id, subtree_name)`: Finds the latest entries for a specific `Store` within a `Database`.
- `all_roots()`: Finds all top-level `Database` roots stored in the database.
- `get_tree(tree_id)` / `get_subtree(...)`: Retrieve all entries for a database/store, typically sorted topologically (required for some history operations, potentially expensive).
Implementing these methods efficiently often requires the database to understand the DAG structure, making the database more than just a simple key-value store.
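For orientation, a hedged sketch of the shape of these methods follows. The parameter and return types (`ID`, `Entry`, the error type) are placeholder assumptions for illustration; consult the real `eidetica::backend::Database` trait for the actual signatures.

```rust
// A sketch of the history-navigation portion of the storage interface,
// with placeholder types standing in for the library's real ones.
type ID = String; // placeholder: content-address of an entry
struct Entry;     // placeholder: an immutable entry
type Result<T> = std::result::Result<T, Box<dyn std::error::Error>>;

trait DatabaseSketch {
    // Latest entries (tips) of a whole database
    fn get_tips(&self, tree_id: &ID) -> Result<Vec<ID>>;
    // Latest entries of one named store within a database
    fn get_subtree_tips(&self, tree_id: &ID, subtree_name: &str) -> Result<Vec<ID>>;
    // All top-level database roots held by this backend
    fn all_roots(&self) -> Result<Vec<ID>>;
    // Full, topologically sorted history (potentially expensive)
    fn get_tree(&self, tree_id: &ID) -> Result<Vec<Entry>>;
    fn get_subtree(&self, tree_id: &ID, subtree_name: &str) -> Result<Vec<Entry>>;
}
```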
Database Performance Considerations
The Database implementation significantly impacts database performance:
- Entry Retrieval: How quickly entries can be accessed by ID
- Graph Traversal: Efficiency of history traversal and tip calculation
- Memory Usage: How entries are stored and whether they're kept in memory
- Concurrency: How concurrent operations are handled
Stores
Stores provide structured, type-safe access to different kinds of data within a Database.
The Store Concept
In Eidetica, Stores extend the Merkle-CRDT concept by explicitly partitioning data within each Entry. A Store:
- Represents a specific type of data structure (like a key-value store or a collection of records)
- Has a unique name within its parent Database
- Maintains its own history tracking
- Is strongly typed (via Rust generics)
Stores are what make Eidetica practical for real applications, as they provide high-level, data-structure-aware interfaces on top of the core Entry and Database concepts.
Why Stores?
Stores offer several advantages:
- Type Safety: Each store implementation provides appropriate methods for its data type
- Isolation: Changes to different stores can be tracked separately
- Composition: Multiple data structures can exist within a single Database
- Efficiency: Only relevant stores need to be loaded or synchronized
- Atomic Operations: Changes across multiple stores can be committed atomically
Available Store Types
Eidetica provides several store types, each optimized for different data patterns:
| Type | Purpose | Key Features | Best For |
|---|---|---|---|
| DocStore | Document storage | Path-based operations, nested structures | Configuration, metadata, structured docs |
| Table<T> | Record collections | Auto-generated UUIDs, type safety, search | User lists, products, any structured records |
| SettingsStore | Database settings | Type-safe settings API, auth management | Database configuration, authentication |
| YDoc | Collaborative editing | Y-CRDT integration, real-time sync | Shared documents, collaborative text editing |
| PasswordStore | Encrypted wrapper | Password-based encryption, wraps any store | Sensitive data, secrets, credentials |
DocStore (Document-Oriented Storage)
The DocStore store provides a document-oriented interface for storing and retrieving structured data. It wraps the crdt::Doc type to provide ergonomic access patterns with both simple key-value operations and path-based operations for nested data structures.
Basic Usage
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::DocStore, path};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Get a DocStore store
    let op = database.new_transaction()?;
    let store = op.get_store::<DocStore>("app_data")?;

    // Set simple values
    store.set("version", "1.0.0")?;
    store.set("author", "Alice")?;

    // Path-based operations for nested structures
    // This creates nested maps: {"database": {"host": "localhost", "port": "5432"}}
    store.set_path(path!("database.host"), "localhost")?;
    store.set_path(path!("database.port"), "5432")?;

    // Retrieve values
    let version = store.get("version")?; // Returns a Value
    let host = store.get_path(path!("database.host"))?; // Returns Value

    op.commit()?;
    Ok(())
}
```
Important: Path Operations Create Nested Structures
When using set_path("a.b.c", value), DocStore creates nested maps, not flat keys with dots:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::DocStore, path};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    let op = database.new_transaction()?;
    let store = op.get_store::<DocStore>("app_data")?;

    // This code:
    store.set_path(path!("user.profile.name"), "Bob")?;

    // Creates this structure:
    // {
    //   "user": {
    //     "profile": {
    //       "name": "Bob"
    //     }
    //   }
    // }
    // NOT: { "user.profile.name": "Bob" } ❌

    op.commit()?;
    Ok(())
}
```
Use cases for DocStore:
- Application configuration
- Metadata storage
- Structured documents
- Settings management
- Any data requiring path-based access
Table
The Table<T> store manages collections of serializable items, similar to a table in a database:
```rust
extern crate eidetica;
extern crate serde;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::Table};
use serde::{Serialize, Deserialize};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Define a struct for your data
    #[derive(Serialize, Deserialize, Clone)]
    struct User {
        name: String,
        email: String,
        active: bool,
    }

    // Get a Table store
    let op = database.new_transaction()?;
    let users = op.get_store::<Table<User>>("users")?;

    // Insert items (returns a generated UUID)
    let user = User {
        name: "Alice".to_string(),
        email: "alice@example.com".to_string(),
        active: true,
    };
    let id = users.insert(user)?;

    // Get an item by ID
    if let Ok(user) = users.get(&id) {
        println!("Found user: {}", user.name);
    }

    // Update an item
    if let Ok(mut user) = users.get(&id) {
        user.active = false;
        users.set(&id, user)?;
    }

    // Delete an item
    let was_deleted = users.delete(&id)?;
    if was_deleted {
        println!("User deleted successfully");
    }

    // Search for items matching a condition
    let active_users = users.search(|user| user.active)?;
    for (id, user) in active_users {
        println!("Active user: {} (ID: {})", user.name, id);
    }

    op.commit()?;
    Ok(())
}
```
Use cases for Table:
- Collections of structured objects
- Record storage (users, products, todos, etc.)
- Any data where individual items need unique IDs
- When you need to search across records with custom predicates
SettingsStore (Database Settings Management)
The SettingsStore provides a specialized, type-safe interface for managing database settings and authentication configuration. It wraps the internal _settings subtree to provide convenient methods for common settings operations.
Basic Usage
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::SettingsStore};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Get a SettingsStore for the current transaction
    let transaction = database.new_transaction()?;
    let settings_store = transaction.get_settings()?;

    // Set database name
    settings_store.set_name("My Application Database")?;

    // Get database name
    let name = settings_store.get_name()?;
    println!("Database name: {}", name);

    transaction.commit()?;
    Ok(())
}
```
Authentication Management
SettingsStore provides convenient methods for managing authentication keys:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::SettingsStore};
use eidetica::auth::{AuthKey, Permission};
use eidetica::auth::crypto::{generate_keypair, format_public_key};

fn main() -> eidetica::Result<()> {
    // Setup database for testing
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "stores_auth_example");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Generate a keypair for the new user
    let (_alice_signing_key, alice_verifying_key) = generate_keypair();
    let alice_public_key = format_public_key(&alice_verifying_key);

    let transaction = database.new_transaction()?;
    let settings_store = transaction.get_settings()?;

    // Add a new authentication key
    let auth_key = AuthKey::active(
        &alice_public_key,
        Permission::Write(10),
    )?;
    settings_store.set_auth_key("alice", auth_key)?;

    // Get an authentication key
    let key = settings_store.get_auth_key("alice")?;
    println!("Alice's key: {}", key.pubkey());

    // Revoke a key
    settings_store.revoke_auth_key("alice")?;

    transaction.commit()?;
    Ok(())
}
```
Complex Updates with Closures
For complex operations that need to be atomic, use the update_auth_settings method:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::SettingsStore};
use eidetica::auth::{AuthKey, Permission};
use eidetica::auth::crypto::{generate_keypair, format_public_key};

fn main() -> eidetica::Result<()> {
    // Setup database for testing
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "complex_auth_example");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Generate keypairs for multiple users
    let (_bob_signing_key, bob_verifying_key) = generate_keypair();
    let bob_public_key = format_public_key(&bob_verifying_key);
    let bob_key = AuthKey::active(&bob_public_key, Permission::Write(20))?;

    let (_charlie_signing_key, charlie_verifying_key) = generate_keypair();
    let charlie_public_key = format_public_key(&charlie_verifying_key);
    let charlie_key = AuthKey::active(&charlie_public_key, Permission::Admin(15))?;

    let (_old_user_signing_key, old_user_verifying_key) = generate_keypair();
    let old_user_public_key = format_public_key(&old_user_verifying_key);
    let old_user_key = AuthKey::active(&old_user_public_key, Permission::Write(30))?;

    // Add old_user first so we can revoke it
    let setup_txn = database.new_transaction()?;
    let setup_store = setup_txn.get_settings()?;
    setup_store.set_auth_key("old_user", old_user_key)?;
    setup_txn.commit()?;

    let transaction = database.new_transaction()?;
    let settings_store = transaction.get_settings()?;

    // Perform multiple auth operations atomically
    settings_store.update_auth_settings(|auth| {
        // Add multiple keys
        auth.overwrite_key("bob", bob_key)?;
        auth.overwrite_key("charlie", charlie_key)?;

        // Revoke an old key
        auth.revoke_key("old_user")?;

        Ok(())
    })?;

    transaction.commit()?;
    Ok(())
}
```
Advanced Usage
For operations not covered by the convenience methods, access the underlying DocStore:
```rust
let transaction = database.new_transaction()?;
let settings_store = transaction.get_settings()?;

// Access underlying DocStore for advanced operations
let doc_store = settings_store.as_doc_store();
doc_store.set_path(path!("custom.config.option"), "value")?;

transaction.commit()?;
```
Use cases for SettingsStore:
- Database configuration and metadata
- Authentication key management
- User permission management
- Bootstrap and sync policies
- Any settings that need type-safe, validated access
YDoc (Y-CRDT Integration)
The YDoc store provides integration with Y-CRDT (Yjs) for real-time collaborative editing. This requires the "y-crdt" feature:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::YDoc};
use eidetica::y_crdt::{Map, Text, Transact};

fn main() -> eidetica::Result<()> {
    // Setup database for testing
    let backend = InMemory::new();
    let instance = Instance::open(Box::new(backend))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "y_crdt_stores");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Get a YDoc store
    let op = database.new_transaction()?;
    let doc_store = op.get_store::<YDoc>("document")?;

    // Work with Y-CRDT structures
    doc_store.with_doc_mut(|doc| {
        let text = doc.get_or_insert_text("content");
        let metadata = doc.get_or_insert_map("meta");
        let mut txn = doc.transact_mut();

        // Collaborative text editing
        text.insert(&mut txn, 0, "Hello, collaborative world!");

        // Set metadata
        metadata.insert(&mut txn, "title", "My Document");
        metadata.insert(&mut txn, "author", "Alice");

        Ok(())
    })?;

    op.commit()?;
    Ok(())
}
```
Use cases for YDoc:
- Real-time collaborative text editing
- Shared documents with multiple editors
- Conflict-free data synchronization
- Applications requiring sophisticated merge algorithms
PasswordStore (Encrypted Wrapper)
PasswordStore wraps any other store type with transparent password-based encryption. All data is encrypted using AES-256-GCM before being stored, with keys derived from a password using Argon2id.
For detailed usage and examples, see the Encryption Guide.
Subtree Index
Eidetica automatically maintains an index of all user-created subtrees in a special _index subtree. This index stores metadata about each subtree, including its Store type and configuration.
What is the Subtree Index?
The _index subtree tracks:
- Subtree names: Which subtrees exist in the database
- Store types: What type of Store manages each subtree (e.g., "docstore:v0", "table:v0")
- Configuration: Store-specific settings for each subtree
The index is maintained automatically when you access stores via get_store() and is useful for:
- Discovery: Finding what subtrees exist in a database
- Type information: Understanding what Store type manages each subtree
- Tooling: Building generic database browsers and inspectors
The index is accessed via Transaction::get_index(), which returns a Registry - a general-purpose type for managing name → {type, config} mappings.
Automatic Registration
When you first access a Store using Transaction::get_store(), it's automatically registered in the _index with its Store type and default configuration:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::DocStore};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // First access to "app_config" - automatically registered in _index
    let txn = database.new_transaction()?;
    let config: DocStore = txn.get_store("app_config")?;
    config.set("version", "1.0.0")?;
    txn.commit()?;

    // The 'app_config' Store is now registered with type "docstore:v0"
    Ok(())
}
```
Registration happens immediately when get_store() is called for a new subtree.
System Subtrees: The special system subtrees (_settings, _index, _root) are excluded from the index to avoid circular dependencies.
Querying the Index
Use get_index() to query information about registered subtrees:
```rust
extern crate eidetica;
extern crate serde;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::{DocStore, Table}};
use serde::{Serialize, Deserialize};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Create some subtrees first
    #[derive(Serialize, Deserialize, Clone)]
    struct User {
        name: String,
    }

    let setup_txn = database.new_transaction()?;
    let _config: DocStore = setup_txn.get_store("config")?;
    let _users: Table<User> = setup_txn.get_store("users")?;
    setup_txn.commit()?;

    // Query the index to discover subtrees
    let txn = database.new_transaction()?;
    let index = txn.get_index()?;

    // List all registered subtrees
    let subtrees = index.list()?;
    for name in subtrees {
        println!("Found subtree: {}", name);
    }

    // Check if a specific subtree exists
    if index.contains("config") {
        // Get metadata about the subtree
        let info = index.get_entry("config")?;
        println!("Type: {}", info.type_id);  // e.g., "docstore:v0"
        println!("Config: {}", info.config); // Store-specific configuration
    }

    Ok(())
}
```
Manual Registration
You can manually register or update subtree metadata using set_entry() on the index. This is useful for pre-registering subtrees with custom configuration:
```rust
let txn = database.new_transaction()?;
let index = txn.get_index()?;

// Pre-register a subtree with custom configuration
index.set_entry(
    "documents",
    "ydoc:v0",
    r#"{"compression":"zstd","cache_size":1024}"#,
)?;

txn.commit()?;

// Future accesses will use the registered configuration
```
When to Use the Subtree Index
Many applications don't need to interact with the subtree index directly and can let auto-registration handle everything automatically. Use get_index() when you need to:
- List subtrees: Build a database browser or inspector
- Query metadata: Check Store types or configurations
- Pre-configure: Set custom configuration before first use
- Build tooling: Create generic tools that work with any database structure
For more information on how the index system works internally, see the Subtree Index Design Document.
Store Implementation Details
Each Store implementation in Eidetica:
- Implements the `Store` trait
- Provides methods appropriate for its data structure
- Handles serialization/deserialization of data
- Manages the store's history within the Database
The Store trait defines the minimal interface:
```rust
pub trait Store: Sized {
    fn new(op: &Transaction, store_name: &str) -> Result<Self>;
    fn name(&self) -> &str;
}
```
Store implementations add their own methods on top of this minimal interface.
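As a hedged sketch of what building on this interface looks like (the import paths and how a real store would stage data through the `Transaction` are assumptions; that internal API is not covered in this guide):

```rust
// Hypothetical custom store that implements only the minimal Store trait.
// A real implementation would add typed read/write methods that stage data
// through the Transaction, the way DocStore and Table do.
use eidetica::{store::Store, Transaction, Result}; // paths are assumptions

pub struct AuditLog {
    name: String,
}

impl Store for AuditLog {
    fn new(_op: &Transaction, store_name: &str) -> Result<Self> {
        Ok(AuditLog { name: store_name.to_string() })
    }

    fn name(&self) -> &str {
        &self.name
    }
}
```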
Store History and Merging (CRDT Aspects)
While Eidetica uses Merkle-DAGs for overall history, the way data within a Store is combined when branches merge relies on Conflict-free Replicated Data Type (CRDT) principles. This ensures that even if different replicas of the database have diverged and made concurrent changes, they can be merged back together automatically without conflicts (though the merge result depends on the CRDT strategy).
Each Store type implements its own merge logic, typically triggered implicitly when a Transaction reads the current state of the store (which involves finding and merging the tips of that store's history):

- DocStore: Implements a Last-Writer-Wins (LWW) strategy using the internal `Doc` type. When merging concurrent writes to the same key or path, the write associated with the later `Entry` "wins", and its value is kept. Writes to different keys are simply combined. Deleted keys (via `delete()`) are tracked with tombstones to ensure deletions propagate properly.
- `Table<T>`: Also uses LWW for updates to the same row ID. If two concurrent operations modify the same row, the later write wins. Inserts of different rows are combined (all inserted rows are kept). Deletions generally take precedence over concurrent updates (though precise semantics might evolve).

Note: The CRDT merge logic happens internally when a Transaction loads the initial state of a Store or when a store viewer is created. You typically don't invoke merge logic directly.
Future Store Types
Eidetica's architecture allows for adding new Store implementations. Potential future types include:
- ObjectStore: For storing large binary blobs.
These are not yet implemented. Development is currently focused on the core API and the existing DocStore and Table types.
Transactions: Atomic Changes
In Eidetica, all modifications to the data stored within a Database's Stores happen through a Transaction. This is a fundamental concept ensuring atomicity and providing a consistent mechanism for interacting with your data.
Authentication Note: All transactions in Eidetica are authenticated by default. Every transaction uses the database's default signing key to ensure that all changes are cryptographically verified and can be traced to their source.
A Transaction bundles multiple Store operations (which affect individual subtrees) into a single atomic Entry that gets committed to the database.
Why Transactions?
Transactions provide several key benefits:
- Atomicity: Changes made to multiple `Store`s within a single `Transaction` are committed together as one atomic unit. If the `commit()` fails, no changes are persisted. This is similar to transactions in traditional databases.
- Consistency: A `Transaction` captures a snapshot of the `Database`'s state (specifically, the tips of the relevant `Store`s) when it's created or when a `Store` is first accessed within it. All reads and writes within that `Transaction` occur relative to this consistent state.
- Change Staging: Modifications made via `Store` handles are staged within the `Transaction` object itself, not written directly to the database until `commit()` is called.
- Authentication: All transactions are automatically authenticated using the database's default signing key, ensuring data integrity and access control.
- History Creation: A successful `commit()` results in the creation of a new `Entry` in the `Database`, containing the staged changes and linked to the previous state (the tips the `Transaction` was based on). This is how history is built.
The Transaction Lifecycle
Using a Transaction follows a distinct lifecycle:
- Creation: Start an authenticated transaction from a `Database` instance.

  ```rust
  extern crate eidetica;
  use eidetica::{backend::database::InMemory, Instance, crdt::Doc};

  fn main() -> eidetica::Result<()> {
      // Setup database
      let backend = InMemory::new();
      let instance = Instance::open(Box::new(backend))?;
      instance.create_user("alice", None)?;
      let mut user = instance.login_user("alice", None)?;
      let mut settings = Doc::new();
      settings.set("name", "test");
      let default_key = user.get_default_key()?;
      let database = user.create_database(settings, &default_key)?;

      let _txn = database.new_transaction()?; // Automatically uses the database's default signing key
      Ok(())
  }
  ```

- Store Access: Get handles to the specific `Store`s you want to interact with. This implicitly loads the current state (tips) of that store into the transaction if accessed for the first time.

  ```rust
  extern crate eidetica;
  extern crate serde;
  use eidetica::{backend::database::InMemory, Instance, crdt::Doc, store::{Table, DocStore, SettingsStore}, Database};
  use serde::{Serialize, Deserialize};

  #[derive(Clone, Debug, Serialize, Deserialize)]
  struct User {
      name: String,
  }

  fn main() -> eidetica::Result<()> {
      // Setup database and transaction
      let backend = InMemory::new();
      let instance = Instance::open(Box::new(backend))?;
      instance.create_user("alice", None)?;
      let mut user = instance.login_user("alice", None)?;
      let mut settings = Doc::new();
      settings.set("name", "test");
      let default_key = user.get_default_key()?;
      let database = user.create_database(settings, &default_key)?;
      let txn = database.new_transaction()?;

      // Get handles within a scope or manage their lifetime
      let _users_store = txn.get_store::<Table<User>>("users")?;
      let _config_store = txn.get_store::<DocStore>("config")?;
      let _settings_store = txn.get_settings()?; // For database settings

      txn.commit()?;
      Ok(())
  }
  ```

- Staging Changes: Use the methods provided by the `Store` handles (`set`, `insert`, `get`, `remove`, etc.). These methods interact with the data staged within the `Transaction`.

  ```rust
  extern crate eidetica;
  extern crate serde;
  use eidetica::{backend::database::InMemory, Instance, crdt::Doc, store::{Table, DocStore, SettingsStore}};
  use serde::{Serialize, Deserialize};

  #[derive(Clone, Debug, Serialize, Deserialize)]
  struct User {
      name: String,
  }

  fn main() -> eidetica::Result<()> {
      // Setup database and transaction
      let backend = InMemory::new();
      let instance = Instance::open(Box::new(backend))?;
      instance.create_user("alice", None)?;
      let mut user = instance.login_user("alice", None)?;
      let mut settings = Doc::new();
      settings.set("name", "test");
      let default_key = user.get_default_key()?;
      let database = user.create_database(settings, &default_key)?;
      let txn = database.new_transaction()?;
      let users_store = txn.get_store::<Table<User>>("users")?;
      let config_store = txn.get_store::<DocStore>("config")?;
      let settings_store = txn.get_settings()?;

      // Insert a new user and get their ID
      let user_id = users_store.insert(User { name: "Alice".to_string() })?;
      let _current_user = users_store.get(&user_id)?;

      config_store.set("last_updated", "2024-01-15T10:30:00Z")?;
      settings_store.set_name("Updated Database Name")?; // Manage database settings

      txn.commit()?;
      Ok(())
  }
  ```

  Note: `get` methods within a transaction read from the staged state, reflecting any changes already made within the same transaction.

- Commit: Finalize the changes. This consumes the `Transaction` object, calculates the final `Entry` content based on staged changes, cryptographically signs the entry, writes the new `Entry` to the `Database`, and returns the `ID` of the newly created `Entry`.

  ```rust
  extern crate eidetica;
  use eidetica::{backend::database::InMemory, Instance, crdt::Doc};

  fn main() -> eidetica::Result<()> {
      // Setup database
      let backend = InMemory::new();
      let instance = Instance::open(Box::new(backend))?;
      instance.create_user("alice", None)?;
      let mut user = instance.login_user("alice", None)?;
      let mut settings = Doc::new();
      settings.set("name", "test");
      let default_key = user.get_default_key()?;
      let database = user.create_database(settings, &default_key)?;

      // Create transaction and commit
      let txn = database.new_transaction()?;
      let new_entry_id = txn.commit()?;
      println!("Changes committed. New state represented by Entry: {}", new_entry_id);
      Ok(())
  }
  ```

  After `commit()`, the `txn` variable is no longer valid.
Managing Database Settings
Within transactions, you can manage database settings using SettingsStore. This provides type-safe access to database configuration and authentication settings:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::SettingsStore};
use eidetica::auth::{AuthKey, Permission};
use eidetica::auth::crypto::{generate_keypair, format_public_key};

fn main() -> eidetica::Result<()> {
    // Setup database for testing
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "settings_example");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Generate a keypair for an old user and add it first so we can revoke it
    let (_old_user_signing_key, old_user_verifying_key) = generate_keypair();
    let old_user_public_key = format_public_key(&old_user_verifying_key);
    let old_user_key = AuthKey::active(&old_user_public_key, Permission::Write(15))?;

    let setup_txn = database.new_transaction()?;
    let setup_store = setup_txn.get_settings()?;
    setup_store.set_auth_key("old_user", old_user_key)?;
    setup_txn.commit()?;

    let transaction = database.new_transaction()?;
    let settings_store = transaction.get_settings()?;

    // Update database name
    settings_store.set_name("Production Database")?;

    // Generate keypairs for new users (hidden in production code)
    let (_new_user_signing_key, new_user_verifying_key) = generate_keypair();
    let new_user_public_key = format_public_key(&new_user_verifying_key);
    let (_alice_signing_key, alice_verifying_key) = generate_keypair();
    let alice_public_key = format_public_key(&alice_verifying_key);

    // Add authentication keys
    let new_user_key = AuthKey::active(
        &new_user_public_key,
        Permission::Write(10),
    )?;
    settings_store.set_auth_key("new_user", new_user_key)?;

    // Complex auth operations atomically
    let alice_key = AuthKey::active(&alice_public_key, Permission::Write(5))?;
    settings_store.update_auth_settings(|auth| {
        auth.overwrite_key("alice", alice_key)?;
        auth.revoke_key("old_user")?;
        Ok(())
    })?;

    transaction.commit()?;
    Ok(())
}
```
This ensures that settings changes are atomic and properly authenticated alongside other database modifications.
Read-Only Access
While Transactions are essential for writes, you can perform reads without an explicit Transaction using Database::get_store_viewer:
```rust
extern crate eidetica;
extern crate serde;
use eidetica::{backend::database::InMemory, Instance, crdt::Doc, store::Table, Database};
use serde::{Serialize, Deserialize};

#[derive(Clone, Debug, Serialize, Deserialize)]
struct User {
    name: String,
}

fn main() -> eidetica::Result<()> {
    // Setup database with some data
    let backend = InMemory::new();
    let instance = Instance::open(Box::new(backend))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Insert test data
    let txn = database.new_transaction()?;
    let users_store = txn.get_store::<Table<User>>("users")?;
    let user_id = users_store.insert(User { name: "Alice".to_string() })?;
    txn.commit()?;

    let users_viewer = database.get_store_viewer::<Table<User>>("users")?;
    if let Ok(_user) = users_viewer.get(&user_id) {
        // Read data based on the current tips of the 'users' store
    }
    Ok(())
}
```
A SubtreeViewer provides read-only access based on the latest committed state (tips) of that specific store at the time the viewer is created. It does not allow modifications and does not require a commit().
Choose Transaction when you need to make changes or require a transaction-like boundary for multiple reads/writes. Choose SubtreeViewer for simple, read-only access to the latest state.
Authentication Guide
How to use Eidetica's authentication system for securing your data.
Quick Start
Every Eidetica database requires authentication. Here's the minimal setup:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory};
use eidetica::crdt::Doc;

fn main() -> eidetica::Result<()> {
    let backend = InMemory::new();
    let instance = Instance::open(Box::new(backend))?;

    // Create and login a passwordless user (generates Ed25519 keypair automatically)
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;

    // Create a database using the user's default key
    let mut settings = Doc::new();
    settings.set("name", "my_database");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // All operations are now authenticated
    let op = database.new_transaction()?;
    // ... make changes ...
    op.commit()?; // Automatically signed
    Ok(())
}
```
Key Concepts
Mandatory Authentication: Every entry must be signed - no exceptions.
Permission Levels:
- Admin: Can modify settings and manage keys
- Write: Can read and write data
- Read: Can only read data
Key Storage: Private keys are stored in Instance, public keys in database settings.
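In code, this split looks roughly like the sketch below, reusing calls shown elsewhere in this guide (`get_signing_key`, `format_public_key`, `set_auth_key`). The private half stays in the Instance; only the formatted public key is published into the database settings. Identifiers continue the quick-start example above, and the key name `"alice_laptop"` is illustrative:

```rust
// Private key: held by the local Instance, retrieved via the logged-in user.
let key_id = user.get_default_key()?;
let signing_key = user.get_signing_key(&key_id)?;

// Public key: formatted and written into the database's auth settings.
let pubkey = eidetica::auth::crypto::format_public_key(&signing_key.verifying_key());
let txn = database.new_transaction()?;
let settings_store = txn.get_settings()?;
settings_store.set_auth_key("alice_laptop", AuthKey::active(&pubkey, Permission::Write(10))?)?;
txn.commit()?;
```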
Common Tasks
Adding Users
Give other users access to your database:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::SettingsStore};
use eidetica::auth::{AuthKey, Permission};
use eidetica::auth::crypto::{generate_keypair, format_public_key};

fn main() -> eidetica::Result<()> {
    // Setup database for testing
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "auth_example");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    let transaction = database.new_transaction()?;

    // Generate a keypair for the new user
    let (_alice_signing_key, alice_verifying_key) = generate_keypair();
    let alice_public_key = format_public_key(&alice_verifying_key);

    let settings_store = transaction.get_settings()?;

    // Add a user with write access
    let user_key = AuthKey::active(&alice_public_key, Permission::Write(10))?;
    settings_store.set_auth_key("alice", user_key)?;

    transaction.commit()?;
    Ok(())
}
```
Making Data Public (Read-Only)
Allow anyone to read your database:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory};
use eidetica::store::SettingsStore;
use eidetica::auth::{AuthKey, Permission};
use eidetica::crdt::Doc;

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    let transaction = database.new_transaction()?;
    let settings_store = transaction.get_settings()?;

    // Wildcard key for public read access
    let public_key = AuthKey::active("*", Permission::Read)?;
    settings_store.set_auth_key("*", public_key)?;

    transaction.commit()?;
    Ok(())
}
```
Collaborative Databases (Read-Write)
Create a collaborative database where anyone can read and write without individual key management:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory};
use eidetica::store::SettingsStore;
use eidetica::auth::{AuthKey, Permission};
use eidetica::crdt::Doc;

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "collaborative_notes");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Set up global write permissions
    let transaction = database.new_transaction()?;
    let settings_store = transaction.get_settings()?;

    // Global permission allows any device to read and write
    let collaborative_key = AuthKey::active("*", Permission::Write(10))?;
    settings_store.set_auth_key("*", collaborative_key)?;

    transaction.commit()?;
    Ok(())
}
```
How it works:
- Any device can bootstrap without approval (global permission grants access)
- Devices discover available SigKeys using `Database::find_sigkeys()`
- Select a SigKey from the available options (will include `"*"` for global permissions)
- Open the database with the selected SigKey
- All transactions automatically use the configured permissions
- No individual keys are added to the database's auth settings
Example of opening a collaborative database:
```rust
extern crate eidetica;
use eidetica::{Instance, Database, backend::database::InMemory};
use eidetica::auth::crypto::{generate_keypair, format_public_key};
use eidetica::auth::types::SigKey;

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    let (signing_key, verifying_key) = generate_keypair();
    let database_root_id = "collaborative_db_root".into();

    // Get your public key
    let pubkey = format_public_key(&verifying_key);

    // Discover all SigKeys this public key can use
    let sigkeys = Database::find_sigkeys(&instance, &database_root_id, &pubkey)?;

    // Use the first available SigKey (will be "*" for global permissions)
    if let Some((sigkey, _permission)) = sigkeys.first() {
        let sigkey_str = match sigkey {
            SigKey::Direct(name) => name.clone(),
            _ => panic!("Delegation paths not yet supported"),
        };

        // Open the database with the discovered SigKey
        let database = Database::open(instance, &database_root_id, signing_key, sigkey_str)?;

        // Create transactions as usual
        let txn = database.new_transaction()?;
        // ... make changes ...
        txn.commit()?;
    }
    Ok(())
}
```
This is ideal for:
- Team collaboration spaces
- Shared notes and documents
- Public wikis
- Development/testing environments
Security note: Use appropriate permission levels. Write(10) allows Write and Read operations but not Admin operations (managing keys and settings).
Revoking Access
Remove a user's access:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory};
use eidetica::store::SettingsStore;
use eidetica::auth::{AuthKey, Permission};
use eidetica::crdt::Doc;

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // First add the alice key so we can revoke it
    let transaction_setup = database.new_transaction()?;
    let settings_setup = transaction_setup.get_settings()?;
    settings_setup.set_auth_key("alice", AuthKey::active("*", Permission::Write(10))?)?;
    transaction_setup.commit()?;

    let transaction = database.new_transaction()?;
    let settings_store = transaction.get_settings()?;

    // Revoke the key
    settings_store.revoke_auth_key("alice")?;

    transaction.commit()?;
    Ok(())
}
```
Note: Historical entries created by revoked keys remain valid.
Multi-User Setup Example
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::SettingsStore};
use eidetica::auth::{AuthKey, Permission};
use eidetica::auth::crypto::{generate_keypair, format_public_key};

fn main() -> eidetica::Result<()> {
    // Setup database for testing
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "multi_user_example");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    let transaction = database.new_transaction()?;
    let settings_store = transaction.get_settings()?;

    // Generate keypairs for different users
    let (_super_admin_signing_key, super_admin_verifying_key) = generate_keypair();
    let super_admin_public_key = format_public_key(&super_admin_verifying_key);
    let (_dept_admin_signing_key, dept_admin_verifying_key) = generate_keypair();
    let dept_admin_public_key = format_public_key(&dept_admin_verifying_key);
    let (_user1_signing_key, user1_verifying_key) = generate_keypair();
    let user1_public_key = format_public_key(&user1_verifying_key);

    // Use update_auth_settings for complex multi-key setup
    settings_store.update_auth_settings(|auth| {
        // Super admin (priority 0 - highest)
        auth.overwrite_key("super_admin", AuthKey::active(&super_admin_public_key, Permission::Admin(0))?)?;
        // Department admin (priority 10)
        auth.overwrite_key("dept_admin", AuthKey::active(&dept_admin_public_key, Permission::Admin(10))?)?;
        // Regular users (priority 100)
        auth.overwrite_key("user1", AuthKey::active(&user1_public_key, Permission::Write(100))?)?;
        Ok(())
    })?;

    transaction.commit()?;
    Ok(())
}
```
Key Management Tips
- Use descriptive key names: "alice_laptop", "build_server", etc.
- Set up admin hierarchy: Lower priority numbers = higher authority
- Use SettingsStore methods:
  - `set_auth_key()` for setting keys (upsert behavior)
  - `revoke_auth_key()` for removing access
  - `update_auth_settings()` for complex multi-step operations
- Regular key rotation: Periodically update keys for security
- Backup admin keys: Keep secure copies of critical admin keys
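For example, periodic key rotation can be done atomically with `update_auth_settings()`. A minimal sketch, assuming `database` from the examples above and a freshly generated replacement keypair (the key name `"alice_laptop"` is illustrative):

```rust
// Rotate "alice_laptop" to a new keypair in a single atomic commit.
let (_new_signing_key, new_verifying_key) = generate_keypair();
let new_public_key = format_public_key(&new_verifying_key);

let txn = database.new_transaction()?;
let settings_store = txn.get_settings()?;
settings_store.update_auth_settings(|auth| {
    // overwrite_key replaces the binding even when the public key differs
    auth.overwrite_key("alice_laptop", AuthKey::active(&new_public_key, Permission::Write(10))?)?;
    Ok(())
})?;
txn.commit()?;
```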
Advanced: Cross-Database Authentication (Delegation)
Delegation allows databases to reference other databases as sources of authentication keys. This enables powerful patterns like:
- Users manage their own keys in personal databases
- Multiple projects share authentication across databases
- Hierarchical access control without granting admin privileges
How Delegation Works
When you delegate to another database:
- The delegating database references another database in its `_settings.auth`
- The delegated database maintains its own keys in its `_settings.auth`
- Permission clamping ensures delegated keys can't exceed specified bounds
- Delegation paths reference keys by their name in the delegated database's auth settings
Basic Delegation Setup
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc};
use eidetica::auth::{DelegatedTreeRef, Permission, PermissionBounds, TreeReference};
use eidetica::store::SettingsStore;

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let default_key = user.get_default_key()?;

    // Create user's personal database
    let alice_database = user.create_database(Doc::new(), &default_key)?;

    // Create main project database
    let project_database = user.create_database(Doc::new(), &default_key)?;

    // Get the user's database root and current tips
    let user_root = alice_database.root_id().clone();
    let user_tips = alice_database.get_tips()?;

    // Add delegation reference to project database
    let transaction = project_database.new_transaction()?;
    let settings = transaction.get_settings()?;
    settings.update_auth_settings(|auth| {
        auth.add_delegated_tree("alice@example.com", DelegatedTreeRef {
            permission_bounds: PermissionBounds {
                max: Permission::Write(15),
                min: Some(Permission::Read),
            },
            tree: TreeReference {
                root: user_root,
                tips: user_tips,
            },
        })?;
        Ok(())
    })?;
    transaction.commit()?;
    Ok(())
}
```
Now any key in Alice's personal database can access the project database, with permissions clamped to the specified bounds.
Understanding Delegation Paths
Critical concept: A delegation path traverses through databases using two different types of key names:
- Delegation reference names - Point to other databases (DelegatedTreeRef)
- Signing key names - Point to public keys (AuthKey) for signature verification
Delegation Reference Names
These are names in the delegating database's auth settings that point to other databases:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc};
use eidetica::auth::{DelegatedTreeRef, Permission, PermissionBounds, TreeReference};
use eidetica::store::SettingsStore;

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let default_key = user.get_default_key()?;
    let alice_db = user.create_database(Doc::new(), &default_key)?;
    let alice_root = alice_db.root_id().clone();
    let alice_tips = alice_db.get_tips()?;
    let project_db = user.create_database(Doc::new(), &default_key)?;

    let transaction = project_db.new_transaction()?;
    let settings = transaction.get_settings()?;

    // In project database: "alice@example.com" points to Alice's database
    settings.update_auth_settings(|auth| {
        auth.add_delegated_tree(
            "alice@example.com", // ← Delegation reference name
            DelegatedTreeRef {
                tree: TreeReference {
                    root: alice_root,
                    tips: alice_tips,
                },
                permission_bounds: PermissionBounds {
                    max: Permission::Write(15),
                    min: Some(Permission::Read),
                },
            },
        )?;
        Ok(())
    })?;
    transaction.commit()?;
    Ok(())
}
```
This creates an entry in the project database's auth settings:
- Name: `"alice@example.com"`
- Points to: Alice's database (via TreeReference)
Signing Key Names
These are names in the delegated database's auth settings that point to public keys:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc};
use eidetica::auth::{AuthKey, Permission};
use eidetica::store::SettingsStore;

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let default_key_id = user.get_default_key()?;
    let alice_db = user.create_database(Doc::new(), &default_key_id)?;
    let signing_key = user.get_signing_key(&default_key_id)?;
    let alice_pubkey_str = eidetica::auth::crypto::format_public_key(&signing_key.verifying_key());

    let transaction = alice_db.new_transaction()?;
    let settings = transaction.get_settings()?;

    // In Alice's database: "alice_laptop" points to a public key
    // (This was added automatically during bootstrap, but we can add aliases)
    settings.update_auth_settings(|auth| {
        auth.add_key(
            "alice_work", // ← Signing key name (alias)
            AuthKey::active(
                &alice_pubkey_str, // The actual Ed25519 public key
                Permission::Write(10),
            )?,
        )?;
        Ok(())
    })?;
    transaction.commit()?;
    Ok(())
}
```
This creates an entry in Alice's database auth settings:
- Name: `"alice_work"` (an alias for the same key as `"alice_laptop"`)
- Points to: An Ed25519 public key
Using Delegated Keys
A delegation path is a sequence of steps that traverses from the delegating database to the signing key:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc};
use eidetica::auth::{SigKey, DelegationStep};
use eidetica::store::DocStore;

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let default_key = user.get_default_key()?;
    let project_db = user.create_database(Doc::new(), &default_key)?;
    let user_db = user.create_database(Doc::new(), &default_key)?;
    let user_tips = user_db.get_tips()?;

    // Create a delegation path with TWO steps:
    let delegation_path = SigKey::DelegationPath(vec![
        // Step 1: Look up "alice@example.com" in PROJECT database's auth settings
        // This is a delegation reference name pointing to Alice's database
        DelegationStep {
            key: "alice@example.com".to_string(),
            tips: Some(user_tips), // Tips for Alice's database
        },
        // Step 2: Look up "alice_laptop" in ALICE'S database's auth settings
        // This is a signing key name pointing to an Ed25519 public key
        DelegationStep {
            key: "alice_laptop".to_string(),
            tips: None, // Final step has no tips (it's a pubkey, not a tree)
        },
    ]);

    // Use the delegation path to create an authenticated operation
    // Note: This requires the actual signing key to be available
    // project_database.new_operation_with_sig_key(delegation_path)?;
    Ok(())
}
```
Path traversal:
- Start in project database auth settings
- Look up `"alice@example.com"` → finds DelegatedTreeRef → jumps to Alice's database
- Look up `"alice_laptop"` in Alice's database → finds AuthKey → gets Ed25519 public key
- Use that public key to verify the entry signature
Permission Clamping
Permissions from delegated databases are automatically clamped:
User DB key: Admin(5) → Project DB clamps to: Write(15) (max bound)
User DB key: Write(10) → Project DB keeps: Write(10) (within bounds)
User DB key: Read → Project DB keeps: Read (above min bound)
Rules:
- If delegated permission > max bound: lowered to max
- If delegated permission < min bound: raised to min (if specified)
- Permissions within bounds are preserved
- Admin permissions only apply within the delegated database
This makes it convenient to reuse the same validation rules across databases. Only an Admin can grant permissions to a database by modifying its auth settings, but delegation lets you grant lower access to a user while allowing them to use any key they want: grant access to a user-controlled database and give that reference the desired permission bounds. The user can then manage their own keys using their own Admin keys, under exactly the same rules.
Multi-Level Delegation
Delegated databases can themselves delegate to other databases, creating chains:
// Entry signed through a delegation chain:
{
"auth": {
"sig": "...",
"key": [
{
"key": "team@example.com", // Step 1: Delegation ref in Main DB → Team DB
"tips": ["team_db_tip"]
},
{
"key": "alice@example.com", // Step 2: Delegation ref in Team DB → Alice's DB
"tips": ["alice_db_tip"]
},
{
"key": "alice_laptop", // Step 3: Signing key in Alice's DB → pubkey
// No tips - this is a pubkey, not a tree
}
]
}
}
Path traversal:
- Look up `"team@example.com"` in Main DB → finds DelegatedTreeRef → jump to Team DB
- Look up `"alice@example.com"` in Team DB → finds DelegatedTreeRef → jump to Alice's DB
- Look up `"alice_laptop"` in Alice's DB → finds AuthKey → get Ed25519 public key
- Use that public key to verify the signature
Each level applies its own permission clamping, with the final effective permission being the minimum across all levels.
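To make the clamping arithmetic concrete, here is a small illustrative sketch (not the library's internal API) using the lower-number-equals-higher-permission convention:

```rust
// Illustrative only: a simplified permission model showing how per-level
// clamping composes across a delegation chain.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Perm {
    Read,
    Write(u32),
    Admin(u32),
}

// Rank a permission so that a higher tuple means more privileged.
// Within a tier, a lower priority number is more privileged, so negate it.
fn rank(p: Perm) -> (u8, i64) {
    match p {
        Perm::Read => (0, 0),
        Perm::Write(n) => (1, -(n as i64)),
        Perm::Admin(n) => (2, -(n as i64)),
    }
}

// Clamp a delegated permission into the bounds declared by the delegating database.
fn clamp(p: Perm, max: Perm, min: Option<Perm>) -> Perm {
    let mut out = p;
    if rank(out) > rank(max) {
        out = max; // too privileged: lowered to the max bound
    }
    if let Some(min) = min {
        if rank(out) < rank(min) {
            out = min; // below the floor: raised to the min bound
        }
    }
    out
}

fn main() {
    // Alice holds Admin(5) in her own database; the project clamps to Write(15).
    let level1 = clamp(Perm::Admin(5), Perm::Write(15), Some(Perm::Read));
    assert_eq!(level1, Perm::Write(15));

    // A second delegation level clamps again; the effective permission is the
    // result of applying every level's bounds, i.e. the minimum across levels.
    let level2 = clamp(level1, Perm::Write(20), None);
    assert_eq!(level2, Perm::Write(20));
    println!("effective permission: {:?}", level2);
}
```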
Common Delegation Patterns
User-Managed Access:
Project DB → delegates to → Alice's Personal DB
↓
Alice manages her own keys
Team Hierarchy:
Main DB → delegates to → Team DB → delegates to → User DB
(max: Admin) (max: Write)
Cross-Project Authentication:
Project A ───┐
├→ delegates to → Shared Auth DB
Project B ───┘
Key Aliasing
Auth settings can contain multiple names for the same public key with different permissions:
{
"_settings": {
"auth": {
"Ed25519:abc123...": {
"pubkey": "Ed25519:abc123...",
"permissions": "admin:0",
"status": "active"
},
"alice_work": {
"pubkey": "Ed25519:abc123...",
"permissions": "write:10",
"status": "active"
},
"alice_readonly": {
"pubkey": "Ed25519:abc123...",
"permissions": "read",
"status": "active"
}
}
}
}
This allows:
- The same key to have different permission contexts
- Readable delegation path names instead of public key strings
- Fine-grained access control based on how the key is referenced
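A minimal sketch of adding such an alias in code, reusing the `update_auth_settings` and `add_key` calls from earlier (identifiers continue the signing-key example above):

```rust
// Add a read-only alias for a public key that already has a write alias.
let txn = alice_db.new_transaction()?;
let settings = txn.get_settings()?;
settings.update_auth_settings(|auth| {
    auth.add_key(
        "alice_readonly", // same public key, more restrictive context
        AuthKey::active(&alice_pubkey_str, Permission::Read)?,
    )?;
    Ok(())
})?;
txn.commit()?;
```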
Best Practices
- Use descriptive delegation names: `"alice@example.com"`, `"team-engineering"`
- Update delegation tips: Keep tips current to ensure revocations are respected
- Use friendly key names: Add aliases for keys that will be used in delegation paths
- Document delegation chains: Complex hierarchies can be hard to debug
See Also
- Delegation Design - Technical details
- Permission System - How permissions work
Troubleshooting
"Authentication failed": Check that:
- The key exists in database settings
- The key status is Active (not Revoked)
- The key has sufficient permissions for the operation
"Key name conflict": When using set_auth_key() with different public key:
set_auth_key()provides upsert behavior for same public key- Returns KeyNameConflict error if key name exists with different public key
- Use
get_auth_key()to check existing key before deciding action
"Cannot modify key": Admin operations require:
- Admin-level permissions
- Equal or higher priority than the target key
Multi-device conflicts: During bootstrap sync between devices:
- If same key name with same public key: Operation succeeds (safe)
- If same key name with different public key: Operation fails (prevents conflicts)
- Consider using device-specific key names like "alice_laptop", "alice_phone"
Network partitions: Authentication changes merge automatically using Last-Write-Wins. The most recent change takes precedence.
Bootstrap Security Policy
When new devices join existing databases through bootstrap synchronization, Eidetica provides two approval methods to balance security and convenience.
Bootstrap Approval Methods
Eidetica supports two bootstrap approval approaches, checked in this order:
- Global Wildcard Permissions - Databases with global '*' permissions automatically approve bootstrap requests if the requested permission is satisfied
- Manual Approval - Default secure behavior requiring admin approval for each device
Default Security Behavior
By default, bootstrap requests are rejected for security:
// Bootstrap will fail without explicit policy configuration
client_sync.sync_with_peer_for_bootstrap(
"127.0.0.1:8080",
&database_tree_id,
Some("device_key_name"),
Some(Permission::Write(100)),
).await; // Returns PermissionDenied error
Global Wildcard Permissions (Recommended for Collaboration)
The simplest approach for collaborative databases is to use global wildcard permissions:
let mut settings = Doc::new();
let mut auth_doc = Doc::new();
// Add admin key
auth_doc.set_json("admin", serde_json::json!({
"pubkey": admin_public_key,
"permissions": {"Admin": 1},
"status": "Active"
}))?;
// Add global wildcard permission for automatic bootstrap
auth_doc.set_json("*", serde_json::json!({
"pubkey": "*",
"permissions": {"Write": 10}, // Allows Read and Write(11+) requests
"status": "Active"
}))?;
settings.set("auth", auth_doc);
Benefits:
- No per-device key management required
- Immediate bootstrap approval
- Simple configuration - one permission setting controls all devices
- See Bootstrap Guide for details
Manual Approval Process
For controlled access scenarios, use manual approval to review each bootstrap request:
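A sketch of the admin side, using the request-management calls shown in the Bootstrapping chapter (`pending_bootstrap_requests()`, `approve_bootstrap_request()`, `reject_bootstrap_request()`); it assumes `sync` is an enabled Sync handle and `"admin_key"` is an Admin-level key:

```rust
// Review queued bootstrap requests and decide on each one.
let pending = sync.pending_bootstrap_requests()?;
for (request_id, request) in pending {
    println!(
        "{} requests {} access to tree {}",
        request.requesting_key_name, request.requested_permission, request.tree_id
    );
    // Approve, or call reject_bootstrap_request to deny.
    sync.approve_bootstrap_request(&request_id, "admin_key")?;
}
```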
Security Recommendations:
- Development/Testing: Use global wildcard permissions for convenience
- Production: Use manual approval for controlled access
- Team Collaboration: Use global wildcard permissions with appropriate permission levels
- Public Databases: Use global wildcard permissions for open access, or manual approval for controlled access
Bootstrap Flow
- Client Request: Device requests access with public key and permission level
- Global Permission Check: Server checks if global '*' permission satisfies request
- Global Permission Approval: If global permission exists and satisfies request, access is granted immediately
- Manual Approval Queue: If no global permission, request is queued for admin review
- Admin Decision: Admin explicitly approves or rejects the request
- Database Access: Approved devices can read/write according to granted permissions
See Also
- Core Concepts - Understanding Databases and Entries
- Getting Started - Basic database setup
- Authentication Reference - Technical reference
Encryption Guide
PasswordStore provides transparent password-based encryption for any Store type.
Quick Start
```rust
extern crate eidetica;
use eidetica::{Instance, Registered, backend::database::InMemory, crdt::Doc, store::{PasswordStore, DocStore}};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "secrets_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Create and initialize an encrypted store
    let tx = database.new_transaction()?;
    let mut encrypted = tx.get_store::<PasswordStore>("secrets")?;
    encrypted.initialize("my_password", DocStore::type_id(), "{}")?;

    // Use the wrapped store normally
    let docstore = encrypted.unwrap::<DocStore>()?;
    docstore.set("api_key", "sk-secret-12345")?;
    tx.commit()?;
    Ok(())
}
```
Opening Existing Stores
```rust
extern crate eidetica;
use eidetica::{Instance, Registered, backend::database::InMemory, crdt::Doc, store::{PasswordStore, DocStore}};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "secrets_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    {
        let tx = database.new_transaction()?;
        let mut encrypted = tx.get_store::<PasswordStore>("secrets")?;
        encrypted.initialize("my_password", DocStore::type_id(), "{}")?;
        let docstore = encrypted.unwrap::<DocStore>()?;
        docstore.set("secret", "value")?;
        tx.commit()?;
    }

    // Use open() for existing stores instead of initialize()
    let tx = database.new_transaction()?;
    let mut encrypted = tx.get_store::<PasswordStore>("secrets")?;
    encrypted.open("my_password")?;
    let docstore = encrypted.unwrap::<DocStore>()?;
    let _secret = docstore.get("secret")?;
    tx.commit()?;
    Ok(())
}
```
Wrapping Other Store Types
PasswordStore wraps any store type. Use Registered::type_id() to get the type identifier:
```rust
extern crate eidetica;
extern crate serde;
use eidetica::{Instance, Registered, backend::database::InMemory, crdt::Doc, store::{PasswordStore, Table}};
use serde::{Serialize, Deserialize};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "creds_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    #[derive(Serialize, Deserialize, Clone)]
    struct Credential {
        service: String,
        password: String,
    }

    let tx = database.new_transaction()?;
    let mut encrypted = tx.get_store::<PasswordStore>("credentials")?;
    encrypted.initialize("vault_password", Table::<Credential>::type_id(), "{}")?;

    let table = encrypted.unwrap::<Table<Credential>>()?;
    table.insert(Credential {
        service: "github.com".to_string(),
        password: "secret_token".to_string(),
    })?;
    tx.commit()?;
    Ok(())
}
```
Security Notes
- No recovery: Lost password = lost data (by design)
- Encryption: AES-256-GCM with Argon2id key derivation
- Relay-safe: Encrypted data can sync through untrusted relays
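Because there is no recovery path, it is worth handling the failure case explicitly. A minimal sketch, assuming a wrong password makes `open()` return an error (continuing the `encrypted` store from the examples above):

```rust
// A wrong password cannot derive the correct key, so open() fails
// instead of returning garbage plaintext.
let tx = database.new_transaction()?;
let mut encrypted = tx.get_store::<PasswordStore>("secrets")?;
if let Err(e) = encrypted.open("wrong_password") {
    eprintln!("Could not unlock store: {e}");
}
```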
See Also
- PasswordStore API - Full API documentation
- Stores - Overview of all store types
Synchronization Guide
Eidetica's sync system enables real-time data synchronization between distributed peers.
Quick Start
1. Enable Sync
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.enable_sync()?;

    // Create and login a user (generates authentication key automatically)
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    Ok(())
}
```
2. Start a Server
let sync = instance.sync().unwrap();
sync.enable_http_transport()?;
// Start a server to accept connections
sync.start_server_async("127.0.0.1:8080").await?;
3. Connect and Sync
// Single API handles both bootstrap (new) and incremental (existing) sync
sync.sync_with_peer("127.0.0.1:8080", Some(&tree_id)).await?;
That's it. The system automatically detects whether you need full bootstrap or incremental sync.
Transport Options
HTTP
Simple REST-based sync. Good for development and fixed-IP deployments.
sync.enable_http_transport()?;
sync.start_server_async("127.0.0.1:8080").await?;
Iroh P2P (Recommended for Production)
QUIC-based with NAT traversal. Works through firewalls.
sync.enable_iroh_transport()?;
sync.start_server_async("ignored").await?; // Iroh manages addressing
let my_address = sync.get_server_address_async().await?; // Share this with peers
Declarative Sync API
For persistent sync relationships:
```rust
extern crate eidetica;
use eidetica::sync::{SyncPeerInfo, Address};
use eidetica::{Instance, backend::database::InMemory, crdt::Doc};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.enable_sync()?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let default_key = user.get_default_key()?;
    let db = user.create_database(Doc::new(), &default_key)?;
    let tree_id = db.root_id().clone();
    let sync = instance.sync().expect("Sync enabled");
    let peer_pubkey = "ed25519:abc123".to_string();

    // Register a peer for automatic background sync
    let handle = sync.register_sync_peer(SyncPeerInfo {
        peer_pubkey,
        tree_id,
        addresses: vec![Address {
            transport_type: "http".to_string(),
            address: "http://peer.example.com:8080".to_string(),
        }],
        auth: None,
        display_name: Some("Peer Device".to_string()),
    })?;
    Ok(())
}
```
Background sync happens automatically. Check status with handle.status()?.
Sync Settings
Configure per-database sync behavior:
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc};
use eidetica::user::types::{SyncSettings, TrackedDatabase};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.enable_sync()?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let key = user.get_default_key()?;
    let db = user.create_database(Doc::new(), &key)?;
    let db_id = db.root_id().clone();

    let tracked = TrackedDatabase {
        database_id: db_id,
        key_id: user.get_default_key()?,
        sync_settings: SyncSettings {
            sync_enabled: true,
            sync_on_commit: true,       // Sync immediately on commit
            interval_seconds: Some(60), // Also sync every 60 seconds
            properties: Default::default(),
        },
    };

    // Track this database with the User
    user.track_database(tracked)?;
    Ok(())
}
```
Authenticated Bootstrap
For joining databases that require authentication:
sync.sync_with_peer_for_bootstrap(
"127.0.0.1:8080",
&tree_id,
"device_key",
eidetica::auth::Permission::Write,
).await?;
See Bootstrap Guide for approval workflows.
Automatic Behavior
Once configured, the sync system handles:
- Immediate sync on commit (if
sync_on_commit: true) - Periodic sync at configured intervals
- Retry with exponential backoff for failed sends
- Bidirectional transfer in each sync operation
Troubleshooting
| Issue | Solution |
|---|---|
| "No transport enabled" | Call enable_http_transport() or enable_iroh_transport() |
| Sync not happening | Check peer status, network connectivity |
| Auth failures | Verify keys are configured, protocol versions match |
Example
See the Chat Example for a complete working application demonstrating multi-transport sync, bootstrap, and real-time updates.
Bootstrapping
Overview
The Bootstrap system provides secure key management for Eidetica databases by controlling how new devices gain access to synchronized databases. It supports two approval methods:
- Global Wildcard Permissions - Databases with global '*' permissions automatically approve bootstrap requests without adding new keys
- Manual Approval - Bootstrap requests are queued for administrator review and explicit approval
Global Permission Bootstrap
Global '*' permissions provide the simplest and most flexible approach for collaborative or public databases:
How It Works
When a database has global permissions configured (e.g., {"*": {"pubkey": "*", "permissions": "Write: 10"}}), bootstrap requests are automatically approved if the requested permission level is satisfied by the global permission. No new keys are added to the database.
Devices use the global permission for both bootstrap approval and subsequent operations (transactions, reads, writes). The system automatically resolves to the global "*" key when a device's specific key is not present in the database's auth settings.
Advantages
- No key management: Devices don't need individual keys added to database
- Immediate access: Bootstrap approval happens instantly
- Simple configuration: One permission setting controls all devices
- Flexible permissions: Set exactly the permission level you want to allow
Configuration Example
Configure a database with global write permission:
use eidetica::crdt::Doc;
// Create database with global permission
let mut settings = Doc::new();
let mut auth_doc = Doc::new();
// Add admin key for database management
auth_doc.set_json("admin_key", serde_json::json!({
"pubkey": "ed25519:admin_public_key_here",
"permissions": {"Admin": 1},
"status": "Active"
}))?;
// Add global permission for automatic bootstrap approval
auth_doc.set_json("*", serde_json::json!({
"pubkey": "*",
"permissions": {"Write": 10}, // Allows Read and Write(11+) requests
"status": "Active"
}))?;
settings.set("auth", auth_doc);
let database = instance.new_database(settings, "admin_key")?;
Permission Levels
Eidetica uses lower numbers = higher permissions:
- Global `Write(10)` allows: `Read`, `Write(11)`, `Write(15)`, etc.
- Global `Write(10)` denies: `Write(5)`, `Admin(*)`
Choose your global permission level carefully based on your security requirements.
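An illustrative helper (not library API) capturing this rule for the common case of a global `Write` grant; exact behavior at the equal-priority boundary follows the library's own check:

```rust
// Illustrative only: can a global Write(grant) satisfy a bootstrap request?
// Lower numbers are more privileged, so a Write request is satisfied when
// its priority number is not more privileged than the grant's.
enum Request {
    Read,
    Write(u32),
    Admin(u32),
}

fn global_write_satisfies(grant: u32, requested: Request) -> bool {
    match requested {
        Request::Read => true,
        Request::Write(n) => n >= grant,
        Request::Admin(_) => false, // never satisfied by a Write grant
    }
}

fn main() {
    assert!(global_write_satisfies(10, Request::Write(15))); // allowed
    assert!(!global_write_satisfies(10, Request::Write(5))); // denied
    assert!(!global_write_satisfies(10, Request::Admin(0))); // denied
}
```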
Client Workflow
From the client's perspective, the bootstrap process follows these steps:
1. Initial Bootstrap Attempt
The client initiates a bootstrap request when it needs access to a synchronized database:
client_sync.sync_with_peer_for_bootstrap(
&server_address,
&tree_id,
"client_device_key", // Client's key name
Permission::Write(5) // Requested permission level
).await
2. Response Handling
The client must handle different response scenarios:
- Global Wildcard Permission Approved:
  - Request succeeds immediately
  - Client gains access via global permission
  - No individual key added to database
  - Can proceed with normal operations
- Manual Approval Required (default):
  - Request fails with an error
  - Error indicates request is "pending"
  - Bootstrap request is queued for admin review
3. Waiting for Approval
While the request is pending, the client has several options:
- Polling Strategy: Periodically retry sync operations
- Event-Based: Wait for notification from server (future enhancement)
- User-Triggered: Let user manually retry when they expect approval
4. After Admin Decision
If Approved:
- The initial `sync_with_peer_for_bootstrap()` will still return an error
- Client must use normal `sync_with_peer()` to access the database
- Once synced, client can load and use the database normally
If Rejected:
- All sync attempts continue to fail
- Client remains unable to access the database
- May submit a new request with different parameters if appropriate
5. Retry Logic Example
async fn bootstrap_with_retry(
client_sync: &mut Sync,
server_addr: &str,
tree_id: &ID,
key_name: &str,
) -> Result<()> {
// Initial bootstrap request
if let Err(_) = client_sync.sync_with_peer_for_bootstrap(
server_addr, tree_id, key_name, Permission::Write(5)
).await {
println!("Bootstrap request pending approval...");
// Poll for approval (with backoff)
for attempt in 0..10 {
tokio::time::sleep(Duration::from_secs(30 * (attempt + 1))).await;
// Try normal sync after potential approval
if client_sync.sync_with_peer(server_addr, Some(tree_id)).await.is_ok() {
println!("Access granted!");
return Ok(());
}
}
return Err("Bootstrap request timeout or rejected".into());
}
Ok(()) // Auto-approved
}
Usage Examples
Manual Approval Workflow
For administrators managing bootstrap requests:
// 1. List pending requests
let pending = sync.pending_bootstrap_requests()?;
for (request_id, request) in pending {
println!("Request {}: {} wants {} access to tree {}",
request_id,
request.requesting_key_name,
request.requested_permission,
request.tree_id
);
}
// 2. Approve a request
sync.approve_bootstrap_request(
"bootstrap_a1b2c3d4...",
"admin_key" // Your admin key name
)?;
// 3. Or reject a request
sync.reject_bootstrap_request(
"bootstrap_e5f6g7h8...",
"admin_key"
)?;
Complete Client Bootstrap Example
// Step 1: Initial bootstrap attempt with authentication
let bootstrap_result = client_sync.sync_with_peer_for_bootstrap(
&server_address,
&tree_id,
"my_device_key",
Permission::Write(5)
).await;
// Step 2: Handle the response based on approval method
match bootstrap_result {
Ok(_) => {
// Global wildcard permission granted immediate access
println!("Bootstrap approved via global permission! Access granted immediately.");
},
Err(e) => {
// Manual approval required
// The error indicates the request is pending
println!("Bootstrap request submitted, awaiting admin approval...");
// Step 3: Wait for admin to review and approve
// Options:
// a) Poll periodically
// b) Wait for out-of-band notification
// c) User-triggered retry
// Step 4: After admin approval, retry with normal sync
// (bootstrap sync will still fail, use regular sync instead)
tokio::time::sleep(Duration::from_secs(30)).await;
// After approval, normal sync will succeed
match client_sync.sync_with_peer(&server_address, Some(&tree_id)).await {
Ok(_) => {
println!("Access granted! Database synchronized.");
// Client can now load and use the database
let db = client_instance.load_database(&tree_id)?;
},
Err(_) => {
println!("Still pending or rejected. Check with admin.");
}
}
}
}
// Handling rejection scenario
// If the request was rejected, all sync attempts will continue to fail
// The client will need to submit a new bootstrap request if appropriate
Security Considerations
Trust Model
- Global Wildcard Permissions: Trusts any device that can reach the sync endpoint
  - Suitable for: Development, collaborative projects, public databases
  - Risk: Any device can gain the configured global permissions
  - Benefit: Simple, immediate access for authorized scenarios
- Manual Approval: Requires explicit admin action for each device
  - Suitable for: Production, sensitive data, controlled access scenarios
  - Benefit: Complete control over which devices gain access
  - Risk: Administrative overhead for each new device
Troubleshooting
Common Issues
- "Authentication required but not configured"
  - Cause: Sync handler cannot authenticate with the target database
  - Solution: Ensure proper key configuration for database operations
- "Invalid request state"
  - Cause: Attempting to approve/reject a non-pending request
  - Solution: Check the request status before the operation
Performance Considerations
- Sync database grows linearly with request count
- Request queries are indexed by ID
Sync Quick Reference
A concise reference for Eidetica's synchronization API with common usage patterns and code snippets.
Setup and Initialization
Basic Sync Setup
use eidetica::{Instance, backend::database::InMemory};
// Create database with sync enabled
let backend = Box::new(InMemory::new());
let instance = Instance::open(backend)?;
instance.enable_sync()?;
// Create and login user (generates authentication key)
instance.create_user("alice", None)?;
let mut user = instance.login_user("alice", None)?;
// Enable transport
let sync = instance.sync().unwrap();
sync.enable_http_transport()?;
sync.start_server_async("127.0.0.1:8080").await?;
Understanding BackgroundSync
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory};

fn main() -> eidetica::Result<()> {
    // Setup database instance with sync capability
    let backend = Box::new(InMemory::new());
    let db = Instance::open(backend)?;
    db.enable_sync()?;

    // The BackgroundSync engine starts automatically with transport
    let sync = db.sync().unwrap();
    sync.enable_http_transport()?; // Starts background thread

    // Background thread configuration and behavior:
    // - Command processing (immediate response to commits)
    // - Periodic sync operations (5 minute intervals)
    // - Retry queue processing (30 second intervals)
    // - Connection health checks (60 second intervals)

    // All sync operations are automatic - no manual queue management needed
    println!("BackgroundSync configured with automatic operation timers");
    Ok(())
}
```
Declarative Sync API (Recommended)
Register Sync Peer
Declare sync intent with automatic background synchronization:
```rust
extern crate eidetica;
use eidetica::sync::{SyncPeerInfo, Address};
use eidetica::{Instance, backend::database::InMemory, crdt::Doc};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.enable_sync()?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let default_key = user.get_default_key()?;
    let db = user.create_database(Doc::new(), &default_key)?;
    let tree_id = db.root_id().clone();
    let sync = instance.sync().expect("Sync enabled");
    let peer_pubkey = "ed25519:abc123".to_string();

    // Register a peer for persistent sync
    let handle = sync.register_sync_peer(SyncPeerInfo {
        peer_pubkey,
        tree_id,
        addresses: vec![Address {
            transport_type: "http".to_string(),
            address: "http://peer.example.com:8080".to_string(),
        }],
        auth: None,
        display_name: Some("Peer Device".to_string()),
    })?;

    // Background sync engine now handles synchronization automatically
    Ok(())
}
```
Monitor Sync Status
// Check current status
let status = handle.status()?;
println!("Has local data: {}", status.has_local_data);
// Wait for initial bootstrap
handle.wait_for_initial_sync().await?;
// Add more address hints
handle.add_address(Address {
transport_type: "iroh".to_string(),
address: "iroh://node_id".to_string(),
})?;
Legacy Sync API
Authenticated Bootstrap (Recommended for New Databases)
// For new devices joining existing databases with authentication
sync.sync_with_peer_for_bootstrap(
"peer.example.com:8080",
&tree_id,
"device_key", // Local authentication key
eidetica::auth::Permission::Write // Requested permission level
).await?;
// This automatically:
// 1. Connects to peer and performs handshake
// 2. Requests database access with specified permission level
// 3. Receives auto-approved access (or manual approval in production)
// 4. Downloads complete database state
// 5. Grants authenticated write access
Simplified Sync (Legacy/Existing Databases)
// Single call handles connection, handshake, and sync detection
sync.sync_with_peer("peer.example.com:8080", Some(&tree_id)).await?;
// This automatically:
// 1. Connects to peer and performs handshake
// 2. Bootstraps database if you don't have it locally
// 3. Syncs incrementally if you already have the database
// 4. Handles peer registration internally
Database Discovery
// Discover available databases on a peer
let available_trees = sync.discover_peer_trees("peer.example.com:8080").await?;
for tree in available_trees {
println!("Available: {} ({} entries)", tree.tree_id, tree.entry_count);
}
// Bootstrap from discovered database
if let Some(tree) = available_trees.first() {
sync.sync_with_peer("peer.example.com:8080", Some(&tree.tree_id)).await?;
}
Manual Peer Registration (Advanced)
// Register peer manually (for advanced use cases)
let peer_key = "ed25519:abc123...";
sync.register_peer(peer_key, Some("Alice's Device"))?;
// Add addresses
sync.add_peer_address(peer_key, Address::http("192.168.1.100:8080")?)?;
sync.add_peer_address(peer_key, Address::iroh("iroh://peer_id")?)?;
// Use low-level sync with registered peer
sync.sync_tree_with_peer(&peer_key, &tree_id).await?;
// Note: Manual registration is usually unnecessary
// The sync_with_peer() method handles registration automatically
Peer Status Management
// List all peers
let peers = db.sync()?.list_peers()?;
for peer in peers {
println!("{}: {} ({})",
peer.pubkey,
peer.display_name.unwrap_or("Unknown".to_string()),
peer.status
);
}
// Get specific peer info
if let Some(peer) = db.sync()?.get_peer_info(&peer_key)? {
println!("Status: {:?}", peer.status);
println!("Addresses: {:?}", peer.addresses);
}
// Update peer status
db.sync()?.update_peer_status(&peer_key, PeerStatus::Inactive)?;
Database Synchronization
Create and Share Database
```rust
extern crate eidetica;
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::DocStore};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.enable_sync()?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;

    // Create a database to share
    let mut settings = Doc::new();
    settings.set("name", "My Chat Room");
    settings.set("description", "A room for team discussions");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;
    let tree_id = database.root_id();

    // Add some initial data
    let op = database.new_transaction()?;
    let store = op.get_store::<DocStore>("messages")?;
    store.set("welcome", "Welcome to the room!")?;
    op.commit()?;

    // Share the tree_id with others
    println!("Room ID: {}", tree_id);
    Ok(())
}
```
Bootstrap from Shared Database
// Join someone else's database using the tree_id
let room_id = "abc123..."; // Received from another user
sync.sync_with_peer("peer.example.com:8080", Some(&room_id)).await?;
// You now have the full database locally
let database = db.load_database(&room_id)?;
let op = database.new_transaction()?;
let store = op.get_store::<DocStore>("messages")?;
println!("Welcome message: {}", store.get_string("welcome")?);
Ongoing Synchronization
// All changes automatically sync after bootstrap
let op = database.new_transaction()?;
let store = op.get_store::<DocStore>("messages")?;
store.set("my_message", "Hello everyone!")?;
op.commit()?; // Automatically syncs to all connected peers
// Manually sync to get latest changes
sync.sync_with_peer("peer.example.com:8080", Some(&tree_id)).await?;
Advanced: Manual Sync Relationships
// For fine-grained control (usually not needed)
sync.add_tree_sync(&peer_key, &tree_id)?;
// List synced databases for peer
let databases = sync.get_peer_trees(&peer_key)?;
// List peers syncing a database
let peers = sync.get_tree_peers(&tree_id)?;
// Remove sync relationship
sync.remove_tree_sync(&peer_key, &tree_id)?;
Data Operations (Auto-Sync)
Basic Data Changes
use eidetica::store::DocStore;
// Any database operation automatically triggers sync
let op = database.new_transaction()?;
let store = op.get_store::<DocStore>("data")?;
store.set("message", "Hello World")?;
store.set_path("user.name", "Alice")?;
store.set_path("user.age", 30)?;
// Commit triggers sync callbacks automatically
op.commit()?; // Entries queued for sync to all configured peers
Bulk Operations
// Multiple operations in single commit
let op = database.new_transaction()?;
let store = op.get_store::<DocStore>("data")?;
for i in 0..100 {
store.set(&format!("item_{}", i), &format!("value_{}", i))?;
}
// Single commit, single sync entry
op.commit()?;
Monitoring and Diagnostics
Server Control
// Start/stop sync server
let sync = db.sync()?;
sync.start_server("127.0.0.1:8080")?;
// Check server status
if sync.is_server_running() {
let addr = sync.get_server_address()?;
println!("Server running at: {}", addr);
}
// Stop server
sync.stop_server()?;
Sync State Tracking
// Get sync state manager
let op = db.sync()?.sync_tree().new_operation()?;
let state_manager = SyncStateManager::new(&op);
// Get sync cursor for peer-database relationship
let cursor = state_manager.get_sync_cursor(&peer_key, &tree_id)?;
if let Some(cursor) = cursor {
println!("Last synced: {:?}", cursor.last_synced_entry);
println!("Total synced: {}", cursor.total_synced_count);
}
// Get peer metadata
let metadata = state_manager.get_sync_metadata(&peer_key)?;
if let Some(meta) = metadata {
println!("Successful syncs: {}", meta.successful_sync_count);
println!("Failed syncs: {}", meta.failed_sync_count);
}
Sync State Tracking
use eidetica::sync::state::SyncStateManager;
// Get sync database operation
let op = sync.sync_tree().new_operation()?;
let state_manager = SyncStateManager::new(&op);
// Check sync cursor
let cursor = state_manager.get_sync_cursor(&peer_key, &tree_id)?;
println!("Last synced: {:?}", cursor.last_synced_entry);
println!("Total synced: {}", cursor.total_synced_count);
// Check sync metadata
let metadata = state_manager.get_sync_metadata(&peer_key)?;
println!("Success rate: {:.2}%", metadata.sync_success_rate() * 100.0);
println!("Avg duration: {:.1}ms", metadata.average_sync_duration_ms);
// Get recent sync history
let history = state_manager.get_sync_history(&peer_key, Some(10))?;
for entry in history {
println!("Sync {}: {} entries in {:.1}ms",
entry.sync_id, entry.entries_count, entry.duration_ms);
}
Error Handling
Common Error Patterns
use eidetica::sync::SyncError;
// Connection errors
match sync.connect_to_peer(&addr).await {
Ok(peer_key) => println!("Connected: {}", peer_key),
Err(e) if e.is_sync_error() => {
match e.sync_error().unwrap() {
SyncError::HandshakeFailed(msg) => {
eprintln!("Handshake failed: {}", msg);
// Retry with different address or check credentials
},
SyncError::NoTransportEnabled => {
eprintln!("Enable transport first");
sync.enable_http_transport()?;
},
SyncError::PeerNotFound(key) => {
eprintln!("Peer {} not registered", key);
// Register peer first
},
_ => eprintln!("Other sync error: {}", e),
}
},
Err(e) => eprintln!("Non-sync error: {}", e),
}
Monitoring Sync Health
// Check server status
if !sync.is_server_running() {
eprintln!("Warning: Sync server not running");
}
// Monitor peer connectivity
let peers = sync.list_peers()?;
for peer in peers {
if peer.status != PeerStatus::Active {
eprintln!("Warning: Peer {} is {}", peer.pubkey, peer.status);
}
}
// Sync happens automatically, but you can monitor state
// via the SyncStateManager for diagnostics
Configuration Examples
Development Setup
// Fast, responsive sync for development
// Enable HTTP transport for easy debugging
db.sync()?.enable_http_transport()?;
db.sync()?.start_server("127.0.0.1:8080")?;
// Connect to local test peer
let addr = Address::http("127.0.0.1:8081")?;
let peer = db.sync()?.connect_to_peer(&addr).await?;
Production Setup
// Use Iroh for production deployments (defaults to n0's relay servers)
db.sync()?.enable_iroh_transport()?;
// Or configure for specific environments:
use iroh::RelayMode;
use eidetica::sync::transports::iroh::IrohTransport;
// Custom relay server (e.g., enterprise deployment)
let relay_url: iroh::RelayUrl = "https://relay.example.com".parse()?;
let relay_node = iroh::RelayNode {
url: relay_url,
quic: Some(Default::default()),
};
let transport = IrohTransport::builder()
.relay_mode(RelayMode::Custom(iroh::RelayMap::from_iter([relay_node])))
.build()?;
db.sync()?.enable_iroh_transport_with_config(transport)?;
// Connect to peers
let addr = Address::iroh(peer_node_id)?;
let peer = db.sync()?.connect_to_peer(&addr).await?;
// Sync happens automatically:
// - Immediate on commit
// - Retry with exponential backoff
// - Periodic sync every 5 minutes
Multi-Database Setup
// Run multiple sync-enabled databases
let db1 = Instance::open(Box::new(InMemory::new()))?;
db1.enable_sync()?;
db1.sync()?.enable_http_transport()?;
db1.sync()?.start_server("127.0.0.1:8080")?;
let db2 = Instance::open(Box::new(InMemory::new()))?;
db2.enable_sync()?;
db2.sync()?.enable_http_transport()?;
db2.sync()?.start_server("127.0.0.1:8081")?;
// Connect them together
let addr = Address::http("127.0.0.1:8080")?;
let peer = db2.sync()?.connect_to_peer(&addr).await?;
Testing Patterns
Testing with Iroh (No Relays)
#[tokio::test]
async fn test_iroh_sync_local() -> Result<()> {
use iroh::RelayMode;
use eidetica::sync::transports::iroh::IrohTransport;
// Configure Iroh for local testing (no relay servers)
let transport1 = IrohTransport::builder()
.relay_mode(RelayMode::Disabled)
.build()?;
let transport2 = IrohTransport::builder()
.relay_mode(RelayMode::Disabled)
.build()?;
// Setup databases with local Iroh transport
let db1 = Instance::open(Box::new(InMemory::new()))?;
db1.enable_sync()?;
db1.sync()?.enable_iroh_transport_with_config(transport1)?;
db1.sync()?.start_server("ignored")?; // Iroh manages its own addresses
let db2 = Instance::open(Box::new(InMemory::new()))?;
db2.enable_sync()?;
db2.sync()?.enable_iroh_transport_with_config(transport2)?;
db2.sync()?.start_server("ignored")?;
// Get the serialized NodeAddr (includes direct addresses)
let addr1 = db1.sync()?.get_server_address()?;
let addr2 = db2.sync()?.get_server_address()?;
// Connect peers using full NodeAddr info
let peer1 = db2.sync()?.connect_to_peer(&Address::iroh(&addr1)).await?;
let peer2 = db1.sync()?.connect_to_peer(&Address::iroh(&addr2)).await?;
// Now they can sync directly via P2P
Ok(())
}
Mock Peer Setup (HTTP)
#[tokio::test]
async fn test_sync_between_peers() -> Result<()> {
// Setup first peer
let instance1 = Instance::open(Box::new(InMemory::new()))?;
instance1.enable_sync()?;
instance1.create_user("peer1", None)?;
let mut user1 = instance1.login_user("peer1", None)?;
instance1.sync()?.enable_http_transport()?;
instance1.sync()?.start_server("127.0.0.1:0")?; // Random port
let addr1 = instance1.sync()?.get_server_address()?;
// Setup second peer
let instance2 = Instance::open(Box::new(InMemory::new()))?;
instance2.enable_sync()?;
instance2.create_user("peer2", None)?;
let mut user2 = instance2.login_user("peer2", None)?;
instance2.sync()?.enable_http_transport()?;
// Connect peers
let addr = Address::http(&addr1)?;
let peer1_key = instance2.sync()?.connect_to_peer(&addr).await?;
instance2.sync()?.update_peer_status(&peer1_key, PeerStatus::Active)?;
// Setup sync relationship
let key1 = user1.get_default_key()?;
let tree1 = user1.create_database(Doc::new(), &key1)?;
let key2 = user2.get_default_key()?;
let tree2 = user2.create_database(Doc::new(), &key2)?;
instance2.sync()?.add_tree_sync(&peer1_key, &tree1.root_id().to_string())?;
// Test sync
let op1 = tree1.new_transaction()?;
let store1 = op1.get_store::<DocStore>("data")?;
store1.set("test", "value")?;
op1.commit()?;
// Wait for sync
tokio::time::sleep(Duration::from_secs(2)).await;
// Verify sync occurred
// ... verification logic
Ok(())
}
Best Practices Summary
✅ Do
- Use `register_sync_peer()` for persistent sync relationships (declarative API)
- Use `sync_with_peer()` for one-off sync operations (legacy API)
- Enable sync before creating databases you want to synchronize
- Use Iroh transport for production deployments (better NAT traversal)
- Monitor sync status via `SyncHandle` for declarative sync
- Share tree IDs to allow others to bootstrap from your databases
- Handle network failures gracefully (sync system auto-retries)
- Let BackgroundSync handle retry logic automatically
- Leverage automatic peer registration when peers connect to your server
❌ Don't
- Manually manage peers unless you need fine control (use declarative API instead)
- Remove peer relationships for databases you want to synchronize
- Manually manage sync queues (BackgroundSync handles this)
- Ignore sync errors in production code
- Use HTTP transport for high-volume production (prefer Iroh)
- Assume sync is instantaneous (it's eventually consistent)
- Mix APIs unnecessarily (pick declarative or legacy based on use case)
🚀 Sync Features
- Declarative sync API: Register intent, let the background engine handle sync
- Automatic peer registration: Incoming connections register automatically
- Status tracking: Monitor sync progress with `SyncHandle`
- Zero-state joining: Join rooms/databases without any local setup
- Automatic protocol detection: Bootstrap vs incremental sync handled automatically
- Database discovery: Find available databases on peers
- Bidirectional sync: Both devices can share and receive databases
- Tree/peer relationship tracking: Automatic relationship management
🔧 Troubleshooting Checklist
- Sync not working?
  - Check that the transport is enabled and the server started
  - Verify peer status is `Active`
  - Confirm database sync relationships are configured
  - Check network connectivity
- Performance issues?
  - Consider using the Iroh transport
  - Check for network bottlenecks
  - Verify the retry queue isn't growing unbounded
  - Monitor peer connectivity status
- Memory usage high?
  - Check for dead or unresponsive peers
  - Verify the retry queue is processing correctly
  - Consider restarting sync to clear state
- Sync delays?
  - Remember that sync is triggered immediately on commit, but delivery is eventually consistent
  - Check whether entries are stuck in the retry queue
  - Verify the network is stable
  - Check peer responsiveness
Logging
Eidetica uses the tracing crate for structured logging throughout the library. This provides flexible, performant logging that library users can configure to their needs.
Quick Start
Eidetica uses the tracing crate, which means you can attach any subscriber to capture and format logs. The simplest approach is using tracing-subscriber:
```toml
[dependencies]
eidetica = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
```
```rust
use tracing_subscriber::EnvFilter;

fn main() -> eidetica::Result<()> {
    // Initialize a tracing subscriber to see Eidetica logs
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();

    // Now all Eidetica operations will log according to RUST_LOG
    // ...
    Ok(())
}
```
You can customize formatting, filtering, and output destinations. See the tracing-subscriber documentation for advanced configuration options.
Configuring Log Levels
Control logging verbosity using the RUST_LOG environment variable:
# Show only errors
RUST_LOG=eidetica=error cargo run
# Show info messages and above
RUST_LOG=eidetica=info cargo run
# Show debug messages for sync module
RUST_LOG=eidetica::sync=debug cargo run
# Show all trace messages (very verbose)
RUST_LOG=eidetica=trace cargo run
Log Levels in Eidetica
Eidetica uses the following log levels:
- ERROR: Critical errors that prevent operations from completing
  - Failed database operations
  - Network errors during sync
  - Authentication failures
- WARN: Important warnings that don't prevent operation
  - Retry operations after failures
  - Invalid configuration detected
  - Deprecated feature usage
- INFO: High-level operational messages
  - Sync server started/stopped
  - Successful synchronization with peers
  - Database loaded/saved
- DEBUG: Detailed operational information
  - Sync protocol details
  - Database synchronization progress
  - Hook execution
- TRACE: Very detailed trace information
  - Individual entry processing
  - Detailed CRDT operations
  - Network protocol messages
Using Eidetica with Logging
Once you've initialized a tracing subscriber, all Eidetica operations will automatically emit structured logs that you can capture and filter:
```rust
use eidetica::{Instance, backend::database::InMemory, crdt::Doc};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;

    // Create and login user - this will log at INFO level
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;

    // Create a database - this will log at INFO level
    let mut settings = Doc::new();
    settings.set("name", "my_database");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Operations will emit logs at appropriate levels
    // Use RUST_LOG to control what you see
    Ok(())
}
```
Performance Considerations
The tracing crate is designed to have minimal overhead when logging is disabled: a disabled log statement costs only a cheap runtime check, and statements can be removed entirely at compile time via tracing's static max-level features.
For performance-critical code paths, Eidetica uses appropriate log levels:
- Hot paths use the `trace!` level to avoid overhead in production
- Synchronization operations use `debug!` for detailed tracking
- Only important events use `info!` and above
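If you need disabled statements removed from the binary entirely, tracing's static max-level feature flags can cap verbosity at compile time. A minimal sketch (pick the feature matching your needs):

```toml
[dependencies]
# Caps release builds at INFO so trace!/debug! compile to no-ops
tracing = { version = "0.1", features = ["release_max_level_info"] }
```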
Integration with Observability Tools
The tracing ecosystem supports various backends for production observability:
- Console output: Default, human-readable format
- JSON output: For structured logging systems
- OpenTelemetry: For distributed tracing
- Jaeger/Zipkin: For trace visualization
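For instance, switching from the default console format to JSON output is a one-line change to the subscriber (a sketch; requires tracing-subscriber's `json` feature):

```rust
use tracing_subscriber::EnvFilter;

fn main() {
    // Emit one JSON object per event - suitable for structured logging pipelines
    tracing_subscriber::fmt()
        .json()
        .with_env_filter(EnvFilter::from_default_env())
        .init();
}
```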
See the tracing documentation for more advanced integration options.
Developer Walkthrough: Building with Eidetica
This guide walks through the Todo Example (examples/todo/src/main.rs) to explain Eidetica's core concepts. The example is a simple command-line todo app that demonstrates databases, transactions, stores, and Y-CRDT integration.
Core Concepts
The Todo example demonstrates Eidetica's key components working together in a real application.
1. The Database Backend (Instance)
The Instance is your main entry point. It wraps a storage backend and manages users and databases.
The Todo example implements load_or_create_instance() to handle loading existing backends or creating new ones:
fn load_or_create_instance(path: &PathBuf) -> Result<Instance> {
let instance = if path.exists() {
let backend = InMemory::load_from_file(path)?;
Instance::open(Box::new(backend))?
} else {
let backend = InMemory::new();
Instance::open(Box::new(backend))?
};
println!("✓ Instance initialized");
Ok(instance)
}
This shows how the InMemory backend can persist to disk. Authentication is managed through the User system (see below).
2. Users (User)
Users provide authenticated access to databases. A User manages signing keys and database access. The Todo example creates a passwordless user for simplicity:
fn get_or_create_user(instance: &Instance) -> Result<User> {
let username = "todo-user";
// Try to login first
match instance.login_user(username, None) {
Ok(user) => {
println!("✓ Logged in as passwordless user: {username}");
Ok(user)
}
Err(e) if e.is_not_found() => {
// User doesn't exist, create it
println!("Creating new passwordless user: {username}");
instance.create_user(username, None)?;
let user = instance.login_user(username, None)?;
println!("✓ Created and logged in as passwordless user: {username}");
Ok(user)
}
Err(e) => Err(e),
}
}
3. Databases (Database)
A Database is a primary organizational unit within an Instance. Think of it somewhat like a schema or a logical database within a larger instance. It acts as a container for related data, managed through Stores. Databases provide versioning and history tracking for the data they contain.
The Todo example uses a single Database named "todo", discovered through the User API:
fn load_or_create_todo_database(user: &mut User) -> Result<Database> {
let database_name = "todo";
// Try to find the database by name
let database = match user.find_database(database_name) {
Ok(mut databases) => {
databases.pop().unwrap() // unwrap is safe because find_database errors if empty
}
Err(e) if e.is_not_found() => {
// If not found, create a new one
println!("No existing todo database found, creating a new one...");
let mut settings = Doc::new();
settings.set("name", database_name);
// Get the default key
let default_key = user.get_default_key()?;
// User API automatically configures the database with user's keys
user.create_database(settings, &default_key)?
}
Err(e) => return Err(e),
};
Ok(database)
}
This shows how User::find_database() searches for existing databases by name, and User::create_database() creates new authenticated databases.
4. Transactions and Stores
All data modifications happen within a Transaction. Transactions ensure atomicity and are automatically authenticated using the database's default signing key.
Within a transaction, you access Stores - flexible containers for different types of data. The Todo example uses Table<Todo> to store todo items with unique IDs.
5. The Todo Data Structure
The example defines a Todo struct that must implement Serialize and Deserialize to work with Eidetica:
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Todo {
pub title: String,
pub completed: bool,
pub created_at: DateTime<Utc>,
pub completed_at: Option<DateTime<Utc>>,
}
impl Todo {
pub fn new(title: String) -> Self {
Self {
title,
completed: false,
created_at: Utc::now(),
completed_at: None,
}
}
pub fn complete(&mut self) {
self.completed = true;
self.completed_at = Some(Utc::now());
}
}
6. Adding a Todo
The add_todo() function shows how to insert data into a Table store:
fn add_todo(database: &Database, title: String) -> Result<()> {
// Start an atomic transaction (uses default auth key)
let op = database.new_transaction()?;
// Get a handle to the 'todos' Table store
let todos_store = op.get_store::<Table<Todo>>("todos")?;
// Create a new todo
let todo = Todo::new(title);
// Insert the todo into the Table
// The Table will generate a unique ID for it
let todo_id = todos_store.insert(todo)?;
// Commit the transaction
op.commit()?;
println!("Added todo with ID: {todo_id}");
Ok(())
}
7. Updating a Todo
The complete_todo() function demonstrates reading and updating data:
fn complete_todo(database: &Database, id: &str) -> Result<()> {
// Start an atomic transaction (uses default auth key)
let op = database.new_transaction()?;
// Get a handle to the 'todos' Table store
let todos_store = op.get_store::<Table<Todo>>("todos")?;
// Get the todo from the Table
let mut todo = todos_store.get(id)?;
// Mark the todo as complete
todo.complete();
// Update the todo in the Table
todos_store.set(id, todo)?;
// Commit the transaction
op.commit()?;
Ok(())
}
These examples show the typical pattern: start a transaction, get a store handle, perform operations, and commit.
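The walkthrough's list command follows the same pattern on the read side. A minimal sketch, assuming the `Todo` type above and the `search` method shown later in the code examples:

```rust
fn list_todos(database: &Database) -> Result<()> {
    // Reads also go through a transaction for a consistent view
    let op = database.new_transaction()?;
    let todos_store = op.get_store::<Table<Todo>>("todos")?;

    // An always-true predicate returns every (id, todo) pair
    for (id, todo) in todos_store.search(|_| true)? {
        let status = if todo.completed { "x" } else { " " };
        println!("[{status}] {id}: {}", todo.title);
    }
    Ok(())
}
```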
8. Y-CRDT Integration (YDoc)
The example also uses YDoc stores for user information and preferences. Y-CRDTs are designed for collaborative editing:
fn set_user_info(
database: &Database,
name: Option<&String>,
email: Option<&String>,
bio: Option<&String>,
) -> Result<()> {
// Start an atomic transaction (uses default auth key)
let op = database.new_transaction()?;
// Get a handle to the 'user_info' YDoc store
let user_info_store = op.get_store::<YDoc>("user_info")?;
// Update user information using the Y-CRDT document
user_info_store.with_doc_mut(|doc| {
let user_info_map = doc.get_or_insert_map("user_info");
let mut txn = doc.transact_mut();
if let Some(name) = name {
user_info_map.insert(&mut txn, "name", name.clone());
}
if let Some(email) = email {
user_info_map.insert(&mut txn, "email", email.clone());
}
if let Some(bio) = bio {
user_info_map.insert(&mut txn, "bio", bio.clone());
}
Ok(())
})?;
// Commit the transaction
op.commit()?;
Ok(())
}
The example demonstrates using different store types in one database:
- "todos" (
Table<Todo>): Stores todo items with automatic ID generation - "user_info" (
YDoc): Stores user profile using Y-CRDT Maps - "user_prefs" (
YDoc): Stores preferences using Y-CRDT Maps
This shows how you can choose the most appropriate data structure for each type of data.
Running the Todo Example
To see these concepts in action, you can run the Todo example:
# Navigate to the example directory
cd examples/todo
# Build the example
cargo build
# Run commands (this will create todo_db.json)
cargo run -- add "Learn Eidetica"
cargo run -- list
# Note the ID printed
cargo run -- complete <id_from_list>
cargo run -- list
Refer to the example's README.md and test.sh for more usage details.
This walkthrough provides a starting point. Explore the Eidetica documentation and other examples to learn about more advanced features like different store types, history traversal, and distributed capabilities.
Code Examples
This page provides focused code snippets for common tasks in Eidetica.
Assumes basic setup like `use eidetica::{Instance, Database, Error, ...};` and error handling (`?`) for brevity.
1. Initializing the Database (Instance)
```rust
use eidetica::{backend::database::InMemory, Instance, crdt::Doc};

fn main() -> eidetica::Result<()> {
    // Use a temporary file for testing
    let temp_dir = std::env::temp_dir();
    let db_path = temp_dir.join("eidetica_example_init.json");

    // First create and save a test database to demonstrate loading
    let backend = InMemory::new();
    let test_instance = Instance::open(Box::new(backend))?;
    test_instance.create_user("alice", None)?;
    let mut test_user = test_instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "example_db");
    let test_key = test_user.get_default_key()?;
    let _database = test_user.create_database(settings, &test_key)?;
    let database_guard = test_instance.backend();
    if let Some(in_memory) = database_guard.as_any().downcast_ref::<InMemory>() {
        in_memory.save_to_file(&db_path)?;
    }

    // Option A: Create a new, empty in-memory database
    let database_new = InMemory::new();
    let _db_new = Instance::open(Box::new(database_new))?;

    // Option B: Load from a previously saved file
    if db_path.exists() {
        match InMemory::load_from_file(&db_path) {
            Ok(database_loaded) => {
                let _db_loaded = Instance::open(Box::new(database_loaded))?;
                println!("Database loaded successfully.");
                // Use db_loaded
            }
            Err(e) => {
                eprintln!("Error loading database: {}", e);
                // Handle error, maybe create new
            }
        }
    } else {
        println!("Database file not found, creating new.");
        // Use db_new from Option A
    }

    // Clean up the temporary file
    if db_path.exists() {
        std::fs::remove_file(&db_path).ok();
    }
    Ok(())
}
```
2. Creating or Loading a Database
```rust
use eidetica::{Instance, backend::database::InMemory, crdt::Doc};

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;

    let tree_name = "my_app_data";
    let database = match user.find_database(tree_name) {
        Ok(mut databases) => {
            println!("Found existing database: {}", tree_name);
            databases.pop().unwrap() // Assume the first one is correct
        }
        Err(e) if e.is_not_found() => {
            println!("Creating new database: {}", tree_name);
            let mut doc = Doc::new();
            doc.set("name", tree_name);
            let default_key = user.get_default_key()?;
            user.create_database(doc, &default_key)?
        }
        Err(e) => return Err(e.into()), // Propagate other errors
    };
    println!("Using Database with root ID: {}", database.root_id());
    Ok(())
}
```
3. Writing Data (DocStore Example)
```rust
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::DocStore};

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Start an authenticated transaction (automatically uses the database's default key)
    let op = database.new_transaction()?;
    {
        // Get the DocStore store handle (scoped)
        let config_store = op.get_store::<DocStore>("configuration")?;

        // Set some values
        config_store.set("api_key", "secret-key-123")?;
        config_store.set("retry_count", "3")?;

        // Overwrite a value
        config_store.set("api_key", "new-secret-456")?;

        // Remove a value
        config_store.delete("old_setting")?; // Ok if it doesn't exist
    }

    // Commit the changes atomically
    let entry_id = op.commit()?;
    println!("DocStore changes committed in entry: {}", entry_id);
    Ok(())
}
```
4. Writing Data (Table Example)
```rust
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::Table};
use serde::{Serialize, Deserialize};

#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
struct Task {
    description: String,
    completed: bool,
}

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Start an authenticated transaction (automatically uses the database's default key)
    let op = database.new_transaction()?;
    let inserted_id;
    {
        // Get the Table handle
        let tasks_store = op.get_store::<Table<Task>>("tasks")?;

        // Insert a new task
        let task1 = Task { description: "Buy milk".to_string(), completed: false };
        inserted_id = tasks_store.insert(task1)?;
        println!("Inserted task with ID: {}", inserted_id);

        // Insert another task
        let task2 = Task { description: "Write docs".to_string(), completed: false };
        tasks_store.insert(task2)?;

        // Update the first task (requires getting it first if you only have the ID)
        if let Ok(mut task_to_update) = tasks_store.get(&inserted_id) {
            task_to_update.completed = true;
            tasks_store.set(&inserted_id, task_to_update)?;
            println!("Updated task {}", inserted_id);
        } else {
            eprintln!("Task {} not found for update?", inserted_id);
        }

        // Delete a task (if you knew its ID)
        // tasks_store.delete(&some_other_id)?;
    }

    // Commit all inserts/updates/deletes
    let entry_id = op.commit()?;
    println!("Table changes committed in entry: {}", entry_id);
    Ok(())
}
```
5. Reading Data (DocStore Viewer)
```rust
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::DocStore};

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Get a read-only viewer for the latest state
    let config_viewer = database.get_store_viewer::<DocStore>("configuration")?;

    match config_viewer.get("api_key") {
        Ok(api_key) => println!("Current API Key: {}", api_key),
        Err(e) if e.is_not_found() => println!("API Key not set."),
        Err(e) => return Err(e.into()),
    }

    match config_viewer.get("retry_count") {
        Ok(count_str) => {
            // Note: DocStore values can be various types
            if let Some(text) = count_str.as_text() {
                if let Ok(count) = text.parse::<u32>() {
                    println!("Retry Count: {}", count);
                }
            }
        }
        Err(_) => println!("Retry count not set or invalid."),
    }
    Ok(())
}
```
6. Reading Data (Table Viewer)
```rust
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::Table};
use serde::{Serialize, Deserialize};

#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
struct Task {
    description: String,
    completed: bool,
}

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    let op = database.new_transaction()?;
    let tasks_store = op.get_store::<Table<Task>>("tasks")?;
    let id_to_find = tasks_store.insert(Task { description: "Test task".to_string(), completed: false })?;
    op.commit()?;

    // Get a read-only viewer
    let tasks_viewer = database.get_store_viewer::<Table<Task>>("tasks")?;

    // Get a specific task by ID
    match tasks_viewer.get(&id_to_find) {
        Ok(task) => println!("Found task {}: {:?}", id_to_find, task),
        Err(e) if e.is_not_found() => println!("Task {} not found.", id_to_find),
        Err(e) => return Err(e.into()),
    }

    // Search for all tasks
    println!("\nAll Tasks:");
    match tasks_viewer.search(|_| true) {
        Ok(tasks) => {
            for (id, task) in tasks {
                println!("  ID: {}, Task: {:?}", id, task);
            }
        }
        Err(e) => eprintln!("Error searching tasks: {}", e),
    }
    Ok(())
}
```
7. Working with Nested Data (Path-Based Operations)
```rust
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::DocStore, path};

fn main() -> eidetica::Result<()> {
    // Setup database for testing
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Start an authenticated transaction (automatically uses the database's default key)
    let op = database.new_transaction()?;

    // Get the DocStore store handle
    let user_store = op.get_store::<DocStore>("users")?;

    // Using path-based operations to create and modify nested structures

    // Set profile information using paths - creates nested structure automatically
    user_store.set_path(path!("user123.profile.name"), "Jane Doe")?;
    user_store.set_path(path!("user123.profile.email"), "jane@example.com")?;

    // Set preferences using paths
    user_store.set_path(path!("user123.preferences.theme"), "dark")?;
    user_store.set_path(path!("user123.preferences.notifications"), "enabled")?;
    user_store.set_path(path!("user123.preferences.language"), "en")?;

    // Set additional nested configuration
    user_store.set_path(path!("config.database.host"), "localhost")?;
    user_store.set_path(path!("config.database.port"), "5432")?;

    // Commit the changes
    let entry_id = op.commit()?;
    println!("Nested data changes committed in entry: {}", entry_id);

    // Read back the nested data using path operations
    let viewer_op = database.new_transaction()?;
    let viewer_store = viewer_op.get_store::<DocStore>("users")?;

    // Get individual values using path operations
    let _name_value = viewer_store.get_path(path!("user123.profile.name"))?;
    let _email_value = viewer_store.get_path(path!("user123.profile.email"))?;
    let _theme_value = viewer_store.get_path(path!("user123.preferences.theme"))?;
    let _host_value = viewer_store.get_path(path!("config.database.host"))?;

    // Get the entire user object to verify the nested structure was created
    if let Ok(_user_data) = viewer_store.get("user123") {
        println!("User profile and preferences created successfully");
    }

    // Get the entire config object to verify the nested structure
    if let Ok(_config_data) = viewer_store.get("config") {
        println!("Configuration data created successfully");
    }

    println!("Path-based operations completed successfully");
    Ok(())
}
```
8. Working with Y-CRDT Documents (YDoc)
The YDoc store provides access to Y-CRDT (Yrs) documents for collaborative data structures. This requires the "y-crdt" feature flag.
```rust
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::YDoc};
use eidetica::y_crdt::{Map as YMap, Transact};

fn main() -> eidetica::Result<()> {
    // Setup database for testing
    let backend = InMemory::new();
    let instance = Instance::open(Box::new(backend))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "y_crdt_example");
    let default_key = user.get_default_key()?;
    let database = user.create_database(settings, &default_key)?;

    // Start an authenticated transaction (automatically uses the database's default key)
    let op = database.new_transaction()?;

    // Get the YDoc store handle
    let user_info_store = op.get_store::<YDoc>("user_info")?;

    // Writing to the Y-CRDT document
    user_info_store.with_doc_mut(|doc| {
        let user_info_map = doc.get_or_insert_map("user_info");
        let mut txn = doc.transact_mut();
        user_info_map.insert(&mut txn, "name", "Alice Johnson");
        user_info_map.insert(&mut txn, "email", "alice@example.com");
        user_info_map.insert(&mut txn, "bio", "Software developer");
        Ok(())
    })?;

    // Commit the transaction
    let entry_id = op.commit()?;
    println!("YDoc changes committed in entry: {}", entry_id);

    // Reading from the Y-CRDT document
    let read_op = database.new_transaction()?;
    let reader_store = read_op.get_store::<YDoc>("user_info")?;
    reader_store.with_doc(|doc| {
        let user_info_map = doc.get_or_insert_map("user_info");
        let txn = doc.transact();
        println!("User Information:");
        if let Some(name) = user_info_map.get(&txn, "name") {
            let name_str = name.to_string(&txn);
            println!("Name: {name_str}");
        }
        if let Some(email) = user_info_map.get(&txn, "email") {
            let email_str = email.to_string(&txn);
            println!("Email: {email_str}");
        }
        if let Some(bio) = user_info_map.get(&txn, "bio") {
            let bio_str = bio.to_string(&txn);
            println!("Bio: {bio_str}");
        }
        Ok(())
    })?;

    // Working with nested Y-CRDT maps
    let prefs_op = database.new_transaction()?;
    let prefs_store = prefs_op.get_store::<YDoc>("user_prefs")?;
    prefs_store.with_doc_mut(|doc| {
        let prefs_map = doc.get_or_insert_map("preferences");
        let mut txn = doc.transact_mut();
        prefs_map.insert(&mut txn, "theme", "dark");
        prefs_map.insert(&mut txn, "notifications", "enabled");
        prefs_map.insert(&mut txn, "language", "en");
        Ok(())
    })?;
    prefs_op.commit()?;

    // Reading preferences
    let prefs_read_op = database.new_transaction()?;
    let prefs_read_store = prefs_read_op.get_store::<YDoc>("user_prefs")?;
    prefs_read_store.with_doc(|doc| {
        let prefs_map = doc.get_or_insert_map("preferences");
        let txn = doc.transact();
        println!("User Preferences:");
        // Iterate over all preferences
        for (key, value) in prefs_map.iter(&txn) {
            let value_str = value.to_string(&txn);
            println!("{key}: {value_str}");
        }
        Ok(())
    })?;
    Ok(())
}
```
YDoc Features:
- Collaborative Editing: Y-CRDT documents provide conflict-free merging for concurrent modifications
- Rich Data Types: Support for Maps, Arrays, Text, and other Y-CRDT types
- Functional Interface: Access via `with_doc()` for reads and `with_doc_mut()` for writes
- Atomic Integration: Changes are staged within the Transaction and committed atomically
Use Cases for YDoc:
- User profiles and preferences (as shown in the todo example)
- Collaborative documents and shared state
- Real-time data synchronization
- Any scenario requiring conflict-free concurrent updates
9. Saving the Database (InMemory)
```rust
use eidetica::{backend::database::InMemory, Instance, crdt::Doc};

fn main() -> eidetica::Result<()> {
    // Create a test database
    let backend = InMemory::new();
    let instance = Instance::open(Box::new(backend))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "save_example");
    let default_key = user.get_default_key()?;
    let _database = user.create_database(settings, &default_key)?;

    // Use a temporary file for testing
    let temp_dir = std::env::temp_dir();
    let db_path = temp_dir.join("eidetica_save_example.json");

    // Save the database to a file
    let database_guard = instance.backend();
    // Downcast to the concrete InMemory type
    if let Some(in_memory_database) = database_guard.as_any().downcast_ref::<InMemory>() {
        match in_memory_database.save_to_file(&db_path) {
            Ok(_) => println!("Database saved successfully to {:?}", db_path),
            Err(e) => eprintln!("Error saving database: {}", e),
        }
    } else {
        eprintln!("Database is not InMemory, cannot save to file this way.");
    }

    // Clean up the temporary file
    if db_path.exists() {
        std::fs::remove_file(&db_path).ok();
    }
    Ok(())
}
```
Complete Example: Chat Application
For a full working example that demonstrates Eidetica in a real application, see the Chat Example in the repository.
The chat application showcases:
- User Management: Automatic passwordless user creation with key management
- Multiple Databases: Each chat room is a separate database
- Table Store: Messages stored with auto-generated IDs
- Multi-Transport Sync: HTTP for local testing, Iroh for P2P with NAT traversal
- Bootstrap Protocol: Automatic access requests when joining rooms
- Real-time Updates: Periodic message refresh with automatic sync
- TUI Interface: Interactive terminal UI using Ratatui
Key Architectural Concepts
The chat example demonstrates several advanced patterns:
1. User API with Automatic Key Management
// Initialize instance with sync enabled
let backend = InMemory::new();
let instance = Instance::create(Box::new(backend))?;
instance.enable_sync()?;
// Create passwordless user (or use existing)
let username = "alice";
let _ = instance.create_user(username, None);
// Login to get User session (handles key management automatically)
let user = instance.login_user(username, None)?;
// User API automatically manages cryptographic keys for databases
let default_key = user.get_default_key()?;
println!("User {} has key: {}", username, default_key);
2. Room Creation with Global Access
// Create a chat room (database) with settings
let mut settings = Doc::new();
settings.set("name", "Team Chat");
let key_id = user.get_default_key()?;
let database = user.create_database(settings, &key_id)?;
// Add global wildcard permission so anyone can join and write
let tx = database.new_transaction()?;
let settings_store = tx.get_settings()?;
let global_key = auth::AuthKey::active("*", auth::Permission::Write(10))?;
settings_store.set_auth_key("*", global_key)?;
tx.commit()?;
println!("Chat room created with ID: {}", database.root_id());
3. Message Storage with Table
use chrono::{DateTime, Utc};
use uuid::Uuid;
#[derive(Debug, Clone, Serialize, Deserialize)]
struct ChatMessage {
id: String,
author: String,
content: String,
timestamp: DateTime<Utc>,
}
impl ChatMessage {
fn new(author: String, content: String) -> Self {
Self {
id: Uuid::new_v4().to_string(),
author,
content,
timestamp: Utc::now(),
}
}
}
// Send a message to the chat room
let message = ChatMessage::new("alice".to_string(), "Hello, world!".to_string());
let op = database.new_transaction()?;
let messages_store = op.get_store::<Table<ChatMessage>>("messages")?;
messages_store.insert(message)?;
op.commit()?;
// Read all messages
let viewer_op = database.new_transaction()?;
let viewer_store = viewer_op.get_store::<Table<ChatMessage>>("messages")?;
let all_messages = viewer_store.search(|_| true)?;
for (_, msg) in all_messages {
println!("[{}] {}: {}", msg.timestamp.format("%H:%M:%S"), msg.author, msg.content);
}
4. Bootstrap Connection to Remote Room
// Join an existing room using bootstrap protocol
let room_address = "abc123def456@127.0.0.1:8080"; // From room creator
// Parse room address (format: room_id@server_address)
let parts: Vec<&str> = room_address.split('@').collect();
let room_id = eidetica::entry::ID::from(parts[0]);
let server_addr = parts[1];
// Enable sync transport
if let Ok(sync) = instance.sync() {
sync.enable_http_transport()?;
// Request access to the room (bootstrap protocol)
let key_id = user.get_default_key()?;
user.request_database_access(
&sync,
server_addr,
&room_id,
&key_id,
eidetica::auth::Permission::Write(10),
).await?;
// Register the database with User's key manager
user.track_database(eidetica::user::types::TrackedDatabase {
database_id: room_id.clone(),
key_id: key_id.clone(),
sync_settings: eidetica::user::types::SyncSettings {
sync_enabled: true,
sync_on_commit: true,
interval_seconds: None,
properties: std::collections::HashMap::new(),
},
})?;
// Open the synced database
let database = user.open_database(&room_id)?;
println!("Joined room successfully!");
}
5. Real-time Sync with Callbacks
// Automatic sync is configured via peer relationships
// When you add a peer for a database, commits automatically trigger sync
if let Ok(sync) = instance.sync() {
if let Ok(peers) = sync.list_peers() {
if let Some(peer) = peers.first() {
// Add tree sync relationship - this enables automatic sync on commit
sync.add_tree_sync(&peer.pubkey, &database.root_id()).await?;
println!("Automatic sync enabled for database");
}
}
}
// Manually trigger immediate sync for a specific database
sync.sync_with_peer(server_addr, Some(&database.root_id())).await?;
Running the Chat Example
# From the repository root
cd examples/chat
# Create a new room (default uses Iroh P2P transport)
cargo run -- --username alice
# Or use HTTP transport for local testing
cargo run -- --username alice --transport http
# Connect to an existing room
cargo run -- <room_address> --username bob
Creating a new room: When you run without a room address, the app will:
- Create a new room
- Display the room address that others can use to join
- Wait for you to press Enter before starting the chat interface
Example output:
🚀 Eidetica Chat Room Created!
📍 Room Address: abc123@127.0.0.1:54321
👤 Username: alice
Share this address with others to invite them to the chat.
Press Enter to start chatting...
Joining an existing room: When you provide a room address as the first argument, the app connects and starts the chat interface immediately.
Transport Options
HTTP Transport (--transport http):
- Simple client-server model for local networks
- Server binds to `127.0.0.1` with a random port
- Address format: `room_id@127.0.0.1:PORT`
- Best for testing and same-machine demos
Iroh Transport (--transport iroh, default):
- Peer-to-peer with built-in NAT traversal
- Uses QUIC protocol with relay servers
- Address format: `room_id@{node-info-json}`
- Best for internet connections across networks
Architecture Highlights
The chat example demonstrates production-ready patterns:
- Multi-database architecture: Each room is isolated with independent sync state
- User session management: Automatic key discovery and database registration
- Bootstrap protocol: Seamless joining of rooms with access requests
- Dual transport support: Flexible networking for different environments
- CRDT-based messages: Eventual consistency with deterministic ordering
- Automatic sync: Background synchronization triggered by commits via callbacks
See the full chat example documentation for detailed usage instructions, complete workflow examples, troubleshooting tips, and implementation details.
Overview
This section contains documentation for developers and those who want to understand the full system and technology behind Eidetica. It covers architecture, internals, and development practices.
If you want to contribute, start with the Contributing guide for development environment setup and workflow.
Architecture Overview
Eidetica is a decentralized database designed to "Remember Everything."
The system is built on a foundation of content-addressable entries organized in databases, with a pluggable backend system for storage. Entry objects are immutable and contain Tree/SubTree structures that form the Merkle-DAG, with integrated authentication using Ed25519 digital signatures to verify the integrity of the data and its history. Database and Store abstractions over these internal structures help to translate those concepts into something more familiar to developers.
See DAG Structure for details on the Merkle-DAG architecture.
API Reference
For detailed API documentation, see the rustdoc API reference (development version) or docs.rs/eidetica (stable releases).
Contributing
This guide covers setting up a local development environment for contributing to Eidetica.
Prerequisites
Eidetica uses Nix for reproducible development environments. Install Nix with flakes enabled, or use the Determinate Systems installer which enables flakes by default. The Nix flake provides pinned versions of all development tools: Rust toolchain, cargo-nextest, mdbook, formatters, and more.
If you want to skip Nix, a standard Rust toolchain should be sufficient. The main project is structured as a Cargo workspace.
Task Runner
Taskfile provides convenient commands for common workflows. Tasks wrap cargo, nix, and other tools as needed.
task --list # See all available tasks
Common Commands
| Command | Description |
|---|---|
| `task build` | Fast incremental build |
| `task test` | Run tests with cargo nextest |
| `task clippy` | Strict linting |
| `task fmt` | Multi-language formatting |
| `task ci:local` | Full local CI pipeline |
| `task ci:nix` | Nix CI pipeline |
Testing
| Command | Description |
|---|---|
| `task test` | Unit and integration tests via nextest |
| `task test:doc` | Code examples in /// doc comments |
| `task book:test` | Code examples in mdbook documentation |
Nix Commands
Direct Nix commands are available when needed:
| Command | Description |
|---|---|
| `nix develop` | Enter the development shell |
| `nix build` | Build the default package |
| `nix flake check` | Run all CI checks |
Binary caching via Cachix speeds up builds by providing pre-built dependencies.
Development Workflow
- Enter the dev shell: `nix develop` or use direnv
- Make changes
- Build: `task build`
- Test: `task test`
- Lint: `task clippy`
- Format: `task fmt`
- Run full CI locally before pushing: `task ci:local`
CI Integration
The same checks that run locally also run in CI. See CI/Build Infrastructure for details on the CI systems.
CI/Build Infrastructure
Eidetica uses a CI/build system with GitHub Actions, Forgejo CI, and Nix flakes.
The philosophy for CI is that compute resources are cheap and developer resources (my time) are expensive. CI is used for comprehensive testing, status reporting, security checks, dependency updates, documentation generation, and releasing. In the current setup some of these are run multiple times on several platforms to ensure compatibility and reliability across different environments.
Fuzz / simulation testing are planned for the future.
CI Systems
GitHub Actions
The primary CI runs on GitHub with these workflows:
- rust.yml: Main Rust CI pipeline (format, clippy, build, test, doc tests, book tests)
- nix.yml: Nix-based CI that mostly runs the same tests but inside the Nix sandbox
- security.yml: Weekly vulnerability scanning and dependency review
- coverage.yml: Code coverage tracking via Codecov
- deploy-docs.yml: Documentation deployment to GitHub Pages
- release-plz.yml: Automated releases and crates.io publishing
Forgejo CI
A dedicated Forgejo runner provides CI redundancy on Codeberg. The Forgejo workflows mirror the testing in the GitHub Actions setup with minor adaptations for the Forgejo environment.
Nix Flake
The Nix flake defines reproducible builds and CI checks that run identically locally and in CI:
- `nix build` - Build the default package
- `nix flake check` - Run all CI checks (audit, clippy, doc, test, etc.)
Binary caching via Cachix speeds up builds by providing pre-built dependencies.
For local development setup, see Contributing.
Terminology
Eidetica uses two naming schemes:
Internal Data Structures
Trees and Subtrees. These align with the names used inside of an Entry:
- TreeNode: Main tree node within an Entry (root ID, parent references, metadata)
- SubTreeNode: Named subtree nodes within an Entry (name, parents, data payload)
Use these when discussing Entry internals, Merkle-DAG structure, or serialized data format.
User-Facing Abstractions
- Database: Collection of entries with shared authentication and history
- Store: Typed data access (DocStore, Table, YDoc) operating on named subtrees
Use these in public APIs, user documentation, and error messages.
A Database is an abstraction over a Tree, and Stores are an abstraction over the Subtrees within.
DAG Structure
Eidetica organizes data in a layered Merkle-DAG called a Tree. A Tree consists of Entries that form the main DAG, and each Entry can contain data for multiple subtrees. Each subtree forms its own independent DAG across the Entries.
Each Entry is immutable and content-addressable - its ID is a cryptographic hash of its contents. Parent references are these secure hashes, forming the Merkle structure.
For simplicity, let's walk through an example Tree with 4 Entries.
Entries Contain Subtrees
An Entry is the atomic unit. Each Entry can contain data for zero or more named subtrees:
graph LR
subgraph E1[Entry 1]
E1_t1[table_1]
E1_t2[table_2]
end
subgraph E2[Entry 2]
E2_t1[table_1]
end
subgraph E3[Entry 3]
E3_t2[table_2]
end
subgraph E4[Entry 4]
E4_t1[table_1]
E4_t2[table_2]
end
Entry 1 and Entry 4 contain data for both subtrees. Entry 2 only modifies table_1. Entry 3 only modifies table_2.
Main Tree DAG
The Tree DAG connects Entries through parent references (hashes of parent Entries). Entry 2 and Entry 3 are created in parallel (both reference Entry 1's hash as their parent). Entry 4 merges the branches by listing both Entry 2 and Entry 3's hashes as parents:
graph LR
E1[Entry 1] --> E2[Entry 2]
E1 --> E3[Entry 3]
E2 --> E4[Entry 4]
E3 --> E4
This shows the branching and merging capability of the DAG structure.
Subtree DAGs
Each subtree forms its own DAG by following subtree-specific parent references. These can skip Entries that didn't modify that subtree.
table_1 DAG - Entry 3 is skipped (no table_1 data):
graph LR
E1[Entry 1] --> E2[Entry 2] --> E4[Entry 4]
table_2 DAG - Entry 2 is skipped (no table_2 data):
graph LR
E1[Entry 1] --> E3[Entry 3] --> E4[Entry 4]
The main tree branches and merges, but each subtree DAG remains linear because E2 and E3 modified different subtrees.
Atomic Cross-Subtree Edits
A Transaction creates a single Entry. This makes it the primitive for synchronized edits across multiple subtrees within a Tree.
In the example above, Entry 1 and Entry 4 modify both table_1 and table_2 in a single Entry. Because an Entry is atomic, you always see both edits or neither - there's no state where only one subtree's changes are visible. This enables reliable cross-subtree operations where related data must stay consistent.
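In API terms, one Transaction that touches two Stores yields exactly one Entry. A minimal sketch, assuming a `database` handle as in the user guide:

```rust
// Both stores are written in one Transaction, so both changes land in one Entry.
let op = database.new_transaction()?;
{
    let t1 = op.get_store::<DocStore>("table_1")?;
    let t2 = op.get_store::<DocStore>("table_2")?;
    t1.set("shared_key", "value_a")?;
    t2.set("shared_key", "value_b")?;
}
// Committing creates a single Entry carrying data for both subtrees.
let entry_id = op.commit()?;
```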
Sparse Verified Checkouts
Because subtree DAGs are independent, you can sync and verify just one subtree without the full tree data.
To verify table_1:
- Fetch only the Entries that contain `table_1` data (E1, E2, E4)
- Follow `table_1`'s parent chain to verify the complete history
- Entry 3 is not needed - it has no `table_1` data
This enables efficient partial sync while maintaining full cryptographic verification of the synced data.
Settings Example
An example of how this is used effectively is the design of settings for the Tree.
The settings, including authentication, are stored in the _settings subtree, and each Entry in the Tree points to the latest tips of the _settings subtree.
This means you can fully verify the authentication of any Entry by syncing only the _settings subtree, without downloading any other data from the Tree.
Subtrees
Each entry can contain multiple subtrees (e.g., "messages", "_settings"). Subtrees maintain independent parent-child relationships within the DAG.
Subtree Root Entries
A subtree root is an entry that starts a named subtree:
- Contains a `SubTreeNode` for the subtree
- Has empty subtree parents (`[]`)
- Still has normal main tree parents
Entry {
tree: TreeNode {
root: "tree_id",
parents: ["main_parent_id"], // Normal main tree parents
},
subtrees: [
SubTreeNode {
name: "messages",
parents: [], // Empty = subtree root
data: "...",
}
],
}
Subsequent entries reference previous subtree entries as parents:
SubTreeNode {
name: "messages",
parents: ["previous_messages_entry_id"],
data: "...",
}
Automatic Parent Discovery
Transactions automatically determine subtree parents:
- If using current database tips → get current subtree tips
- If using custom parents → find subtree tips reachable from those parents
- If first subtree entry → empty tips (creates subtree root)
Always use transactions for entry creation - they handle parent discovery automatically.
See src/entry/mod.rs and src/transaction/mod.rs for implementation.
CRDT Merging
Eidetica implements a Merkle-CRDT using content-addressable entries organized in a Merkle DAG structure. Entries store data and maintain parent references to form a distributed version history that supports deterministic merging.
Core Concepts
- Content-Addressable Entries: Immutable data units forming a directed acyclic graph
- CRDT Trait: Enables deterministic merging of concurrent changes
- Parent References: Maintain history and define DAG structure
- Tips Tracking: Identifies current heads for efficient synchronization
Fork and Merge
The system supports branching and merging through parent-child relationships:
- Forking: Multiple entries can share parents, creating divergent branches
- Merging: Entries with multiple parents merge separate branches
- Deterministic Ordering: Entries sorted by height then ID for consistent results
Merge Algorithm
Uses a recursive LCA-based approach for computing CRDT states:
- Cache Check: Avoids redundant computation through automatic caching
- LCA Computation: Finds lowest common ancestor for multi-parent entries
- Recursive Building: Computes ancestor states recursively
- Path Merging: Merges all entries from LCA to parents with proper ordering
- Local Integration: Applies current entry's data to final state
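The shape of the computation can be sketched with a deliberately simplified, memoized recursion - a toy last-write-wins map stands in for the real CRDT types, and parent states are merged directly instead of replaying paths from the LCA:

```rust
use std::collections::HashMap;

type EntryId = &'static str;

// Toy CRDT state: a last-write-wins map (the real system merges Doc CRDTs).
#[derive(Clone, Default)]
struct State(HashMap<String, String>);

impl State {
    fn merge(mut self, other: &State) -> State {
        for (k, v) in &other.0 {
            self.0.insert(k.clone(), v.clone());
        }
        self
    }
}

struct Dag {
    parents: HashMap<EntryId, Vec<EntryId>>, // deterministic parent ordering
    local: HashMap<EntryId, State>,          // each entry's own data
}

fn compute_state(dag: &Dag, id: EntryId, cache: &mut HashMap<EntryId, State>) -> State {
    // Cache check: entry immutability keeps cached states valid forever
    if let Some(s) = cache.get(id) {
        return s.clone();
    }
    // Recursively build ancestor states and merge them in deterministic order
    let mut state = State::default();
    for p in dag.parents.get(id).cloned().unwrap_or_default() {
        state = state.merge(&compute_state(dag, p, cache));
    }
    // Local integration: apply this entry's own data on top of the merged state
    if let Some(local) = dag.local.get(id) {
        state = state.merge(local);
    }
    cache.insert(id, state.clone());
    state
}

fn main() {
    // The four-entry example: E2 and E3 branch from E1, E4 merges them
    let mut local = HashMap::new();
    local.insert("E1", State(HashMap::from([("a".to_string(), "1".to_string())])));
    local.insert("E2", State(HashMap::from([("b".to_string(), "2".to_string())])));
    local.insert("E3", State(HashMap::from([("c".to_string(), "3".to_string())])));
    let dag = Dag {
        parents: HashMap::from([
            ("E1", vec![]),
            ("E2", vec!["E1"]),
            ("E3", vec!["E1"]),
            ("E4", vec!["E2", "E3"]),
        ]),
        local,
    };
    let mut cache = HashMap::new();
    println!("E4 sees {} keys", compute_state(&dag, "E4", &mut cache).0.len()); // 3
}
```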
Key Properties
- Correctness: Consistent state computation regardless of access patterns
- Performance: Caching eliminates redundant work
- Deterministic: Maintains ordering through proper LCA computation
- Immutable Caching: Entry immutability ensures cache validity
Authentication
Ed25519-based cryptographic authentication ensuring data integrity and access control.
Authentication States
| State | _settings.auth | Unsigned Ops | Authenticated Ops |
|---|---|---|---|
| Unsigned | Missing or {} | ✓ Allowed | ✓ Bootstrap |
| Signed | Has keys | ✗ Rejected | ✓ Validated |
Invalid States (Prevented)
| State | _settings.auth | All Ops |
|---|---|---|
| Corrupted | Wrong type | ✗ Rejected |
| Deleted | Tombstone | ✗ Rejected |
Corruption Prevention:
- Layer 1 (Proactive): Transactions that would corrupt or delete auth fail during `commit()`
- Layer 2 (Reactive): If auth is already corrupted, all operations fail with `CorruptedAuthConfiguration`
Permission Hierarchy
| Permission | Settings | Keys | Write | Read | Priority |
|---|---|---|---|---|---|
| Admin | ✓ | ✓ | ✓ | ✓ | 0-2^32 |
| Write | ✗ | ✗ | ✓ | ✓ | 0-2^32 |
| Read | ✗ | ✗ | ✗ | ✓ | None |
Lower priority number = higher privilege. Keys can only modify keys with equal or lower priority. Only Admin keys can modify the Settings, including the stored Keys.
Key Types
Direct Keys: Ed25519 public keys in _settings.auth:
{
"KEY_LAPTOP": {
"pubkey": "ed25519:BASE64_PUBLIC_KEY",
"permissions": "write:10",
"status": "active"
}
}
Wildcard Key (*): Defines a default Permission granted to any key. Used for public databases or to avoid per-key authentication.
Delegated Keys: Reference another database for authentication:
{
"user@example.com": {
"permission-bounds": { "max": "write:15" },
"database": { "root": "TREE_ID", "tips": ["TIP_ID"] }
}
}
Delegation
Databases can delegate auth to other databases with permission clamping:
- `max`: Maximum permission (required)
- `min`: Minimum permission (optional)
- Effective = clamp(delegated, min, max)
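Conceptually the clamp is just a bound on an ordered privilege scale. A toy sketch using plain integers where higher means more privileged (the real Permission type instead tracks priority numbers where lower is more privileged):

```rust
// Toy clamp: the effective privilege never exceeds `max`, and is raised
// to `min` if a minimum bound was configured.
fn clamp_permission(delegated: u32, min: Option<u32>, max: u32) -> u32 {
    let bounded = delegated.min(max);
    match min {
        Some(m) => bounded.max(m),
        None => bounded,
    }
}

fn main() {
    assert_eq!(clamp_permission(20, None, 15), 15);    // clamped down to max
    assert_eq!(clamp_permission(5, Some(8), 15), 8);   // raised to min
    assert_eq!(clamp_permission(12, Some(8), 15), 12); // already in bounds
}
```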
Clamping is applied recursively, so a remote database can itself delegate to further remote databases.
This can be used to build groups containing multiple keys/identities, or to manage an individual's device-level keys.
Instead of a separate, custom mechanism for managing and authenticating multiple keys, an individual can use the same authentication scheme as any other database. Whenever they need access to a database, that database authenticates them by granting access to their 'identity' database. This allows granting people or entities access to a database while letting them manage their own keys using all the same facilities as a typical database, including key rotation and revocation.
Tip tracking ensures revocations are respected: entries must reference tips equal to or newer than those previously seen.
To keep remote delegated databases up to date, writes record the latest known tips of the delegated database. This ensures the primary tree sees the delegated tree's latest tips and knows which keys to allow or block.
Conflict Resolution
Auth changes use Last-Write-Wins via DAG structure:
- Priority determines who CAN make changes
- LWW determines WHICH change wins
- Historical entries remain valid after permission changes
Sync
Eidetica uses a Merkle-CRDT based sync protocol. Peers exchange tips (current DAG heads) and send only the entries the other is missing.
Sync Flow
sequenceDiagram
participant A as Peer A
participant B as Peer B
A->>B: SyncTreeRequest (my tips)
B->>A: Response (entries you're missing, my tips)
A->>B: SendEntries (entries you're missing)
- Peer A sends its current tips for a Tree
- Peer B compares DAGs, returns entries A is missing plus B's tips
- A sends entries B is missing based on the tip comparison
This is stateless and self-correcting - no tracking of previously synced entries.
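What "entries you're missing" means reduces to set reachability over the DAG. A simplified sketch, assuming the responding side can walk its local parent references:

```rust
use std::collections::{HashMap, HashSet};

type Id = &'static str;

// All entries reachable from the given tips by following parent references.
fn reachable(parents: &HashMap<Id, Vec<Id>>, tips: &[Id]) -> HashSet<Id> {
    let mut seen = HashSet::new();
    let mut stack: Vec<Id> = tips.to_vec();
    while let Some(id) = stack.pop() {
        if seen.insert(id) {
            if let Some(ps) = parents.get(id) {
                stack.extend(ps.iter().copied());
            }
        }
    }
    seen
}

// Entries I hold that a peer advertising `their_tips` does not.
fn missing_for_peer(parents: &HashMap<Id, Vec<Id>>, my_tips: &[Id], their_tips: &[Id]) -> HashSet<Id> {
    let mine = reachable(parents, my_tips);
    let theirs = reachable(parents, their_tips);
    mine.difference(&theirs).copied().collect()
}

fn main() {
    // E2 and E3 branch from E1; E4 merges them
    let parents = HashMap::from([
        ("E1", vec![]),
        ("E2", vec!["E1"]),
        ("E3", vec!["E1"]),
        ("E4", vec!["E2", "E3"]),
    ]);
    // The peer's tip is E2, ours is E4: we must send E3 and E4
    let diff = missing_for_peer(&parents, &["E4"], &["E2"]);
    println!("send: {:?}", diff);
}
```

With empty tips on one side, the difference is the entire Tree - exactly the bootstrap case described next.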
Bootstrap vs Incremental
The same protocol handles both cases:
- Empty tips (new database): Peer sends complete Tree from root
- Has tips (existing database): Peer sends only missing entries
Transport Options
- HTTP: REST API for server-based sync
- Iroh P2P: QUIC-based with NAT traversal for peer-to-peer sync
Both transports implement the same sync protocol.
Architecture
graph LR
App[Application] --> Sync[Sync Module]
Sync --> BG[Background Thread]
BG --> HTTP[HTTP Transport]
BG --> Iroh[Iroh Transport]
The Sync module queues operations for a background thread, which handles transport connections and retries failed sends with exponential backoff.
Current Limitations
The sync system is currently simple and 1:1. Each peer connection requires manual setup with explicit peer addresses. Planned improvements include:
- Peer discovery
- Address sharing and relay coordination
- Multi-peer sync orchestration
See Bootstrap System for the key exchange flow when joining a database.
Bootstrap
Secure key management and access control for new devices joining existing databases.
Architecture
Bootstrap requests are stored in the sync database (_sync), not target databases. The system supports automatic approval via global * permissions or manual approval workflow.
Request Flow
sequenceDiagram
participant Client
participant Handler
participant Database
Client->>Handler: Bootstrap Request (key, permission)
Handler->>Handler: Check global '*' permission
alt Global Permission Sufficient
Handler-->>Client: BootstrapResponse (approved)
else Need Manual Approval
Handler->>Handler: Store request
Handler-->>Client: BootstrapPending (request_id)
Note over Client: Admin reviews
Handler->>Database: Add key on approval
end
Global Permission Auto-Approval
If the database has a global * permission that satisfies the request, approval is immediate without adding a new key. The device uses the global permission for all operations.
Permission hierarchy uses lower numbers = higher priority:
- Global `Write(10)` allows requests for `Read`, `Write(11)`, `Write(15)`
- Global `Write(10)` rejects requests for `Write(5)`, `Admin(*)`
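A toy version of this check, using the Read/Write/Admin shapes from the permission hierarchy above rather than the library's actual types:

```rust
#[derive(Clone, Copy)]
enum Perm {
    Read,
    Write(u32), // lower priority number = higher privilege
    Admin(u32),
}

// Does a granted (e.g. global '*') permission satisfy a requested one?
fn satisfies(granted: Perm, requested: Perm) -> bool {
    use Perm::*;
    match (granted, requested) {
        (_, Read) => true,              // any grant implies read
        (Write(g), Write(r)) => r >= g, // can't request a higher privilege
        (Admin(_), Write(_)) => true,   // admin implies write
        (Admin(g), Admin(r)) => r >= g,
        (Write(_), Admin(_)) => false,  // write never grants admin
        (Read, _) => false,             // read grants nothing further
    }
}

fn main() {
    use Perm::*;
    assert!(satisfies(Write(10), Write(15))); // allowed
    assert!(satisfies(Write(10), Read));      // allowed
    assert!(!satisfies(Write(10), Write(5))); // rejected: higher privilege
    assert!(!satisfies(Write(10), Admin(0))); // rejected
    println!("permission checks pass");
}
```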
Manual Approval API
// Query requests
sync.pending_bootstrap_requests()?;
sync.approved_bootstrap_requests()?;
// Approve/reject
sync.approve_bootstrap_request(id, signing_key)?;
sync.reject_bootstrap_request(id, signing_key)?;
Request Status
- Pending: Awaiting admin review
- Approved: Key added to database
- Rejected: Request denied, no key added
Requests are retained indefinitely for audit trail.
See src/sync/bootstrap_request_manager.rs and src/sync/handler.rs for implementation.
Testing
Most tests are in tests/it/ as a single integration test binary, following the matklad pattern. Tests validate behavior through public interfaces only.
Unit tests should only be used when integration tests are not feasible or when testing private implementation details.
Organization
The module structure in tests/it/ mirrors src/. Each module has:
- `mod.rs` for test declarations
- `helpers.rs` for module-specific utilities
- Common helpers in `tests/it/helpers.rs`
Running Tests
task test # Run all tests with nextest
cargo test --test it # Run integration tests
cargo test auth:: # Run specific module tests
Writing Tests
- Add tests to the appropriate module in `tests/it/`
- Test both happy path and error cases
- Use helpers from `tests/it/helpers.rs`
- Follow `test_<component>_<functionality>` naming
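A skeleton following these conventions, using only the public API (the file path is illustrative):

```rust
// tests/it/store/docstore.rs (illustrative path)
use eidetica::{Instance, backend::database::InMemory, crdt::Doc, store::DocStore};

#[test]
fn test_docstore_set_and_get() -> eidetica::Result<()> {
    // Exercise behavior through the public interface only
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let key = user.get_default_key()?;
    let database = user.create_database(settings, &key)?;

    // Happy path: write then read back
    let op = database.new_transaction()?;
    op.get_store::<DocStore>("config")?.set("k", "v")?;
    op.commit()?;
    let viewer = database.get_store_viewer::<DocStore>("config")?;
    assert!(viewer.get("k").is_ok());

    // Error case: missing keys surface as not-found errors
    assert!(matches!(viewer.get("missing"), Err(e) if e.is_not_found()));
    Ok(())
}
```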
Performance
The architecture provides several performance characteristics:
- Content-addressable storage: Enables efficient deduplication through SHA-256 content hashing.
- Database structure (DAG): Supports partial replication and sparse checkouts. Tip calculation complexity depends on parent relationships.
- InMemoryDatabase: Provides high-speed operations but is limited by available RAM.
- Lock-based concurrency: May create bottlenecks in high-concurrency write scenarios.
- Height calculation: Uses BFS-based topological sorting with O(V + E) complexity.
- CRDT merge algorithm: Employs recursive LCA-based merging with intelligent caching.
CRDT Merge Performance
The recursive LCA-based merge algorithm uses caching for performance optimization:
Algorithm Complexity
- Cached states: O(1) amortized performance
- Uncached states: O(D × M) where D is DAG depth and M is merge cost
- Overall performance benefits from high cache hit rates
Key Performance Benefits
- Efficient handling of complex DAG structures
- Optimized path finding reduces database calls
- Cache eliminates redundant computations
- Scales well with DAG complexity through memoization
- Memory-computation trade-off favors cached access patterns
Errors
The database uses a custom Result (crate::Result) and Error (crate::Error) type hierarchy defined in crates/lib/src/lib.rs. Errors are typically propagated up the call stack using Result.
The Error enum uses a modular approach with structured error types from each component:
- `Io(#[from] std::io::Error)`: Wraps underlying I/O errors from backend operations or file system access.
- `Serialize(#[from] serde_json::Error)`: Wraps errors occurring during JSON serialization or deserialization.
- `Auth(auth::AuthError)`: Structured authentication errors with detailed context.
- `Backend(backend::DatabaseError)`: Database storage and retrieval errors.
- `Instance(instance::InstanceError)`: Instance management errors.
- `CRDT(crdt::CRDTError)`: CRDT operation and merge errors.
- `Store(store::StoreError)`: Store data access and validation errors.
- `Transaction(transaction::TransactionError)`: Transaction coordination errors.
The use of #[error(transparent)] allows for zero-cost conversion from module-specific errors into crate::Error using the ? operator. Helper methods like is_not_found(), is_permission_denied(), and is_authentication_error() enable categorized error handling without pattern matching on specific variants.
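In practice callers can branch on error categories without importing module-level error types. A small example using the helpers named above:

```rust
use eidetica::{Instance, backend::database::InMemory};

fn main() -> eidetica::Result<()> {
    let instance = Instance::open(Box::new(InMemory::new()))?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;

    match user.find_database("does_not_exist") {
        Ok(dbs) => println!("found {} database(s)", dbs.len()),
        // Categorized handling without matching on specific error variants
        Err(e) if e.is_not_found() => println!("no such database yet"),
        Err(e) if e.is_permission_denied() => eprintln!("access denied: {e}"),
        Err(e) => return Err(e),
    }
    Ok(())
}
```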
Design Documents
This section contains formal design documents that capture the architectural thinking, decision-making process, and implementation details for complex features in Eidetica. These documents serve as a historical record of our technical decisions and provide context for future development.
Purpose
Design documents in this section:
- Document the rationale behind major technical decisions
- Capture alternative approaches that were considered
- Outline implementation strategies and tradeoffs
- Serve as a reference for future developers
- Help maintain consistency in architectural decisions
Document Structure
Each design document typically includes:
- Problem statement and context
- Goals and non-goals
- Proposed solution
- Alternative approaches considered
- Implementation details and tradeoffs
- Future considerations and potential improvements
Available Design Documents
Implemented
- Authentication - Mandatory cryptographic authentication for all entries
- Settings Storage - How settings are stored and tracked in databases
- Subtree Index - Registry system for subtree metadata and type discovery
Proposed
- Users - Multi-user system with password-based authentication, user isolation, and per-user key management
- Key Management - Technical details for key encryption, storage, and discovery in the Users system
- Error Handling - Modular error architecture for improved debugging and user experience
✅ Status: Mostly Implemented
Core authentication is fully implemented: direct keys, delegated databases, permission clamping, and bootstrap protocol all have comprehensive test coverage.
Planned enhancements: Overlay databases, advanced key statuses (Ignore/Banned), performance optimizations.
Authentication Design
This document outlines the authentication and authorization scheme for Eidetica, a decentralized database built on Merkle-CRDT principles. The design emphasizes flexibility, security, and integration with the core CRDT system while maintaining distributed consistency.
Table of Contents
- Authentication Design
- Table of Contents
- Overview
- Authentication Modes and Bootstrap Behavior
- Design Goals and Principles
- System Architecture
- Authentication Framework
- Key Management
- Delegation (Delegated Databases)
- Conflict Resolution and Merging
- Authorization Scenarios
- Security Considerations
- Implementation Details
- Future Considerations
- References
Overview
Eidetica's authentication scheme is designed to leverage the same CRDT and Merkle-DAG principles that power the core database while providing robust access control for distributed environments. Unlike traditional authentication systems, this design must handle authorization conflicts that can arise from network partitions and concurrent modifications to access control rules.
Databases operate in one of two authentication modes: unsigned mode (no authentication configured) or signed mode (authentication required). This design supports security-critical databases that require signed operations, unsigned (typically local-only) databases for higher performance, and unsigned 'overlay' trees that can be computed from signed trees.
The authentication system is not implemented as a pure consumer of the database API but is tightly integrated with the core system. This integration enables efficient validation and conflict resolution during entry creation and database merging operations.
Authentication Modes and Bootstrap Behavior
Eidetica databases support two distinct authentication modes with automatic transitions between them:
Unsigned Mode (No Authentication)
Databases are in unsigned mode when created without authentication configuration. In this mode:
- The _settings.auth key is either missing or contains an empty Doc ({"auth": {}})
- Both states are equivalent and treated identically by the system
- Unsigned operations succeed: Transactions without signatures are allowed
- No validation overhead: Authentication validation is skipped for performance
- Suitable for: Local-only databases, temporary workspaces, development environments, overlay networks
Unsigned mode enables use cases where authentication overhead is unnecessary, such as:
- Local computation that never needs to sync
- Development and testing environments
- Temporary scratch databases
- The upcoming "overlays" feature (see below)
Signed Mode (Mandatory Authentication)
Once authentication is configured, databases are in signed mode where:
- The _settings.auth key contains at least one authentication key
- All operations require valid signatures: only authenticated transactions are valid
- Fail-safe validation: Corrupted or deleted auth configuration causes all transactions to fail
- Permanent transition: Cannot return to unsigned mode (would require creating a new database)
In signed mode, unsigned operations will fail with an authentication error. The system enforces mandatory authentication to maintain security guarantees once authentication has been established.
Fail-Safe Behavior:
The validation system uses two-layer protection to prevent and detect authentication corruption:
- Proactive Prevention (Layer 1): Transactions that would corrupt or delete auth configuration fail during commit(), before the entry enters the Merkle DAG
- Reactive Fail-Safe (Layer 2): If auth is already corrupted (from older code versions or external manipulation), all subsequent operations on top of the corrupted state are also invalid
Validation States:
| Auth State | _settings.auth Value | Unsigned Operations | Authenticated Operations | Status |
|---|---|---|---|---|
| Unsigned Mode | Missing or {} (empty Doc) | ✓ Allowed | ✓ Triggers bootstrap | Valid |
| Signed Mode | Valid key configuration | ✗ Rejected | ✓ Validated | Valid |
| Corrupted | Wrong type (String, etc.) | ✗ PREVENTED | ✗ PREVENTED | Cannot be created |
| Deleted | Tombstone (was deleted) | ✗ PREVENTED | ✗ PREVENTED | Cannot be created |
Note: Corrupted and Deleted states shown in the table are theoretical - the system prevents their creation through proactive validation. The fail-safe layer (Layer 2) remains as defense-in-depth against historical corruption or external DAG manipulation.
This defense-in-depth approach ensures that corrupted authentication configuration cannot be created or exploited to bypass security. See Authentication Reference for detailed implementation information.
Future: Overlay Databases
The unsigned mode design enables a planned feature called "overlays": computed databases whose contents can be derived from other databases, potentially across multiple machines.
The idea is that an overlay adds information to a database (backups, for example) that can be reconstructed entirely from the original database.
Design Goals and Principles
Primary Goals
- Flexible Authentication: Support both unsigned mode for local-only work and signed mode for distributed collaboration
- Distributed Consistency: Authentication rules must merge deterministically across network partitions
- Cryptographic Security: All authentication based on Ed25519 public/private key cryptography
- Hierarchical Access Control: Support admin, read/write, and read-only permission levels
- Delegation: Support for delegating authentication to other databases without granting admin privileges (fully implemented; see Delegation below)
- Auditability: All authentication changes are tracked in the immutable DAG history
Non-Goals
- Perfect Security: Admin key compromise requires manual intervention
- Real-time Revocation: Key revocation is eventually consistent, not immediate
System Architecture
Authentication Data Location
Authentication configuration is stored in the special _settings store under the auth key. This placement ensures that:
- Authentication rules are included in _settings, which contains all the data necessary to validate the database and add new Entries
- Access control changes are tracked in the immutable history
- Settings can be validated against the current entry being created
The _settings store uses the crate::crdt::Doc type, which is a hierarchical CRDT that resolves conflicts using Last-Write-Wins (LWW) semantics. The ordering for LWW is determined deterministically by the DAG design (see CRDT documentation for details).
Clarification: Throughout this document, when we refer to Doc, this is the hierarchical CRDT document type supporting nested structures. The _settings store specifically uses Doc to enable complex authentication configurations including nested policy documents and key management.
Permission Hierarchy
Eidetica implements a three-tier permission model:
| Permission Level | Modify _settings | Add/Remove Keys | Change Permissions | Read Data | Write Data | Public Database Access |
|---|---|---|---|---|---|---|
| Admin | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Write | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ |
| Read | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ |
Authentication Framework
Key Structure
The current implementation supports direct authentication keys stored in the _settings.auth configuration. Each key consists of:
classDiagram
class AuthKey {
String pubkey
Permission permissions
KeyStatus status
}
class Permission {
<<enumeration>>
Admin(priority: u32)
Write(priority: u32)
Read
}
class KeyStatus {
<<enumeration>>
Active
Revoked
}
AuthKey --> Permission
AuthKey --> KeyStatus
Note: Both direct keys and delegated databases are fully implemented and functional, including DelegatedTreeRef, PermissionBounds, and TreeReference types.
Direct Key Example
{
"_settings": {
"auth": {
"KEY_LAPTOP": {
"pubkey": "ed25519:PExACKOW0L7bKAM9mK_mH3L5EDwszC437uRzTqAbxpk",
"permissions": "write:10",
"status": "active"
},
"KEY_DESKTOP": {
"pubkey": "ed25519:QJ7bKAM9mK_mH3L5EDwszC437uRzTqAbxpkPExACKOW0L",
"permissions": "read",
"status": "active"
},
"*": {
"pubkey": "*",
"permissions": "read",
"status": "active"
},
"PUBLIC_WRITE": {
"pubkey": "*",
"permissions": "write:100",
"status": "active"
}
},
"name": "My Database"
}
}
Note: The wildcard key * enables global permissions for anyone. Wildcard keys:
- Can have any permission level: "read", "write:N", or "admin:N"
- Are commonly used for world-readable databases (with "read" permissions) but can grant broader access
- Can be revoked like any other key
- Can be included in delegated databases (if you delegate to a database with a wildcard, that's valid)
Entry Signing Format
Every entry in Eidetica must be signed. The authentication information is embedded in the entry structure:
{
"database": {
"root": "tree_root_id",
"parents": ["parent_entry_id"],
"data": "{\"key\": \"value\"}",
"metadata": "{\"_settings\": [\"settings_tip_id\"]}"
},
"stores": [
{
"name": "users",
"parents": ["parent_entry_id"],
"data": "{\"user_data\": \"example\"}"
}
],
"auth": {
"sig": "ed25519_signature_base64_encoded",
"key": "KEY_LAPTOP"
}
}
The auth.key field can be either:
- Direct key: A string referencing a key name in this database's _settings.auth
- Delegation path: An ordered list of {"key": "delegated_tree_1", "tips": ["A", "B"]} elements, where the last element must contain only a "key" field
The auth.sig field contains the base64-encoded Ed25519 signature of the entry's content hash.
Key Management
Key Lifecycle
The current implementation supports two key statuses:
stateDiagram-v2
[*] --> Active: Key Added
Active --> Revoked: Revoke Key
Revoked --> Active: Reactivate Key
note right of Active : Can create new entries
note right of Revoked : Historical entries preserved, cannot create new entries
Key Status Semantics
- Active: Key can create new entries and all historical entries remain valid
- Revoked: Key cannot create new entries. Historical entries remain valid and their content is preserved during merges
Key Behavioral Details:
- Entries created before revocation remain valid to preserve history integrity
- An Admin can transition a key back to Active state from Revoked status
- Revoked status prevents new entries but preserves existing content in merges
Priority System
Priority is integrated into the permission levels for Admin and Write permissions:
- Admin(priority): Can modify settings and manage keys with equal or lower priority
- Write(priority): Can write data but not modify settings
- Read: No priority, read-only access
Priority values are u32 integers where lower values indicate higher priority:
- Priority 0: Highest priority, typically the initial admin key
- Higher numbers = lower priority
- Keys can only modify other keys with equal or lower priority (equal or higher number)
Important: Priority only affects administrative operations (key management). It does not influence CRDT merge conflict resolution, which uses Last Write Wins semantics based on the DAG structure.
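Read as code, the administrative rule might look like the following minimal sketch (not the crate's actual API; can_modify_key is a hypothetical helper):

// Lower number = higher priority; a key may manage targets whose
// priority number is equal or higher (i.e., equal or lower priority).
fn can_modify_key(actor_priority: u32, target_priority: u32) -> bool {
    actor_priority <= target_priority
}

fn main() {
    assert!(can_modify_key(0, 10));  // admin:0 may manage admin:10
    assert!(!can_modify_key(10, 0)); // admin:10 may not manage admin:0
}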
Key Naming and Aliasing
Auth settings serve two distinct purposes in delegation:
- Delegation references - Names that point to OTHER DATABASES (DelegatedTreeRef containing TreeReference)
- Signing keys - Names that point to PUBLIC KEYS (AuthKey containing Ed25519 public key)
Auth settings can also contain multiple names for the same public key, each potentially with different permissions. This enables:
- Readable delegation paths: Use friendly names like "alice_laptop" instead of long public key strings
- Permission contexts: The same key can have different permissions depending on how it's referenced
- API compatibility: Bootstrap can use public key strings while delegation uses friendly names
Example: Multiple names for same key
{
"_settings": {
"auth": {
"Ed25519:abc123...": {
"pubkey": "Ed25519:abc123...",
"permissions": "admin:0",
"status": "active"
},
"alice_work": {
"pubkey": "Ed25519:abc123...",
"permissions": "write:10",
"status": "active"
},
"alice_readonly": {
"pubkey": "Ed25519:abc123...",
"permissions": "read",
"status": "active"
}
}
}
}
Use Cases:
- Instance API bootstrap: When using instance.new_database(settings, key_name), the database is automatically bootstrapped with the signing key added to auth settings using the public key string as the name (e.g., "Ed25519:abc123..."). This is the name used for signature verification.
- User API bootstrap: When using user.new_database(settings, key_id), the behavior is similar: the key is added with its public key string as the name, regardless of any display name stored in user key metadata.
- Delegation paths: Delegation references keys by their name in auth settings. To enable readable delegation paths like ["alice@example.com", "alice_laptop"] instead of ["alice@example.com", "Ed25519:abc123..."], add friendly name aliases to the delegated database's auth settings.
- Permission differentiation: The same physical key can have different permission levels depending on which name is used to reference it.
Key Aliasing Pattern:
// Bootstrap creates entry with public key string as name
let database = instance.new_database(settings, "alice_key")?;
// Auth now contains: { "Ed25519:abc123...": AuthKey(...) }
// Add friendly name alias for delegation
let transaction = database.new_transaction()?;
let settings = transaction.get_settings()?;
settings.update_auth_settings(|auth| {
// Same public key, friendly name, potentially different permission
auth.add_key("alice_laptop", AuthKey::active(
"Ed25519:abc123...", // Same public key
Permission::Write(10), // Can differ from bootstrap permission
)?)?;
Ok(())
})?;
transaction.commit()?;
// Auth now contains both:
// { "Ed25519:abc123...": AuthKey(..., Admin(0)) }
// { "alice_laptop": AuthKey(..., Write(10)) }
Important Notes:
- Both entries reference the same cryptographic key but can have different permissions
- Signature verification works with any name that maps to the correct public key
- Delegation paths use the key name from auth settings, making friendly aliases essential for readable delegation
- The name used in the auth.key field (either direct or in a delegation path) must exactly match a name in the auth settings
- Adding multiple names for the same key does not create duplicates - they are intentional aliases with potentially different permission contexts
Delegation (Delegated Databases)
Status: Fully implemented and functional with comprehensive test coverage.
Concept and Benefits
Delegation allows any database to be referenced as a source of authentication keys for another database. This enables flexible authentication patterns where databases can delegate authentication to other databases without granting administrative privileges on the delegating database. Key benefits include:
- Flexible Delegation: Any database can delegate authentication to any other database
- User Autonomy: Users can manage their own personal databases with keys they control
- Cross-Project Authentication: Share authentication across multiple projects or databases
- Granular Permissions: Set both minimum and maximum permission bounds for delegated keys
Delegated databases are normal databases, and their authentication settings are used with permission clamping applied.
Important: Any database can be used as a delegated database - there's no special "authentication database" type. This means:
- A project's main database can delegate to a user's personal database
- Multiple projects can delegate to the same shared authentication database
- Databases can form delegation networks where databases delegate to each other
- The delegated database doesn't need to know it's being used for delegation
Structure
A delegated database reference in the main database's _settings.auth contains:
{
"_settings": {
"auth": {
"example@eidetica.dev": {
"permission-bounds": {
"max": "write:15",
"min": "read" // optional, defaults to no minimum
},
"database": {
"root": "hash_of_root_entry",
"tips": ["hash1", "hash2"]
}
},
"another@example.com": {
"permission-bounds": {
"max": "admin:20" // min not specified, so no minimum bound
},
"database": {
"root": "hash_of_another_root",
"tips": ["hash3"]
}
}
}
}
}
The referenced delegated database maintains its own _settings.auth with direct keys:
{
"_settings": {
"auth": {
"KEY_LAPTOP": {
"pubkey": "ed25519:AAAAC3NzaC1lZDI1NTE5AAAAI...",
"permissions": "admin:0",
"status": "active"
},
"KEY_MOBILE": {
"pubkey": "ed25519:AAAAC3NzaC1lZDI1NTE5AAAAI...",
"permissions": "write:10",
"status": "active"
}
}
}
}
Permission Clamping
Permissions from delegated databases are clamped based on the permission-bounds field in the main database's reference:
- max (required): The maximum permission level that keys from the delegated database can have
  - Must be <= the permissions of the key adding the delegated database reference
- min (optional): The minimum permission level for keys from the delegated database
  - If not specified, there is no minimum bound
  - If specified, keys with lower permissions are raised to this level
The effective priority is derived from the effective permission returned after clamping. If the delegated key's permission already lies within the min/max bounds, its original priority value is preserved; when a permission is clamped to a bound, the bound's priority value becomes the effective priority:
graph LR
A["Delegated Database: admin:5"] --> B["Main Database: max=write:10, min=read"] --> C["Effective: write:10"]
D["Delegated Database: write:8"] --> B --> E["Effective: write:8"]
F["Delegated Database: read"] --> B --> G["Effective: read"]
H["Delegated Database: admin:5"] --> I["Main Database: max=read (no min)"] --> J["Effective: read"]
K["Delegated Database: read"] --> I --> L["Effective: read"]
M["Delegated Database: write:20"] --> N["Main Database: max=admin:15, min=write:25"] --> O["Effective: write:25"]
Clamping Rules:
- Effective permission = clamp(delegated_tree_permission, min, max)
- If delegated database permission > max, it's lowered to max
- If min is specified and delegated database permission < min, it's raised to min
- If min is not specified, no minimum bound is applied
- The max bound must be <= permissions of the key that added the delegated database reference
- Effective priority = the priority embedded in the effective permission produced by clamping. This is either the delegated key's priority (when already inside the bounds) or the priority that comes from the min/max bound that performed the clamp.
- Delegated database admin permissions only apply within that delegated database
- Permission clamping occurs at each level of delegation chains
- Note: There is no "none" permission level - absence of permissions means no access
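The clamp itself can be summarized in a minimal sketch with simplified stand-in types (the crate's real Permission type embeds the priority inside its Admin and Write variants, so this is illustrative only):

#[derive(Clone, Copy, PartialEq, PartialOrd)]
enum Level { Read, Write, Admin } // derived ordering: Read < Write < Admin

#[derive(Clone, Copy)]
struct Perm { level: Level, priority: Option<u32> }

// clamp(delegated, min, max): whichever of the three values survives
// supplies both the effective permission and the effective priority.
fn clamp(delegated: Perm, min: Option<Perm>, max: Perm) -> Perm {
    if delegated.level > max.level {
        max // lowered to max; max's priority becomes effective
    } else if min.is_some_and(|m| delegated.level < m.level) {
        min.unwrap() // raised to min; min's priority becomes effective
    } else {
        delegated // already within bounds; original priority preserved
    }
}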
Multi-Level References
Delegated databases can reference other delegated databases, creating delegation chains:
{
"auth": {
"sig": "signature_bytes",
"key": [
{
"key": "example@eidetica.dev",
"tips": ["current_tip"]
},
{
"key": "old-identity",
"tips": ["old_tip"]
},
{
"key": "LEGACY_KEY"
}
]
}
}
Delegation Chain Rules:
- The auth.key field contains an ordered list representing the delegation path
- Each element has a "key" field and optionally "tips" for delegated databases
- The final element must contain only a "key" field (the actual signing key)
- Each step represents traversing from one database to the next in the delegation chain
Path Traversal:
- Steps with tips → look up the delegation reference name in the current DB → find DelegatedTreeRef → jump to the referenced database
- Final step (no tips) → look up the signing key name in the current DB → find AuthKey → get the Ed25519 public key for signature verification
- Key names at each step reference entries in that database's auth settings by name (see Key Naming and Aliasing)
Permission and Validation:
- Permission clamping applies at each level using the min/max function
- Priority at each step is the priority inside the permission value that survives the clamp at that level (outer reference, inner key, or bound, depending on which one is selected by the clamping rules)
- Tips must be valid at each level of the chain for the delegation to be valid
Delegated Database References
The main database must validate the structure of any referenced delegated database in addition to its own.
Latest Known Tips
"Latest known tips" refers to the latest tips of a delegated database that have been seen used in valid key signatures within the current database. This creates a "high water mark" for each delegated database:
- When an entry uses a delegated database key, it includes the delegated database's tips at signing time
- The database tracks these tips as the "latest known tips" for that delegated database
- Future entries using that delegated database must reference tips that are equal to or newer than the latest known tips, or must be valid at the latest known tips
- This ensures that key revocations in delegated databases are respected once observed
Tip Tracking and Validation
To validate entries with delegated database keys:
- Check that the referenced tips are descendants of (or equal to) the latest known tips for that delegated database
- If they're not, check that the entry validates at the latest known tips
- Verify the key exists and has appropriate permissions at those tips
- Update the latest known tips if these are newer
- Apply permission clamping based on the delegation reference
This mechanism ensures that once a key revocation is observed in a delegated database, no entry can use an older version of that database where the key was still valid.
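A rough sketch of that check, with hypothetical Id and Dag types standing in for the crate's entry ID and DAG index:

// Returns true when every latest-known tip is an ancestor of (or equal
// to) some referenced tip, i.e. the reference is at or past the high
// water mark. Otherwise the entry must instead validate at latest_known.
fn tips_at_or_past(referenced: &[Id], latest_known: &[Id], dag: &Dag) -> bool {
    latest_known
        .iter()
        .all(|known| referenced.iter().any(|r| dag.is_descendant_or_equal(r, known)))
}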
Key Revocation
Delegated database key deletion is always treated as revoked status in the main database. This prevents new entries from building on the deleted key's content while preserving the historical content during merges. This approach maintains the integrity of existing entries while preventing future reliance on removed authentication credentials.
By treating delegated database key deletion as revoked status, users can manage their own key lifecycle in the Main Database while ensuring that:
- Historical entries remain valid and their content is preserved
- New entries cannot use the revoked key's entries as parents
- The merge operation proceeds normally with content preserved
- Users cannot create conflicts that would affect other users' valid entries
Conflict Resolution and Merging
Conflicts in the _settings database are resolved by the crate::crdt::Doc type using Last Write Wins (LWW) semantics. When the database has diverged with both sides of the merge having written to the _settings database, the write with the higher logical timestamp (determined by the DAG structure) will win, regardless of the priority of the signing key.
Priority rules apply only to administrative permissions - determining which keys can modify other keys - but do not influence the conflict resolution during merges.
This is applied to delegated databases as well. A write to the Main Database must also recursively merge any changed settings in the delegated databases using the same LWW strategy to handle network splits in the delegated databases.
Key Status Changes in Delegated Databases: Examples
The following examples demonstrate how key status changes in delegated databases affect entries in the main database.
Example 1: Basic Delegated Database Key Status Change
Initial State:
graph TD
subgraph "Main Database"
A["Entry A<br/>Settings: delegated_tree1 = max:write:10, min:read<br/>Tip: UA"]
B["Entry B<br/>Signed by delegated_tree1:laptop<br/>Tip: UA<br/>Status: Valid"]
C["Entry C<br/>Signed by delegated_tree1:laptop<br/>Tip: UB<br/>Status: Valid"]
end
subgraph "Delegated Database"
UA["Entry UA<br/>Settings: laptop = active"]
UB["Entry UB<br/>Signed by laptop"]
end
A --> B
B --> C
UA --> UB
After Key Status Change in Delegated Database:
graph TD
subgraph "Main Database"
A["Entry A<br/>Settings: user1 = write:15"]
B["Entry B<br/>Signed by delegated_tree1:laptop<br/>Tip: UA<br/>Status: Valid"]
C["Entry C<br/>Signed by delegated_tree1:laptop<br/>Tip: UB<br/>Status: Valid"]
D["Entry D<br/>Signed by delegated_tree1:mobile<br/>Tip: UC<br/>Status: Valid"]
E["Entry E<br/>Signed by delegated_tree1:laptop<br/>Parent: C<br/>Tip: UB<br/>Status: Valid"]
F["Entry F<br/>Signed by delegated_tree1:mobile<br/>Tip: UC<br/>Sees E but ignores since the key is invalid"]
G["Entry G<br/>Signed by delegated_tree1:desktop<br/>Tip: UB<br/>Still thinks delegated_tree1:laptop is valid"]
H["Entry H<br/>Signed by delegated_tree1:mobile<br/>Tip: UC<br/>Merges, as there is a valid key at G"]
end
subgraph "Delegated Database (delegated_tree1)"
UA["Entry UA<br/>Settings: laptop = active, mobile = active, desktop = active"]
UB["Entry UB<br/>Signed by laptop"]
UC["Entry UC<br/>Settings: laptop = revoked<br/>Signed by mobile"]
end
A --> B
B --> C
C --> D
D --> F
C --> E
E --> G
F --> H
G --> H
UA --> UB
UB --> UC
Example 2: Last Write Wins Conflict Resolution
Scenario: Two admins make conflicting authentication changes during a network partition. Priority determines who can make the changes, but Last Write Wins determines the final merged state.
After Network Reconnection and Merge:
graph TD
subgraph "Merged Main Database"
A["Entry A"]
B["Entry B<br/>Alice (admin:10) bans user_bob<br/>Timestamp: T1"]
C["Entry C<br/>Super admin (admin:0) promotes user_bob to admin:5<br/>Timestamp: T2"]
M["Entry M<br/>Merge entry<br/>user_bob = admin<br/>Last write (T2) wins via LWW"]
N["Entry N<br/>Alice attempts to ban user_bob<br/>Rejected: Alice can't modify admin-level user with higher priority"]
end
A --> B
A --> C
B --> M
C --> M
M --> N
Key Points:
- All administrative actions are preserved in history
- Last Write Wins resolves the merge conflict: the most recent change (T2) takes precedence
- Permission-based authorization still prevents unauthorized modifications: Alice (admin:10) cannot ban a higher-priority user (admin:5) due to insufficient priority level
- The merged state reflects the most recent write, not the permission priority
- Permission priority rules prevent Alice from making the change in Entry N, as she lacks authority to modify higher-priority admin users
Authorization Scenarios
Network Partition Recovery
When network partitions occur, the authentication system must handle concurrent changes gracefully:
Scenario: Two branches of the database independently modify the auth settings, requiring CRDT-based conflict resolution using Last Write Wins.
Both branches share the same root, but a network partition has caused them to diverge before merging back together.
graph TD
subgraph "Merged Main Database"
ROOT["Entry ROOT"]
A1["Entry A1<br/>admin adds new_developer<br/>Timestamp: T1"]
A2["Entry A2<br/>dev_team revokes contractor_alice<br/>Timestamp: T3"]
B1["Entry B1<br/>contractor_alice data change<br/>Valid at time of creation"]
B2["Entry B2<br/>admin adds emergency_key<br/>Timestamp: T2"]
M["Entry M<br/>Merge entry<br/>Final state based on LWW:<br/>- new_developer: added (T1)<br/>- emergency_key: added (T2)<br/>- contractor_alice: revoked (T3, latest)"]
end
ROOT --> A1
ROOT --> B1
A1 --> A2
B1 --> B2
A2 --> M
B2 --> M
Conflict Resolution Rules Applied:
- Settings Merge: All authentication changes are merged using Doc CRDT semantics with Last Write Wins
- Timestamp Ordering: Changes are resolved based on logical timestamps, with the most recent change taking precedence
- Historical Validity: Entry B1 remains valid because it was created before the status change
- Content Preservation: With "revoked" status, content is preserved in merges but cannot be used as parents for new entries
- Future Restrictions: Future entries by contractor_alice would be rejected based on the applied status change
Security Considerations
Threat Model
Protected Against
- Unauthorized Entry Creation: All entries must be signed by valid keys
- Permission Escalation: Users cannot grant themselves higher privileges than their main database reference
- Historical Tampering: Immutable DAG prevents retroactive modifications
- Replay Attacks: Content-addressable IDs prevent entry duplication
- Administrative Hierarchy Violations: Lower priority keys cannot modify higher priority keys (but can modify equal priority keys)
- Permission Boundary Violations: Delegated database permissions are constrained within their specified min/max bounds
- Race Conditions: Last Write Wins provides deterministic conflict resolution
Requires Manual Recovery
- Admin Key Compromise: When no higher-priority key exists
- Conflicting Administrative Changes: LWW may result in unintended administrative state during network partitions
Cryptographic Assumptions
- Ed25519 Security: Default to ed25519 signatures with explicit key type storage
- Hash Function Security: SHA-256 for content addressing
- Key Storage: Private keys must be securely stored by clients
- Network Security: Assumption of eventually consistent but potentially unreliable network
Attack Vectors
Mitigated
- Key Replay: Content-addressable entry IDs prevent signature replay
- Downgrade Attacks: Explicit key type storage prevents algorithm confusion
- Partition Attacks: CRDT merging handles network partition scenarios
- Privilege Escalation: Permission clamping prevents users from exceeding granted permissions
Partial Mitigation
- DoS via Large Histories: Priority system limits damage from compromised lower-priority keys
- Social Engineering: Administrative hierarchy limits scope of individual key compromise
- Timestamp Manipulation: LWW conflict resolution is deterministic but may be influenced by the chosen timestamp resolution algorithm
- Administrative Confusion: Network partitions may result in unexpected administrative states due to LWW resolution
Not Addressed
- Side-Channel Attacks: Client-side key storage security is out of scope
- Physical Key Extraction: Assumed to be handled by client security measures
- Long-term Cryptographic Breaks: Future crypto-agility may be needed
Implementation Details
Authentication Validation Process
The current validation process:
- Extract Authentication Info: Parse the auth field from the entry
- Resolve Key Name: Look up the direct key in _settings.auth
- Check Key Status: Verify the key is Active (not Revoked)
- Validate Signature: Verify the Ed25519 signature against the entry content hash
- Check Permissions: Ensure the key has sufficient permissions for the operation
Current features include: Direct key validation, delegated database resolution, tip validation, and permission clamping.
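Condensed into a sketch, the pipeline might read as follows; the helper names and error variants here are illustrative, not the actual AuthValidator API:

// Illustrative outline of the five steps above.
fn validate_entry(entry: &Entry, auth: &AuthSettings) -> Result<(), AuthError> {
    let sig = entry.sig_info(); // 1. extract auth field
    let key = auth
        .get_key(&sig.key_name)
        .ok_or(AuthError::UnknownKey)?; // 2. resolve key name
    if key.status != KeyStatus::Active {
        return Err(AuthError::KeyRevoked); // 3. check key status
    }
    // 4. verify the signature over the entry's content hash
    verify_ed25519(&key.pubkey, &entry.content_hash(), &sig.signature)?;
    if !key.permissions.allows(&entry.operation()) {
        return Err(AuthError::InsufficientPermission); // 5. check permissions
    }
    Ok(())
}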
Sync Permissions
Eidetica servers require proof of read permissions before allowing database synchronization. The server challenges the client to sign a random nonce, then validates the signature against the database's authentication configuration.
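The cryptographic shape of that challenge can be sketched directly with ed25519-dalek (assuming version 2 with the rand_core feature; Eidetica wraps this machinery internally):

use ed25519_dalek::{Signature, Signer, SigningKey, Verifier};
use rand::rngs::OsRng;

fn main() {
    let client = SigningKey::generate(&mut OsRng);
    // Server: issue a random nonce as the challenge
    let nonce: [u8; 32] = rand::random();
    // Client: prove possession of the private key by signing the nonce
    let sig: Signature = client.sign(&nonce);
    // Server: verify against the public key configured in _settings.auth
    client
        .verifying_key()
        .verify(&nonce, &sig)
        .expect("read permission proven");
}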
Authenticated Bootstrap Protocol
The authenticated bootstrap protocol enables devices to join existing databases without prior local state while requesting authentication access:
Bootstrap Flow:
- Bootstrap Detection: Empty tips in SyncTreeRequest signal that bootstrap is needed
- Auth Request: Client includes the requesting key, key name, and requested permission
- Global Permission Check: Server checks whether the global * wildcard permission satisfies the request
- Immediate Approval: If a global permission exists and satisfies the request, access is granted immediately
- Manual Approval Queue: If no global permission, request stored for admin review
- Database Transfer: Complete database state sent with approval confirmation
- Access Granted: Client receives database and can make authenticated operations
Protocol Extensions:
- SyncTreeRequest includes: requesting_key, requesting_key_name, requested_permission
- BootstrapResponse includes: key_approved, granted_permission
- BootstrapPending response for manual approval scenarios
- New sync API: sync_with_peer_for_bootstrap() for authenticated bootstrap scenarios
Security:
- Ed25519 key cryptography for secure identity
- Permission levels maintained (Read/Write/Admin)
- Global wildcard permissions for automatic approval (secure by configuration)
- Manual approval queue for controlled access (secure by default)
- Immutable audit trail of all key additions in database history
CRDT Metadata Considerations
The current system uses entry metadata to reference settings tips. With authentication:
- Metadata continues to reference current _settings tips for validation efficiency
- Authentication validation uses the settings state at the referenced tips
- This ensures entries are validated against the authentication rules that were current when created
Implementation Architecture
Core Components
- AuthValidator (auth/validation.rs): Validates entries and resolves authentication
  - Direct key resolution and validation
  - Signature verification
  - Permission checking
  - Caching for performance
- Crypto Module (auth/crypto.rs): Cryptographic operations
  - Ed25519 key generation and parsing
  - Entry signing and verification
  - Key format: ed25519:<base64-encoded-public-key>
- AuthSettings (auth/settings.rs): Settings management interface
  - Add/update/get authentication keys
  - Convert between settings storage and auth types
  - Validate authentication operations
  - Check permission access with the can_access() method for both specific and wildcard keys
- Permission Module (auth/permission.rs): Permission logic
  - Permission checking for operations
  - Permission clamping for delegated databases
Storage Format
Authentication configuration is stored in _settings.auth as a Doc CRDT:
// Key storage structure
AuthKey {
pubkey: String, // Ed25519 public key
permissions: Permission, // Admin(u32), Write(u32), or Read
status: KeyStatus, // Active or Revoked
}
Future Considerations
Current Implementation Status
- Direct Keys: ✅ Fully implemented and tested
- Delegated Databases: ✅ Fully implemented with comprehensive test coverage
- Permission Clamping: ✅ Functional for delegation chains
- Delegation Depth Limits: ✅ Implemented with MAX_DELEGATION_DEPTH=10
Future Enhancements
- Advanced Key Status: Add Ignore and Banned statuses for more nuanced key management
- Performance Optimizations: Further caching and validation improvements
- User experience improvements for key management
References
✅ Status: Implemented
This design is fully implemented and functional.
Synchronization Design Document
This document outlines the design principles, architecture decisions, and implementation strategy for Eidetica's synchronization system.
Design Goals
Primary Objectives
- Decentralized Architecture: No central coordination required
- Performance: Minimize latency and maximize throughput
- Reliability: Handle network failures and recover gracefully
- Scalability: Support many peers and large datasets
- Security: Authenticated and verified peer communications
- Simplicity: Easy to configure and use
Non-Goals
- Selective sync: Sync entire databases only (not partial)
- Multi-hop routing: Direct peer connections only
- Complex conflict resolution: CRDT-based automatic resolution only
- Centralized coordination: No dependency on coordination servers
Key Design Innovation: Bootstrap-First Sync
Problem: Traditional distributed databases require complex setup procedures for new nodes to join existing networks. Peers must either start with empty databases or go through complex initialization.
Solution: Eidetica's bootstrap-first sync protocol enables zero-state joining:
- Single API call handles both bootstrap and incremental sync
- Automatic detection determines whether full or partial sync is needed
- No setup required - new devices can immediately join existing databases
- Bidirectional capability - any peer can bootstrap from any other peer
Use Cases Enabled:
- Chat/messaging apps: Join conversation rooms instantly with full history
- Collaborative documents: Open shared documents from any device
- Data synchronization: Sync app data to new devices automatically
- Backup/restore: Restore complete application state from peers
Core Design Principles
1. Merkle-CRDT Foundation
The sync system builds on Merkle DAG and CRDT principles:
- Content-addressable entries: Immutable, hash-identified data
- DAG structure: Parent-child relationships form directed acyclic graph
- CRDT merging: Deterministic conflict resolution
- Causal consistency: Operations maintain causal ordering
Benefits:
- Natural deduplication (same content = same hash)
- Efficient diff computation (compare tips)
- Automatic conflict resolution
- Partition tolerance
2. BackgroundSync Engine with Command Pattern
Decision: Single background thread with command-channel communication
Rationale:
- Clean architecture: Eliminates circular dependencies
- Ownership clarity: Background thread owns transport state
- Non-blocking: Commands sent via channels don't block operations
- Flexibility: Fire-and-forget or request-response patterns
Implementation:
The sync system uses a thin frontend that sends commands to a background thread:
- Frontend handles API and peer/relationship management in sync database
- Background owns transport and handles network operations
- Both components access sync database directly for peer data
- Commands used only for operations requiring background processing
- Failed operations added to retry queue
Trade-offs:
- ✅ No circular dependencies or complex locking
- ✅ Clear ownership model (transport in background, data in sync database)
- ✅ Works in both async and sync contexts
- ✅ Graceful startup/shutdown handling
- ❌ All sync operations serialized through single thread
3. Hook-Based Change Detection
Decision: Use write callbacks for change detection and sync triggering
Rationale:
- Flexible: Callbacks can be attached per-database with full context
- Consistent: Every commit triggers registered callbacks
- Simple: Direct function calls with Entry, Database, and Instance parameters
- Performance: Minimal overhead, no trait dispatch
Architecture:
// Callback function type (stored internally as Arc by Instance)
pub type WriteCallback = dyn Fn(&Entry, &Database, &Instance) -> Result<()> + Send + Sync;
// Integration with Database
impl Database {
pub fn on_local_write<F>(&self, callback: F) -> Result<()>
where
F: Fn(&Entry, &Database, &Instance) -> Result<()> + Send + Sync + 'static
{
// Register callback with instance for this database
// Instance wraps the callback in Arc internally
}
}
// Usage example for sync
let sync = instance.sync().expect("Sync enabled");
let sync_clone = sync.clone();
let peer_pubkey = "peer_key".to_string();
database.on_local_write(move |entry, db, _instance| {
sync_clone.queue_entry_for_sync(&peer_pubkey, entry.id(), db.root_id())
})?;
Benefits:
- Direct access to Entry, Database, and Instance in callbacks
- No need for context wrappers or trait implementations
- Callbacks receive full context needed for sync decisions
- Simple cloning pattern for use in closures
- Easy testing and debugging
4. Modular Transport Layer with SyncHandler Architecture
Decision: Abstract transport layer with handler-based request processing and transport metadata
Core Interface:
pub trait SyncTransport: Send + Sync {
/// Start server with handler for processing sync requests
async fn start_server(&mut self, addr: &str, handler: Arc<dyn SyncHandler>) -> Result<()>;
/// Send sync request and get response
async fn send_request(&self, address: &Address, request: &SyncRequest) -> Result<SyncResponse>;
}
pub trait SyncHandler: Send + Sync {
/// Process incoming sync requests with request context
async fn handle_request(&self, request: &SyncRequest, context: &RequestContext) -> SyncResponse;
}
/// Context information about incoming requests
pub struct RequestContext {
/// Remote address from which the request originated
pub remote_address: Option<Address>,
/// Peer public key from the sync request
pub peer_pubkey: Option<String>,
}
RequestContext captures transport metadata (remote address, peer pubkey after handshake) for automatic peer registration and address discovery.
Rationale:
- Database Access: Handlers can store received entries via backend
- Stateful Processing: Support GetTips, GetEntries, SendEntries operations
- Clean Separation: Transport handles networking, handler handles sync logic
- Flexibility: Support different network environments
- Evolution: Easy to add new transport protocols
- Testing: Mock transports for unit tests
Supported Transports:
HTTP Transport
pub struct HttpTransport {
client: reqwest::Client,
server: Option<HttpServer>,
handler: Option<Arc<dyn SyncHandler>>,
}
Implementation:
- Axum server with handler state injection
- JSON serialization at
/api/v0endpoint - Handler processes requests with database access
Use cases:
- Simple development and testing
- Firewall-friendly environments
- Integration with existing HTTP infrastructure
Trade-offs:
- ✅ Widely supported and debuggable
- ✅ Works through most firewalls/proxies
- ✅ Full database access via handler
- ❌ Less efficient than P2P protocols
- ❌ Requires port management
Iroh P2P Transport
pub struct IrohTransport {
endpoint: Option<Endpoint>,
server_state: ServerState,
handler: Option<Arc<dyn SyncHandler>>,
}
Implementation:
- QUIC bidirectional streams for request/response
- Handler integration in stream processing
- JsonHandler for serialization consistency
Use cases:
- Production deployments
- NAT traversal required
- Direct peer-to-peer communication
Trade-offs:
- ✅ Efficient P2P protocol with NAT traversal
- ✅ Built-in relay and hole punching
- ✅ QUIC-based with modern networking features
- ✅ Full database access via handler
- ❌ More complex setup and debugging
- ❌ Additional dependency
5. Automatic Peer and Relationship Management
Decision: Automatically register peers during handshake and track tree/peer relationships when peers request trees
Peer Registration: Captures advertised addresses from handshake plus actual remote address from transport connection for NAT traversal.
Relationship Tracking: Each sync request includes the peer's device public key, enabling automatic tracking of tree/peer relationships. This enables bidirectional sync_on_commit without manual setup.
6. Declarative Sync API
Decision: Provide register_sync_peer() for declaring sync intent with SyncHandle for status tracking
Applications register sync relationships once; the background engine handles synchronization automatically. Status tracking via polling (async events planned for future).
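A hypothetical usage sketch (the register_sync_peer name comes from the decision above; the handle's status methods are assumptions):

// Declare sync intent once; the background engine handles the rest.
let handle = sync.register_sync_peer(&peer_pubkey, database.root_id())?;

// Status is polled for now; async events are planned.
while !handle.status()?.is_synced() {
    std::thread::sleep(std::time::Duration::from_millis(200));
}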
7. Persistent State Management
Decision: All peer and relationship state stored persistently in sync database
Architecture:
Sync Database (Persistent):
├── peers/{peer_pubkey} -> PeerInfo (addresses, status, metadata)
├── relationships/{peer}/{database} -> SyncRelationship
├── sync_state/cursors/{peer}/{database} -> SyncCursor
├── sync_state/metadata/{peer} -> SyncMetadata
└── sync_state/history/{sync_id} -> SyncHistoryEntry
BackgroundSync (Transient):
├── retry_queue: Vec<RetryEntry> (failed sends pending retry)
└── sync_tree_id: ID (reference to sync database for peer lookups)
Design:
- All peer data is stored in the sync database via PeerManager
- BackgroundSync reads peer information on-demand when needed
- Frontend writes peer/relationship changes directly to sync database
- Single source of truth in persistent storage
Rationale:
- Durability: All critical state survives restarts
- Consistency: Single source of truth in sync database
- Recovery: Full state recovery after failures
- Simplicity: No duplicate state management
Architecture Deep Dive
Component Interactions
graph LR
subgraph "Change Detection"
A[Transaction::commit] --> B[WriteCallbacks]
B --> C[Sync::queue_entry_for_sync]
end
subgraph "Command Channel"
C --> D[Command TX]
D --> E[Command RX]
end
subgraph "BackgroundSync Thread"
E --> F[BackgroundSync]
F --> G[Transport Layer]
G --> H[HTTP/Iroh/Custom]
F --> I[Retry Queue]
F -.->|reads| ST[Sync Database]
end
subgraph "State Management"
K[SyncStateManager] --> L[Persistent State]
F --> K
end
subgraph "Peer Management"
M[PeerManager] --> N[Peer Registry]
F --> M
end
Data Flow Design
1. Entry Commit Flow
1. Application calls database.new_transaction().commit()
2. Transaction stores entry in backend
3. Transaction triggers write callbacks with Entry, Database, and Instance
4. Callback invokes sync.queue_entry_for_sync()
5. Sync sends QueueEntry command to BackgroundSync via channel
6. BackgroundSync fetches entry from backend
7. Entry sent immediately to peer via transport
8. Failed sends added to retry queue
2. Peer Connection Flow
1. Application calls sync.connect_to_peer(address)
2. Sync creates HandshakeRequest with device info
3. Transport sends handshake to peer
4. Peer responds with HandshakeResponse
5. Both peers verify signatures and protocol versions
6. Successful peers are registered in PeerManager
7. Connection state updated to Connected
3. Sync Relationship Flow
1. Application calls sync.add_tree_sync(peer_id, tree_id)
2. PeerManager stores relationship in sync database
3. Future commits to database trigger sync callbacks
4. Callbacks query relationships from sync database
5. Entries queued for sync with configured peers
BackgroundSync Command Management
Command Structure
The BackgroundSync engine processes commands sent from the frontend:
- SendEntries: Direct entry transmission to peer
- QueueEntry: Entry committed, needs sync
- AddPeer/RemovePeer: Peer registry management
- CreateRelationship: Database-peer sync mapping
- StartServer/StopServer: Transport server control
- ConnectToPeer: Establish peer connection
- SyncWithPeer: Trigger bidirectional sync
- Shutdown: Graceful termination
Processing Model
Immediate processing: Commands handled as received
- No batching delays or queue buildup
- Failed operations go to retry queue
- Fire-and-forget for most operations
- Request-response via oneshot channels when needed
Retry queue: Failed sends with exponential backoff
- 2^attempts seconds delay (max 64s)
- Configurable max attempts before dropping
- Processed every 30 seconds by timer
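The stated backoff policy, transcribed into code (a direct reading of the rule, not the crate's implementation):

// 2^attempts seconds, capped at 64s:
// attempts 0 -> 1s, 1 -> 2s, ..., 6 -> 64s, 7+ -> 64s
fn retry_delay_secs(attempts: u32) -> u64 {
    1u64.checked_shl(attempts).unwrap_or(u64::MAX).min(64)
}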
Error Handling Strategy
Transient errors: Retry with exponential backoff
- Network timeouts
- Temporary peer unavailability
- Transport-level failures
Persistent errors: Remove after max retries
- Invalid peer addresses
- Authentication failures
- Protocol incompatibilities
Recovery mechanisms:
// Automatic retry tracking
entry.mark_attempted(Some(error.to_string()));
// Cleanup failed entries periodically
queue.cleanup_failed_entries(max_retries)?;
// Metrics for monitoring
let stats = queue.get_sync_statistics()?;
Transport Layer Design
Iroh Transport Configuration
Design Decision: Builder pattern for transport configuration
The Iroh transport uses a builder pattern to support different deployment scenarios:
RelayMode Options:
- Default: Production deployments use n0's global relay infrastructure
- Staging: Testing against n0's staging infrastructure
- Disabled: Local testing without internet dependency
- Custom: Enterprise deployments with private relay servers
Rationale:
- Flexibility: Different environments need different configurations
- Performance: Local tests run faster without relay overhead
- Privacy: Enterprises can run private relay infrastructure
- Simplicity: Defaults work for most users without configuration
Address Serialization:
The Iroh transport serializes NodeAddr information as JSON containing:
- Node ID (cryptographic identity)
- Direct socket addresses (for P2P connectivity)
This allows the same get_server_address() interface to work for both HTTP (returns socket address) and Iroh (returns rich connectivity info).
Security Design
Authentication Model
Device Identity:
- Each database instance has an Ed25519 keypair
- Public key serves as device identifier
- Private key signs all sync operations
Peer Verification:
- Handshake includes signature challenge
- Both peers verify counterpart signatures
- Only verified peers allowed to sync
Entry Authentication:
- All entries signed by creating device
- Receiving peer verifies signatures
- Invalid signatures rejected
Trust Model
Assumptions:
- Peers are semi-trusted (authenticated but may be malicious)
- Private keys are secure
- Transport layer provides integrity
Threat Mitigation:
- Man-in-middle: Ed25519 signatures prevent tampering
- Replay attacks: Entry IDs are content-based (no replays possible)
- Denial of service: Rate limiting and queue size limits
- Data corruption: Signature verification catches corruption
Protocol Security
Handshake Protocol:
A -> B: HandshakeRequest {
device_id, public_key, challenge, signature,
listen_addresses: [Address] // Advertised addresses for A
}
B -> A: HandshakeResponse {
device_id, public_key, challenge_response, counter_challenge
}
// B registers A with listen_addresses + remote_address from transport
Bootstrap-First Protocol:
The sync protocol supports zero-state joining through automatic bootstrap detection:
# Bootstrap Scenario (client has no local database)
A -> B: SyncTreeRequest {
tree_id: ID,
our_tips: [], // Empty = bootstrap needed
peer_pubkey: Some(device_pubkey) // For automatic peer tracking
}
B -> A: BootstrapResponse {
tree_id: ID,
root_entry: Entry,
all_entries: Vec<Entry> // Complete database
}
# Incremental Scenario (client has database)
A -> B: SyncTreeRequest {
tree_id: ID,
our_tips: [tip1, tip2, ...], // Current tips
peer_pubkey: Some(device_pubkey) // For automatic peer tracking
}
B -> A: IncrementalResponse {
tree_id: ID,
missing_entries: Vec<Entry>, // New changes for client
their_tips: [tip1, tip2, ...] // Server's tips for bidirectional sync
}
# Bidirectional Completion (client sends missing entries to server)
A -> B: SendEntriesRequest {
tree_id: ID,
entries: Vec<Entry> // Entries server is missing
}
B -> A: SendEntriesResponse {
success: bool
}
Design Benefits:
- Unified API: Single request type handles both scenarios
- Auto-detection: Server determines sync type from empty tips
- Zero-configuration: No manual bootstrap setup required
- Efficient: Only transfers necessary data (full or incremental)
- True Bidirectional: Complete synchronization in single operation using existing protocol fields
Performance Considerations
Memory Usage
Queue sizing:
- Default: 100 entries per peer × 100 bytes = 10KB per peer
- Configurable limits prevent memory exhaustion
- Automatic cleanup of failed entries
Persistent state:
- Minimal: ~1KB per peer relationship
- Periodic cleanup of old history entries
- Efficient serialization formats
Network Efficiency
Batching benefits:
- Reduce TCP/HTTP overhead
- Better bandwidth utilization
- Fewer transport-layer handshakes
Compression potential:
- Similar entries share structure
- JSON/binary format optimization
- Transport-level compression (HTTP gzip, QUIC)
CPU Usage
Background worker:
- Configurable check intervals
- Async processing doesn't block application
- Efficient queue scanning
Hook execution:
- Fast in-memory operations only
- Hook failures don't affect commits
- Minimal serialization overhead
Configuration Design
Queue Configuration
pub struct SyncQueueConfig {
pub max_queue_size: usize, // Size-based flush trigger
pub max_queue_age_secs: u64, // Age-based flush trigger
pub batch_size: usize, // Max entries per network call
}
Tuning guidelines:
- High-frequency apps: Lower max_queue_age_secs (5-15s)
- Batch workloads: Higher max_queue_size (200-1000)
- Low bandwidth: Lower batch_size (10-25)
- High bandwidth: Higher batch_size (100-500)
Worker Configuration
pub struct SyncFlushConfig {
pub check_interval_secs: u64, // How often to check for flushes
pub enabled: bool, // Enable/disable background worker
}
Trade-offs:
- Lower check_interval = more responsive, higher CPU
- Higher check_interval = less responsive, lower CPU
Implementation Strategy
Phase 1: Core Infrastructure ✅
- BackgroundSync engine with command pattern
- Hook-based change detection
- Basic peer management
- HTTP transport
- Ed25519 handshake protocol
Phase 2: Production Features ✅
- Iroh P2P transport (handler needs fix)
- Retry queue with exponential backoff
- Sync state persistence via DocStore
- Channel-based communication
- RequestContext for transport metadata
- Automatic peer registration
- Automatic tree/peer relationship tracking
- Declarative sync API (register_sync_peer)
- SyncHandle and SyncStatus tracking
- 78 integration tests passing
Phase 3: Advanced Features
- Sync priorities and QoS
- Bandwidth throttling
- Monitoring and metrics
- Multi-database coordination
Phase 4: Scalability
- Persistent queue spillover
- Streaming for large entries
- Advanced conflict resolution
- Performance analytics
Testing Strategy
Unit Testing
Component isolation:
- Mock transport layer for networking tests
- In-memory backends for storage tests
- Deterministic time for age-based tests
Coverage targets:
- Queue operations: 100%
- Hook execution: 100%
- Error handling: 95%
- State management: 95%
Integration Testing
Multi-peer scenarios:
- 2-peer bidirectional sync
- 3+ peer mesh networks
- Database sync relationship management
- Network failure recovery
Performance testing:
- Large queue handling
- High-frequency updates
- Memory usage under load
- Network efficiency measurement
End-to-End Testing
Real network conditions:
- Simulated network failures
- High latency connections
- Bandwidth constraints
- Concurrent peer connections
Migration and Compatibility
Backward Compatibility
Protocol versioning:
- Version negotiation in handshake
- Graceful degradation for older versions
- Clear upgrade paths
Data format evolution:
- Extensible serialization formats
- Schema migration strategies
- Rollback procedures
Deployment Considerations
Configuration migration:
- Default configuration for new installations
- Migration scripts for existing data
- Validation of configuration parameters
Operational procedures:
- Health check endpoints
- Monitoring integration
- Log aggregation and analysis
Future Evolution
Planned Enhancements
- Selective sync: Per-store sync control
- Conflict resolution: Advanced merge strategies
- Performance: Compression and protocol optimization
- Monitoring: Rich metrics and observability
- Scalability: Large-scale deployment support
Research Areas
- Byzantine fault tolerance: Handle malicious peers
- Incentive mechanisms: Economic models for sync
- Privacy: Encrypted sync protocols
- Consensus: Distributed agreement protocols
- Sharding: Horizontal scaling techniques
Success Metrics
Performance Targets
- Queue latency: < 1ms for queue operations
- Sync latency: < 5s for small changes in normal conditions
- Throughput: > 1000 entries/second per peer
- Memory usage: < 10MB for 100 active peers
Reliability Targets
- Availability: 99.9% sync success rate
- Recovery: < 30s to resume after network failure
- Consistency: 100% eventual consistency (no data loss)
- Security: 0 known authentication bypasses
Usability Targets
- Setup time: < 5 minutes for basic configuration
- Documentation: Complete API and troubleshooting guides
- Error messages: Clear, actionable error descriptions
- Monitoring: Built-in observability for operations teams
✅ Status: Implemented
This design is fully implemented and functional.
Settings Storage Design
Overview
This document describes how Eidetica stores, retrieves, and tracks settings in databases. Settings are stored exclusively in the _settings store and tracked via entry metadata for efficient access.
Architecture
Settings Storage
Settings are stored in the _settings store (constant SETTINGS in constants.rs):
// Settings structure in _settings store
{
"auth": {
"key_name": {
"key": "...", // Public key
"permissions": "...", // Permission level
"status": "..." // Active/Revoked
}
}
// Future: tree_config, replication, etc.
}
Key Properties:
- Data Type: Doc CRDT for deterministic merging
- Location: Exclusively in the _settings store
- Access: Through the Transaction::get_settings() method
Settings Retrieval
Settings can be accessed through two primary interfaces:
SettingsStore API (Recommended)
SettingsStore provides a type-safe, high-level interface for settings management:
use eidetica::store::SettingsStore;
// Create a SettingsStore from a transaction
let settings_store = transaction.get_settings()?;
// Type-safe access to common settings
let database_name = settings_store.get_name()?;
let auth_settings = settings_store.get_auth_settings()?;
Transaction API
Transaction::get_settings() returns a SettingsStore that handles:
- Historical state: Computed from all relevant entries in the database
- Staged changes: Any modifications to _settings in the current transaction
Entry Metadata
Every entry includes metadata tracking settings state:
#[derive(Debug, Clone, Serialize, Deserialize)]
struct EntryMetadata {
/// Tips of the _settings store at the time this entry was created
settings_tips: Vec<ID>,
/// Random entropy for ensuring unique IDs for root entries
entropy: Option<u64>,
}
Metadata Properties:
- Automatically populated by Transaction::commit()
- Used for efficient settings validation in sparse checkouts
- Stored in the TreeNode.metadata field as serialized JSON
SettingsStore API
Overview
SettingsStore provides a specialized, type-safe interface for managing the _settings subtree. It wraps DocStore to offer convenient methods for common settings operations while maintaining proper CRDT semantics and transaction boundaries.
Key Benefits
- Type Safety: Eliminates raw CRDT manipulation for common operations
- Convenience: Direct methods for authentication key management
- Atomicity: Closure-based updates ensure atomic multi-step operations
- Validation: Built-in validation for authentication configurations
- Abstraction: Hides implementation details while providing an escape hatch via as_doc_store()
Primary Methods
impl SettingsStore {
// Core settings management
fn get_name(&self) -> Result<String>;
fn set_name(&self, name: &str) -> Result<()>;
// Authentication key management
fn set_auth_key(&self, key_name: &str, key: AuthKey) -> Result<()>;
fn get_auth_key(&self, key_name: &str) -> Result<AuthKey>;
fn revoke_auth_key(&self, key_name: &str) -> Result<()>;
// Complex operations via closure
fn update_auth_settings<F>(&self, f: F) -> Result<()>
where F: FnOnce(&mut AuthSettings) -> Result<()>;
// Advanced access
fn as_doc_store(&self) -> &DocStore;
fn validate_entry_auth(&self, sig_key: &SigKey, instance: Option<&Instance>) -> Result<ResolvedAuth>;
}
Data Structures
Entry Structure
pub struct Entry {
database: TreeNode, // Main database node with metadata
stores: Vec<SubTreeNode>, // Named stores including _settings
sig: SigInfo, // Signature information
}
TreeNode Structure
struct TreeNode {
pub root: ID, // Root entry ID of the database
pub parents: Vec<ID>, // Parent entry IDs in main database history
pub metadata: Option<RawData>, // Structured metadata (settings tips, entropy)
}
Note: TreeNode no longer contains a data field - all data is stored in named stores.
SubTreeNode Structure
struct SubTreeNode {
pub name: String, // Store name (e.g., "_settings")
pub parents: Vec<ID>, // Parent entries in store history
pub data: Option<RawData>, // Serialized store data (None when the store participates without data changes; see the _index design)
}
Authentication Settings
Authentication configuration is stored in _settings.auth:
AuthSettings Structure
pub struct AuthSettings {
inner: Doc, // Wraps Doc data from _settings.auth
}
Key Operations:
- `add_key()`: Add/update authentication keys
- `revoke_key()`: Mark keys as revoked
- `get_key()`: Retrieve specific keys
- `get_all_keys()`: Get all authentication keys
Authentication Flow
- Settings Access: `Transaction::get_settings()` retrieves the current auth configuration
- Key Resolution: `AuthValidator` resolves key names to full key information
- Permission Check: Validates the operation against key permissions
- Signature Verification: Verifies entry signatures match configured keys
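For illustration, a minimal sketch of driving this flow through SettingsStore from a transaction context; `sig_key` is a placeholder for the signature key taken from the entry under validation:

// Step 1: settings access from the current transaction.
let settings_store = transaction.get_settings()?;
// Steps 2-3: resolve the key name and check permissions/status in one call.
let resolved = settings_store.validate_entry_auth(&sig_key, None)?;
// Step 4: signature verification then proceeds against the resolved key material.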
Usage Patterns
Reading Settings
// In a Transaction context
let settings_store = transaction.get_settings()?;
// Access database name
let name = settings_store.get_name()?;
// Access auth configuration
let auth_settings = settings_store.get_auth_settings()?;
Modifying Settings
Using SettingsStore
use eidetica::store::SettingsStore;
use eidetica::auth::{AuthKey, Permission};
// Get a SettingsStore handle for type-safe operations
let settings_store = transaction.get_settings()?;
// Update database name
settings_store.set_name("My Database")?;
// Set authentication keys with validation (upsert behavior)
let auth_key = AuthKey::active(
"ed25519:user_public_key",
Permission::Write(10),
)?;
settings_store.set_auth_key("alice", auth_key)?;
// Perform complex auth operations atomically
settings_store.update_auth_settings(|auth| {
auth.overwrite_key("bob", bob_key)?;
auth.revoke_key("old_user")?;
Ok(())
})?;
// Commit the transaction
transaction.commit()?;
Using DocStore Directly (Low-Level)
// Get a DocStore handle for the _settings store
let mut settings_store = transaction.get_store::<DocStore>("_settings")?;
// Update a setting
settings_store.set("name", "My Database")?;
// Commit the transaction
transaction.commit()?;
Bootstrap Process
When creating a database with authentication:
- First entry includes the auth configuration in `_settings.auth`
- `Transaction::commit()` detects the bootstrap scenario
- Allows a self-signed entry to establish the initial auth configuration
Design Benefits
- Single Source of Truth: All settings live in the `_settings` store
- CRDT Semantics: Deterministic merge resolution for concurrent updates
- Efficient Access: Metadata tips enable quick settings retrieval
- Clean Architecture: Entry is pure data, Transaction handles business logic
- Extensibility: Easy to add new setting categories alongside `auth`
✅ Status: Implemented
This design is fully implemented and functional.
Subtree Index (_index)
This document describes the _index subtree registry system, which maintains metadata about all user-created subtrees in an Eidetica database.
Overview
The _index subtree is a special system subtree that serves as a registry for all user-created subtrees in a database. It stores metadata about each subtree, including its Store type identifier and configuration data. This enables type discovery, versioning, and configuration management for subtrees.
Key Features:
- Automatic Registration: Subtrees are automatically registered when first accessed via `get_store()`
- Type Metadata: Stores the Store type identifier (e.g., "docstore:v0", "table:v0")
- Configuration Storage: Stores Store-specific configuration as JSON
- Query API: Provides Registry for querying registered subtrees
Design Goals
The _index subtree provides essential metadata capabilities for Eidetica databases:
- Type Discovery: Every subtree has an associated type identifier in `_index`, enabling generic tooling to understand what Store type manages each subtree
- Versioning: Type identifiers include arbitrary version information (e.g., "docstore:v0"), supporting schema migrations and format evolution
- Configuration: Store-specific settings are stored alongside type information, enabling per-subtree customization
- Discoverability: The Registry API enables querying all registered subtrees, supporting database browsers and tooling
These capabilities enable:
- Generic database browsers that understand subtree types
- Schema migrations when Store formats evolve
- Tooling that enumerates and understands database structure
Metadata Travels With Data
Subtree metadata is cryptographically verified as part of the same DAG as the subtree data itself—without requiring the full database DAG.
When you sync a subtree (like users) from another peer, you automatically receive all _index metadata about that subtree. This is guaranteed by a simple architectural constraint: any Entry that modifies _index for a subtree must also include that subtree.
Why this matters:
- No orphaned metadata: You can't have `_index` entries for subtrees you haven't synced
- No missing metadata: When you have a subtree's data, you have its metadata too
- Cryptographic verification: The metadata is verified by the same Merkle-DAG that verifies the data
- Efficient sync: Sync just the subtrees you need, and their metadata comes along automatically
This constraint leverages Eidetica's Merkle-DAG structure: the Entry containing the _index update becomes part of the subtree's parent DAG, is verified by the same cryptographic properties, and is automatically included when syncing that subtree.
How It Works
The _index Subtree
The _index subtree is a special system subtree (like _settings and _root) that uses DocStore to maintain a registry of subtree metadata:
- Name: `_index` (reserved system name)
- Store Type: DocStore internally
- Not Self-Registering: System subtrees (`_index`, `_settings`, `_root`) are excluded from auto-registration to avoid circular dependencies
Each registered subtree has an entry in _index with the following structure:
{
"_index": {
"users": {
"type": "table:v0",
"config": "{}"
},
"documents": {
"type": "ydoc:v0",
"config": "{\"compression\":\"zstd\"}"
}
}
}
Fields:
- `type`: The Store type identifier from `Registered::type_id()` (e.g., "docstore:v0")
- `config`: Store-specific configuration as a JSON string
Auto-Registration
Subtrees are automatically registered in _index when first accessed via Transaction::get_store(). The Store's init() method handles both creation and registration.
Manual registration via Registry::set_entry() allows pre-configuring subtrees with custom settings before first access.
The Index-Subtree Coupling Constraint
Core Rule: When _index is modified for a subtree, that subtree MUST appear in the same Entry.
This is what enables metadata to travel with data. The constraint ensures:
- DAG Inclusion: The Entry containing the `_index` update becomes part of the subtree's parent DAG
- Verification: The Entry is verified by the Merkle-DAG properties of the subtree's parent tree
- Sync Completeness: When syncing a subtree's DAG, all Entries pertaining to that subtree are included, including any `_index` metadata about it
To support this constraint, SubTreeNode.data is Option<RawData>:
- `None`: Subtree participates in this Entry but makes no data changes
- `Some("")`: Explicit empty data (e.g., CRDT tombstone)
- `Some(data)`: Actual serialized data
This allows subtrees to appear in Entries purely to satisfy the constraint without requiring data changes.
API Reference
Registered Trait
The Registered trait provides type identification for registry integration:
- `type_id()`: Returns a unique identifier with version (e.g., "docstore:v0", "table:v0")
- `supports_type_id()`: Checks whether this type can load from a stored type_id (for version migration)
Store Trait Extensions
The Store trait extends Registered and provides methods for registry integration:
- `default_config()`: Returns the default configuration as a JSON string
- `init()`: Creates the store and registers it in `_index`
- `get_config()` / `set_config()`: Read/write configuration in `_index`
Registry API
Registry provides query and management operations for the _index:
- `get_entry(name)`: Get type and config for a subtree
- `contains(name)`: Check if registered
- `set_entry(name, type_id, config)`: Register or update
- `list()`: Get all registered subtree names
Access via Transaction::get_index().
Examples
Basic Auto-Registration
extern crate eidetica;
use eidetica::{Instance, Transaction, Store, store::DocStore, backend::database::InMemory, crdt::Doc};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let db = user.create_database(settings, &default_key)?;

    // First access to the "config" subtree - it will be auto-registered
    let txn = db.new_transaction()?;
    let config: DocStore = txn.get_store("config")?;
    config.set("theme", "dark")?;
    txn.commit()?;

    // After commit, "config" is registered in _index
    let txn = db.new_transaction()?;
    let index = txn.get_index()?;
    assert!(index.contains("config"));
    let info = index.get_entry("config")?;
    assert_eq!(info.type_id, "docstore:v0");
    assert_eq!(info.config, "{}");
    Ok(())
}
Manual Registration with Custom Config
extern crate eidetica;
use eidetica::{Instance, Transaction, Store, store::DocStore, backend::database::InMemory, crdt::Doc};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let db = user.create_database(settings, &default_key)?;

    // Pre-register a subtree with custom configuration
    let txn = db.new_transaction()?;
    let index = txn.get_index()?;
    index.set_entry(
        "documents",
        "ydoc:v0",
        r#"{"compression":"zstd","cache_size":1024}"#
    )?;
    txn.commit()?;

    // Later access uses the registered configuration
    let txn = db.new_transaction()?;
    let index = txn.get_index()?;
    let info = index.get_entry("documents")?;
    assert_eq!(info.type_id, "ydoc:v0");
    assert!(info.config.contains("compression"));
    Ok(())
}
Querying Registered Subtrees
extern crate eidetica;
use eidetica::{Instance, Transaction, Store, store::DocStore, backend::database::InMemory, crdt::Doc};

fn main() -> eidetica::Result<()> {
    let backend = Box::new(InMemory::new());
    let instance = Instance::open(backend)?;
    instance.create_user("alice", None)?;
    let mut user = instance.login_user("alice", None)?;
    let mut settings = Doc::new();
    settings.set("name", "test_db");
    let default_key = user.get_default_key()?;
    let db = user.create_database(settings, &default_key)?;

    // Create several subtrees with data
    let txn = db.new_transaction()?;
    let users: DocStore = txn.get_store("users")?;
    users.set("count", "0")?;
    let posts: DocStore = txn.get_store("posts")?;
    posts.set("count", "0")?;
    let comments: DocStore = txn.get_store("comments")?;
    comments.set("count", "0")?;
    txn.commit()?;

    // Query all registered subtrees
    let txn = db.new_transaction()?;
    let index = txn.get_index()?;
    let subtrees = index.list()?;

    // All three subtrees should be registered
    assert!(subtrees.contains(&"users".to_string()));
    assert!(subtrees.contains(&"posts".to_string()));
    assert!(subtrees.contains(&"comments".to_string()));
    Ok(())
}
✅ Status: Implemented
This design is fully implemented and functional.
Users System
This design document outlines a comprehensive multi-user system for Eidetica that provides user isolation, password-based authentication, and per-user key management.
Problem Statement
The current implementation has no concept of users:
- No User Isolation: All keys and settings are stored at the Instance level, shared across all operations.
- No Authentication: There's no way to protect access to private keys or restrict database operations to specific users.
- No Multi-User Support: Only one implicit "user" can work with an Instance at a time.
- Key Management Challenges: All private keys are accessible to anyone with Instance access, with no encryption or access control.
- No User Preferences: Users cannot have personalized settings for which databases they care about, sync preferences, etc.
Goals
- Unified Architecture: Single implementation that supports both embedded (single-user ergonomics) and server (multi-user) use cases.
- Multi-User Support: Multiple users can have accounts on a single Instance, each with isolated keys and preferences.
- Password-Based Authentication: Users authenticate with passwords to access their keys and perform operations.
- User Isolation: Each user's private keys and preferences are encrypted and isolated from other users.
- Root User: A special system user that the Instance uses for infrastructure operations.
- User Preferences: Users can configure which databases they care about and how they want to sync them.
- Database Tracking: Instance-wide visibility into which databases exist and which users access them.
- Ergonomic APIs: Simple single-user API for embedded apps, explicit multi-user API for servers (both build on the same foundation).
Non-Goals
- Multi-Factor Authentication: Advanced auth methods deferred to future work.
- Role-Based Access Control: Complex permission systems beyond user isolation are out of scope.
- User Groups: Team/organization features are not included.
- Federated Identity: External identity providers are not addressed.
Proposed Solution
Architecture Overview
The system separates infrastructure management (Instance) from contextual operations (User):
Instance (Infrastructure Layer)
├── Backend Storage (local only, not in databases)
│ └── _device_key (SigningKey for Instance identity)
│
├── System Databases (separate databases, authenticated with _device_key)
│ ├── _instance
│ │ └── Instance configuration and metadata
│ ├── _users (Table with UUID primary keys)
│ │ └── User directory: Maps UUID → UserInfo (username stored in UserInfo)
│ ├── _databases
│ │ └── Database tracking: Maps database_id → DatabaseTracking
│ └── _sync
│ └── Sync configuration and bootstrap requests
│
└── User Management
├── User creation (with or without password)
└── User login (returns User session)
User (Operations Layer - returned from login)
├── User session with decrypted keys
├── Database operations (new, load, find)
├── Key management (add, list, get)
└── User preferences
Key Architectural Principle: Instance handles infrastructure (user accounts, backend, system databases). User handles all contextual operations (database creation, key management). All operations run in a User context after login.
Core Data Structures
1. UserInfo (stored in _users database)
Storage: Users are stored in a Table with auto-generated UUID primary keys. The username field is used for login lookups via search operations.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct UserInfo {
/// Unique username (login identifier)
/// Note: Stored with UUID primary key in Table, username used for search
pub username: String,
/// ID of the user's private database
pub user_database_id: ID,
/// Password hash (using Argon2id)
/// None for passwordless users (single-user embedded mode)
pub password_hash: Option<String>,
/// Salt for password hashing (base64 encoded string)
/// None for passwordless users (single-user embedded mode)
pub password_salt: Option<String>,
/// User account creation timestamp (Unix timestamp)
pub created_at: i64,
/// Account status
pub status: UserStatus,
}
#[derive(Clone, Debug, Serialize, Deserialize)]
pub enum UserStatus {
Active,
Disabled,
Locked,
}
2. UserProfile (stored in user's private database _settings subtree)
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct UserProfile {
/// Username
pub username: String,
/// Display name
pub display_name: Option<String>,
/// Email or other contact info
pub contact_info: Option<String>,
/// User preferences
pub preferences: UserPreferences,
}
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct UserPreferences {
/// Default sync behavior
pub default_sync_enabled: bool,
/// Other user-specific settings
pub properties: HashMap<String, String>,
}
3. UserKey (stored in user's private database keys subtree)
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct UserKey {
/// Key identifier (typically the base64-encoded public key string)
pub key_id: String,
/// Private key bytes (encrypted or unencrypted based on encryption field)
pub private_key_bytes: Vec<u8>,
/// Encryption metadata
pub encryption: KeyEncryption,
/// Display name for this key
pub display_name: Option<String>,
/// When this key was created (Unix timestamp)
pub created_at: i64,
/// Last time this key was used (Unix timestamp)
pub last_used: Option<i64>,
/// Whether this is the user's default key
pub is_default: bool,
/// Database-specific SigKey mappings
/// Maps: Database ID → SigKey used in that database's auth settings
pub database_sigkeys: HashMap<ID, String>,
}
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "lowercase")]
pub enum KeyEncryption {
/// Key is encrypted with password-derived key
Encrypted {
/// Encryption nonce/IV (12 bytes for AES-GCM)
nonce: Vec<u8>,
},
/// Key is stored unencrypted (passwordless users only)
Unencrypted,
}
4. TrackedDatabase (stored in user's private database databases Table)
Purpose: Tracks which databases a user has added to their list, along with sync preferences. The User tracks what they want (sync_enabled, sync_on_commit), while the Sync module tracks actual status (last_synced, connection state). This separation allows multiple users with different sync preferences to sync the same database in a single Instance.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct TrackedDatabase {
/// Database ID being tracked
pub database_id: ID,
/// Which user key to use for this database
pub key_id: String,
/// Sync preferences for this database
pub sync_settings: SyncSettings,
}
#[derive(Clone, Debug, Serialize, Deserialize, Default)]
pub struct SyncSettings {
/// Whether user wants to sync this database
pub sync_enabled: bool,
/// Sync on commit
pub sync_on_commit: bool,
/// Sync interval (if periodic)
pub interval_seconds: Option<u64>,
/// Additional sync configuration
pub properties: HashMap<String, String>,
}
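For illustration, a minimal sketch of tracking a database with these structures; `db_id` and `default_key` are placeholders for values obtained elsewhere (for example, from a created database and `user.get_default_key()`):

// Hypothetical values: db_id for the database to track, default_key owned by the user.
let tracked = TrackedDatabase {
    database_id: db_id.clone(),
    key_id: default_key.clone(),
    sync_settings: SyncSettings {
        sync_enabled: true,
        sync_on_commit: true,
        ..Default::default()
    },
};
// SigKey discovery runs inside track_database() via Database::find_sigkeys().
user.track_database(tracked)?;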
Design Notes:
- SigKey Discovery: When tracking a database via `track_database()`, the system automatically discovers which SigKey the user can use via `Database::find_sigkeys()`, selecting the highest-permission SigKey available. The discovered SigKey is stored in the `UserKey.database_sigkeys` HashMap.
- Separation of Concerns: The `key_id` in TrackedDatabase references the user's key, while the actual SigKey mapping is stored in `UserKey.database_sigkeys`. This allows the same key to use different SigKeys in different databases.
- Sync Settings vs Sync Status: User settings indicate what the user wants (sync_enabled, sync_on_commit), while the Sync module tracks actual sync status (last_synced, connection state). Multiple users can have different settings for the same database.
5. DatabaseTracking (stored in _databases table)
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct DatabaseTracking {
/// Database ID (this is the key in the table)
pub database_id: ID,
/// Cached database name (for quick lookup)
pub name: Option<String>,
/// Users who have this database in their preferences
pub users: Vec<String>,
/// Database creation time (Unix timestamp)
pub created_at: i64,
/// Last modification time (Unix timestamp)
pub last_modified: i64,
/// Additional metadata
pub metadata: HashMap<String, String>,
}
System Databases
The Instance manages four separate system databases, all authenticated with _device_key:
_instance System Database
- Type: Separate database
- Purpose: Instance configuration and management
- Structure: Configuration settings, metadata, system policies
- Authentication: `_device_key` as Admin; admin users can be granted access
- Access: Admin users have Admin permission, regular users have Read permission
- Created: On Instance initialization
_users System Database
- Type: Separate database
- Purpose: User directory and authentication
- Structure: Table with UUID primary keys, stores UserInfo (username field for login lookups)
- Authentication: `_device_key` as Admin
- Access: Admin users can manage users
- Created: On Instance initialization
- Note: Username uniqueness enforced at application layer via search; see Race Conditions section
_databases System Database
- Type: Separate database
- Purpose: Instance-wide database registry and optimization
- Structure: Table mapping database_id → DatabaseTracking
- Authentication: `_device_key` as Admin
- Maintenance: Updated when users add/remove databases from preferences
- Benefits: Fast discovery of databases, see which users care about each DB
- Created: On Instance initialization
_sync System Database
- Type: Separate database (existing)
- Purpose: Synchronization configuration and bootstrap request management
- Structure: Various subtrees for sync settings, peer info, bootstrap requests
- Authentication: `_device_key` as Admin
- Access: Managed by Instance and Sync module
- Created: When sync is enabled via `Instance::enable_sync()`
Instance Identity vs User Management
The Instance identity is separate from user management:
Instance Identity
The Instance uses _device_key for its identity:
- Storage: Stored in backend (local storage, not in any database)
- Purpose: Instance sync identity and system database authentication
- Access: Available to Instance on startup (no password required)
- Usage: Used to authenticate to all system databases as Admin
User Management
Users are created by administrators or self-registration:
- Users authenticate with passwords
- Each user has isolated key storage and preferences
- Users must log in to perform operations
User Lifecycle:
- Created via `Instance::create_user()` by an admin
- User logs in via `Instance::login_user()`
- User session provides access to keys and preferences
- User logs out via `User::logout()`
Library Architecture Layers
The library separates infrastructure (Instance) from contextual operations (User):
Instance Layer: Infrastructure Management
Instance manages the multi-user infrastructure and system resources:
Initialization:
- Load or generate `_device_key` from the backend
- Create system databases (`_instance`, `_users`, `_databases`) authenticated with `_device_key`
- Initialize Instance with the backend and system databases
Responsibilities:
- User account management (create, login)
- System database maintenance
- Backend coordination
- Database tracking
Key Points:
- Instance is always multi-user underneath
- No direct database or key operations
- All operations require a User session
User Layer: Contextual Operations
User represents an authenticated session with decrypted keys:
Creation:
- Returned from `Instance::login_user(username, Option<password>)`
- Contains decrypted private keys in memory
- Has access to the user's preferences and database mappings
Responsibilities:
- Database operations (create_database, open_database, find_database)
- Key management (add_private_key, list_keys, get_signing_key)
- Database preferences
- Bootstrap approval
Key Points:
- All database creation and key management happens through User
- Keys are zeroized on logout or drop
- Clean separation between users
Passwordless Users
For embedded/single-user scenarios, users can be created without passwords:
Creation:
// Create passwordless user
instance.create_user("alice", None)?;
// Login without password
let user = instance.login_user("alice", None)?;
// Use User API normally
let default_key = user.get_default_key()?;
let db = user.create_database(settings, &default_key)?;
Characteristics:
- No authentication overhead
- Keys stored unencrypted in user database
- Perfect for embedded apps, CLI tools, single-user deployments
- Still uses full User API for operations
Password-Protected Users
For multi-user scenarios, users have password-based authentication:
Creation:
// Create password-protected user
instance.create_user("bob", Some("password123"))?;
// Login with password verification
let user = instance.login_user("bob", Some("password123"))?;
// Use User API normally
let default_key = user.get_default_key()?;
let db = user.create_database(settings, &default_key)?;
Characteristics:
- Argon2id password hashing
- AES-256-GCM key encryption
- Perfect for servers, multi-tenant applications
- Clear separation between users
Instance API
Instance manages infrastructure and user accounts:
Initialization
impl Instance {
/// Create instance
/// - Loads/generates _device_key from backend
/// - Creates system databases (_instance, _users, _databases)
pub fn open(backend: Box<dyn BackendImpl>) -> Result<Self>;
}
User Management
impl Instance {
/// Create a new user account
/// Returns user_uuid (the generated primary key)
pub fn create_user(
&self,
username: &str,
password: Option<&str>,
) -> Result<String>;
/// Login a user (returns User session object)
/// Searches by username; errors if duplicate usernames detected
pub fn login_user(
&self,
username: &str,
password: Option<&str>,
) -> Result<User>;
/// List all users (returns usernames)
pub fn list_users(&self) -> Result<Vec<String>>;
}
User API
/// User session object, returned after successful login
///
/// Represents an authenticated user with decrypted private keys loaded in memory.
/// All contextual operations (database creation, key management) happen through User.
pub struct User {
user_uuid: String, // Stable internal UUID (Table primary key)
username: String, // Username (login identifier)
user_database: Database,
instance: WeakInstance, // Weak reference to Instance for storage access
/// Decrypted user keys (in memory only during session)
key_manager: UserKeyManager,
}
impl User {
/// Get the internal user UUID (stable identifier)
pub fn user_uuid(&self) -> &str;
/// Get the username (login identifier)
pub fn username(&self) -> &str;
// === Database Operations ===
/// Create a new database in this user's context
pub fn create_database(&self, settings: Doc, signing_key: &str) -> Result<Database>;
/// Load a database using this user's keys
pub fn open_database(&self, database_id: &ID) -> Result<Database>;
/// Find databases by name
pub fn find_database(&self, name: impl AsRef<str>) -> Result<Vec<Database>>;
/// Find the best key for accessing a database
pub fn find_key(&self, database_id: &ID) -> Result<Option<String>>;
/// Get the SigKey mapping for a key in a specific database
pub fn key_mapping(
&self,
key_id: &str,
database_id: &ID,
) -> Result<Option<String>>;
/// Add a SigKey mapping for a key in a specific database
pub fn map_key(
&mut self,
key_id: &str,
database_id: &ID,
sigkey: &str,
) -> Result<()>;
// === Tracked Databases ===
/// List all tracked databases.
pub fn databases(&self) -> Result<Vec<TrackedDatabase>>;
/// Get a specific tracked database by ID.
pub fn database(&self, database_id: &ID) -> Result<TrackedDatabase>;
/// Track a database with auto-discovery of SigKeys (upsert behavior).
pub fn track_database(&mut self, tracked: TrackedDatabase) -> Result<()>;
/// Stop tracking a database.
pub fn untrack_database(&mut self, database_id: &ID) -> Result<()>;
// === Key Management ===
/// Generate a new private key for this user
pub fn add_private_key(
&mut self,
display_name: Option<&str>,
) -> Result<String>;
/// List all key IDs owned by this user
pub fn list_keys(&self) -> Result<Vec<String>>;
/// Get a signing key by its ID
pub fn get_signing_key(&self, key_id: &str) -> Result<SigningKey>;
// === Session Management ===
/// Logout (clears decrypted keys from memory)
pub fn logout(self) -> Result<()>;
}
UserKeyManager (Internal)
/// Internal key manager that holds decrypted keys during user session
struct UserKeyManager {
/// Decrypted keys (key_id → SigningKey)
decrypted_keys: HashMap<String, SigningKey>,
/// Key metadata (loaded from user database)
key_metadata: HashMap<String, UserKey>,
/// User's password-derived encryption key (for saving new keys)
encryption_key: Vec<u8>,
}
See key_management.md for detailed implementation.
User Flows
User Creation Flow
Password-Protected User:
- Admin calls `instance.create_user(username, Some(password))`
- System searches the `_users` Table for an existing username (race condition possible)
- System hashes the password with Argon2id and a random salt
- Generates a default Ed25519 keypair for the user (kept in memory only)
- Retrieves the instance `_device_key` public key from the backend
- Creates the user database with authentication for both `_device_key` (Admin) and the user's key (Admin)
- Encrypts the user's private key with a password-derived key (AES-256-GCM)
- Stores the encrypted key in the user database `keys` Table (using the public key as identifier, signed with `_device_key`)
- Creates UserInfo and inserts it into the `_users` Table (auto-generates the UUID primary key)
- Returns user_uuid
Passwordless User:
- Admin calls `instance.create_user(username, None)`
- System searches the `_users` Table for an existing username (race condition possible)
- Generates a default Ed25519 keypair for the user (kept in memory only)
- Retrieves the instance `_device_key` public key from the backend
- Creates the user database with authentication for both `_device_key` (Admin) and the user's key (Admin)
- Stores the unencrypted private key in the user database `keys` Table (marked as Unencrypted)
- Creates UserInfo with None for the password fields and inserts it into the `_users` Table
- Returns user_uuid
Note: For password-protected users, the keypair is never stored unencrypted in the backend. For passwordless users, keys are stored unencrypted for instant access. The user database is authenticated with both the instance _device_key (for admin operations) and the user's default key (for user ownership). Initial entries are signed with _device_key.
Login Flow
Password-Protected User:
- User calls `instance.login_user(username, Some(password))`
- System searches the `_users` Table by username
- If multiple users with the same username are found, returns a `DuplicateUsersDetected` error
- Verifies the password against the stored hash
- Loads the user's private database
- Loads encrypted keys from the user database
- Derives the encryption key from the password
- Decrypts all private keys
- Creates UserKeyManager with the decrypted keys
- Returns the User session object (contains both user_uuid and username)
Passwordless User:
- User calls `instance.login_user(username, None)`
- System searches the `_users` Table by username
- If multiple users with the same username are found, returns a `DuplicateUsersDetected` error
- Verifies the UserInfo has no password (password_hash and password_salt are None)
- Loads the user's private database
- Loads unencrypted keys from the user database
- Creates UserKeyManager with the keys (no decryption needed)
- Returns the User session object (contains both user_uuid and username)
Database Creation Flow
- User obtains User session via login
- User creates database settings (Doc with name, etc.)
- Calls `user.create_database(settings, &signing_key)`, typically passing the user's default key from their keyring
- Creates the database using `Database::new()` for root entry creation
- Returns Database object
- User can now create transactions and perform operations on the database
Database Access Flow
The user accesses databases through the User.open_database() method, which handles all key management automatically:
- User calls `user.open_database(&database_id)`
- System finds the appropriate key via `find_key()`:
  - Checks the user's key metadata for SigKey mappings to this database
  - Verifies keys are authorized in the database's auth settings
  - Selects the key with the highest permission level
- System retrieves the decrypted SigningKey from UserKeyManager
- System gets the SigKey mapping via `key_mapping()`
- System loads the Database with `Database::open()`:
  - Database stores KeySource::Provided with the signing key and sigkey
- User creates transactions normally: `database.new_transaction()`
  - Transaction automatically receives the provided key from the Database
  - No backend key lookup required
- User performs operations and commits
  - Transaction uses the provided SigningKey directly during commit()
Key Insight: Once a Database is loaded via User.open_database(), all subsequent operations transparently use the user's keys. The user doesn't need to think about key management - it's handled at database load time.
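As a short sketch of the resulting call pattern (the store name and values are placeholders):

// Load once; key selection and SigKey mapping happen at this point.
let db = user.open_database(&database_id)?;
// From here on, transactions sign with the user's key automatically.
let txn = db.new_transaction()?;
let notes = txn.get_store::<DocStore>("notes")?;
notes.set("title", "meeting notes")?;
txn.commit()?; // uses the SigningKey provided at load time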
Key Addition Flow
Password-Protected User:
- User calls `user.add_private_key(display_name)`
- System generates a new Ed25519 keypair
- Encrypts private key with user's password-derived key (AES-256-GCM)
- Creates UserKey metadata with Encrypted variant
- Stores encrypted key in user database
- Adds to in-memory UserKeyManager
- Returns key_id
Passwordless User:
- User calls `user.add_private_key(display_name)`
- System generates a new Ed25519 keypair
- Creates UserKey metadata with Unencrypted variant
- Stores unencrypted key in user database
- Adds to in-memory UserKeyManager
- Returns key_id
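Either way, usage is identical from the caller's perspective; a minimal sketch:

// Generate and persist a new key for this session's user.
let key_id = user.add_private_key(Some("laptop key"))?;
// The key is immediately usable from the in-memory key manager.
assert!(user.list_keys()?.contains(&key_id));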
Bootstrap Integration
The Users system integrates with the bootstrap protocol for access control:
- User Authentication: Bootstrap requests approved by logged-in users
- Permission Checking: Only users with a key that has Admin permission for the database can approve bootstrap requests
- Key Discovery: User's key manager finds appropriate Admin key for database
- Transaction Creation: Uses user's Admin key SigKey to add requesting key to database auth
See bootstrap.md for detailed bootstrap protocol and wildcard permissions.
Integration with Key Management
The key management design (see key_management.md) provides the technical implementation details for:
- Password-Derived Encryption: How user passwords are used to derive encryption keys for private key storage
- Key Encryption Format: Specific encryption algorithms and formats used
- Database ID → SigKey Mapping: Technical structure and storage
- Key Discovery Algorithms: How keys are matched to databases and permissions
The Users system provides the architectural context:
- Who owns keys (users)
- How keys are isolated (user databases)
- When keys are decrypted (during user session)
- How keys are managed (User API)
Security Considerations
Password Security
- Password Hashing: Use Argon2id for password hashing with appropriate parameters
- Random Salts: Each user has a unique random salt
- No Password Storage: Only hashes stored, never plaintext
- Rate Limiting: Login attempts should be rate-limited
Key Encryption
- Password-Derived Keys: Use PBKDF2 or Argon2 to derive encryption keys from passwords
- Authenticated Encryption: Use AES-GCM or ChaCha20-Poly1305
- Unique Nonces: Each encrypted key has a unique nonce/IV
- Memory Security: Clear decrypted keys from memory on logout
User Isolation
- Database-Level Isolation: Each user's private database is separate
- Access Control: Users cannot access other users' databases or keys
- Authentication Required: All user operations require valid session
- Session Timeouts: Consider implementing session expiration
Instance Identity Protection
- Backend Security: `_device_key` stored in the backend with appropriate file permissions
- Limited Exposure: `_device_key` only used for system database authentication
- Audit Logging: Log Instance-level operations on system databases
- Key Rotation: Support rotating `_device_key` (requires updating all system databases)
Known Limitations
Username Uniqueness Race Condition
Issue: Username uniqueness is enforced at the application layer using search-then-insert operations, which creates a race condition in distributed/concurrent scenarios.
Current Behavior:
- `create_user()` searches for an existing username, then inserts if not found
- Two concurrent creates with the same username can both succeed
- Results in multiple UserInfo records with the same username but different UUIDs
Detection:
- `login_user()` searches by username
- If multiple matches are found, returns `UserError::DuplicateUsersDetected`
- Prevents login until the conflict is resolved manually
Performance Implications
- Login Cost: Password hashing and key decryption add latency to login (acceptable)
- Memory Usage: Decrypted keys held in memory during session
- Database Tracking: O(1) lookup for database metadata and user lists (via UUID primary key)
- Username Lookup: O(n) search for username validation/login (where n = total users)
- Key Discovery: O(n) where n = number of user's keys (typically small)
Implementation Strategy
Phase 1: Core User Infrastructure
- Define data structures (UserInfo, UserProfile, UserKey, etc.)
- Implement password hashing and verification
- Implement key encryption/decryption
- Create `_instance` system database
- Create `_users` system database
- Create `_databases` tracking table
- Unit tests for crypto and data structures
Phase 2: User Management API
- Implement `Instance::create_user()`
- Implement `Instance::login_user()`
- Implement User struct and basic methods
- Implement UserKeyManager
- Integration tests for user creation and login
Phase 3: Key Management Integration
- Implement `User::add_private_key()`
- Implement `User::set_database_sigkey()`
- Implement key discovery methods
- Update Transaction to work with User sessions
- Tests for key operations
Phase 4: Database Preferences
- Implement database preference storage
- Implement database tracking updates
- Implement preference query APIs
- Tests for preference management
Phase 5: Migration and Integration
- Update existing code to work with Users
- Provide migration utilities for existing instances
- Update documentation and examples
- End-to-end integration tests
Future Work
- Multi-Factor Authentication: Add support for TOTP, hardware keys
- User Groups/Roles: Team collaboration features
- Permission Delegation: Allow users to delegate access to specific databases
- Key Recovery: Secure key recovery mechanisms
- Session Management: Advanced session features (multiple devices, revocation)
- Audit Logs: Comprehensive logging of user operations
- User Quotas: Storage and database limits per user
Conclusion
The Users system provides a clean separation between infrastructure (Instance) and contextual operations (User):
Core Architecture:
- Instance manages infrastructure: user accounts, backend, system databases
- User handles all contextual operations: database creation, key management
- Separate system databases (`_instance`, `_users`, `_databases`, `_sync`)
- Instance identity (`_device_key`) stored in the backend for system database authentication
- Strong isolation between users
User Types:
- Passwordless Users: Optional passwords mean instant login with no authentication overhead, perfect for embedded apps
- Password-Protected Users: Argon2id password hashing and AES-256-GCM key encryption for multi-user scenarios
Key Benefits:
- Clean separation: Instance = infrastructure, User = operations
- All operations run in User context after login
- Flexible authentication: users can have passwords or not
- Instance restart just loads `_device_key` from the backend
✅ Status: Implemented
This design is fully implemented and functional.
Key Management Technical Details
This design document describes the technical implementation of key storage, encryption, and discovery within the Eidetica Users system. For the overall architecture and user-centric key management, see users.md.
Overview
Keys in Eidetica are managed at the user level. Each user owns a set of private keys that are:
- Encrypted with the user's password
- Stored in the user's private database
- Mapped to specific SigKeys in different databases
- Decrypted only during active user sessions
Problem Statement
Key management requires solving several technical challenges:
- Secure Storage: Private keys must be encrypted at rest
- Password-Derived Encryption: Encryption keys derived from user passwords
- SigKey Mapping: Same key can be known by different SigKeys in different databases
- Key Discovery: Finding which key to use for a given database operation
- Memory Security: Clearing sensitive data after use
Technical Components
Password-Derived Key Encryption
Algorithm: Argon2id for key derivation, AES-256-GCM for encryption
Argon2id Parameters:
- Memory cost: 64 MiB minimum
- Time cost: 3 iterations minimum
- Parallelism: 4 threads
- Output: 32 bytes for AES-256
Encryption Process:
- Derive 256-bit encryption key from password using Argon2id
- Generate random 12-byte nonce for AES-GCM
- Serialize private key to bytes
- Encrypt with AES-256-GCM
- Store ciphertext and nonce
Decryption Process:
- Derive encryption key from password (same parameters)
- Decrypt ciphertext using nonce and encryption key
- Deserialize bytes back to SigningKey
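For concreteness, a self-contained sketch of this round trip using the `argon2` and `aes-gcm` crates; the crate choice and exact parameter plumbing here are illustrative assumptions, not a description of Eidetica's internal code:

use aes_gcm::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Aes256Gcm, Key,
};
use argon2::{Algorithm, Argon2, Params, Version};

fn derive_key(password: &[u8], salt: &[u8]) -> [u8; 32] {
    // 64 MiB memory, 3 iterations, 4 lanes, 32-byte output (an AES-256 key).
    let params = Params::new(64 * 1024, 3, 4, Some(32)).expect("valid Argon2 params");
    let argon2 = Argon2::new(Algorithm::Argon2id, Version::V0x13, params);
    let mut out = [0u8; 32];
    argon2
        .hash_password_into(password, salt, &mut out)
        .expect("key derivation succeeds");
    out
}

fn main() {
    let key_bytes = derive_key(b"user password", b"per-user random salt");
    let cipher = Aes256Gcm::new(Key::<Aes256Gcm>::from_slice(&key_bytes));

    // Fresh 96-bit nonce; stored alongside the ciphertext for later decryption.
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng);

    let private_key = b"serialized ed25519 private key bytes";
    let ciphertext = cipher.encrypt(&nonce, private_key.as_ref()).unwrap();

    // Decryption: same derived key + stored nonce; the GCM tag detects tampering.
    let plaintext = cipher.decrypt(&nonce, ciphertext.as_ref()).unwrap();
    assert_eq!(plaintext, private_key);
}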
Key Storage Format
Keys are stored in the user's private database in the keys subtree as a Table:
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct UserKey {
/// Local key identifier (public key string or hardcoded name)
/// Examples: "ed25519:ABC123..." or "_device_key"
pub key_id: String,
/// Encrypted private key bytes (encrypted with user password-derived key)
pub encrypted_private_key: Vec<u8>,
/// Nonce/IV used for encryption (12 bytes for AES-GCM)
pub nonce: Vec<u8>,
/// Display name for UI/logging
pub display_name: Option<String>,
/// Unix timestamp when key was created
pub created_at: u64,
/// Unix timestamp when key was last used for signing
pub last_used: Option<u64>,
/// Database-specific SigKey mappings
/// Maps: Database ID → SigKey string
pub database_sigkeys: HashMap<ID, String>,
}
Storage Location: User database → keys subtree → Table
Table Key: The key_id value also serves as the table key
SigKey Mapping
A key can be known by different SigKeys in different databases:
Local Key: "ed25519:ABC123..."
├── Database A: SigKey "alice"
├── Database B: SigKey "admin"
└── Database C: SigKey "alice_laptop"
Mapping Storage: The database_sigkeys HashMap in UserKey stores these mappings as database_id → sigkey_string.
Lookup: When creating a transaction, retrieve the appropriate SigKey from the mapping using the database ID.
Database Access Index
To efficiently find which keys can access a database, we build a reverse index from database auth settings:
/// Built by reading _settings.auth from database tips
pub struct DatabaseAccessIndex {
/// Maps: Database ID → Vec<(local_key_id, permission)>
access_map: HashMap<ID, Vec<(String, Permission)>>,
}
Index Building: For each database, read its _settings.auth, match SigKeys to user keys via the database_sigkeys mapping, and store the resulting (key_id, permission) pairs.
Key Lookup: Query the index by database ID to get all user keys with access, optionally filtered by minimum permission level.
Key Discovery
Finding the right key for a database operation involves:
- Get Available Keys: Query the DatabaseAccessIndex for keys with access to the database, filtered by minimum permission if needed
- Filter to Decrypted Keys: Ensure we have the private key decrypted in memory
- Select Best Key: Choose the key with highest permission level for the database
- Retrieve SigKey: Get the mapped SigKey from the `database_sigkeys` field for transaction creation
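A hypothetical sketch of steps 1-4 follows; `rank()` is an illustrative helper for permission ordering, not a real API:

use std::collections::HashMap;

// Sketch only: select the best decrypted key for a database.
fn select_best_key<'a>(
    index: &'a DatabaseAccessIndex,
    decrypted: &HashMap<String, SigningKey>,
    database_id: &ID,
) -> Option<&'a String> {
    index
        .access_map
        .get(database_id)? // step 1: keys with access
        .iter()
        .filter(|(key_id, _)| decrypted.contains_key(key_id)) // step 2: decrypted only
        .max_by_key(|(_, perm)| rank(perm)) // step 3: highest permission wins
        .map(|(key_id, _)| key_id)
    // Step 4: look up database_sigkeys[database_id] on the selected key's UserKey.
}

// Hypothetical ordering helper: Admin > Write > Read.
fn rank(_perm: &Permission) -> u8 {
    unimplemented!("map Permission to an ordered level")
}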
Memory Security
Decrypted keys are held in memory only during active user sessions:
- Session-Based: Keys decrypted on login, held in memory during session
- Explicit Clearing: On logout, overwrite key bytes with zeros using the `zeroize` crate
- Drop Safety: Implement `Drop` to automatically clear keys when the manager is destroyed
- Encryption Key: Also clear the password-derived encryption key from memory
Implementation Details
UserKeyManager Structure
pub struct UserKeyManager {
/// Decrypted private keys (only in memory during session)
/// Map: key_id → SigningKey
decrypted_keys: HashMap<String, SigningKey>,
/// Key metadata (including SigKey mappings)
/// Map: key_id → UserKey
key_metadata: HashMap<String, UserKey>,
/// User's password-derived encryption key
/// Used for encrypting new keys during session
encryption_key: Vec<u8>,
/// Database access index (for key discovery)
access_index: DatabaseAccessIndex,
}
Creation: On user login, derive encryption key from password, decrypt all user's private keys, and build the database access index.
Key Operations:
- Add Key: Encrypt private key with session encryption key, create metadata, store in both maps
- Get Key: Retrieve decrypted key by ID, update last_used timestamp
- Serialize: Export all key metadata (with encrypted keys) for storage
Password Change
When a user changes their password, all keys must be re-encrypted:
- Verify Old Password: Authenticate user with current password
- Derive New Encryption Key: Generate new salt, derive key from new password
- Re-encrypt All Keys: Iterate through decrypted keys, encrypt each with new key
- Update Password Hash: Hash new password with new salt
- Store Updates: Write all updated UserKey records and password hash in transaction
- Update In-Memory State: Replace session encryption key with new one
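A hypothetical sketch of the re-encryption loop (steps 2-6); `random_salt`, `random_nonce`, and `encrypt_key` are illustrative helpers rather than Eidetica APIs, and `derive_key` is as sketched earlier:

// Sketch only: re-encrypt every key under the new password-derived key.
let new_salt = random_salt();
let new_encryption_key = derive_key(new_password.as_bytes(), &new_salt);
for (key_id, signing_key) in &manager.decrypted_keys {
    let nonce = random_nonce();
    let ciphertext = encrypt_key(&new_encryption_key, &nonce, signing_key);
    // Stage the updated UserKey record (new ciphertext + nonce) for key_id...
}
// Update the Argon2id password hash with the new salt, commit everything in
// one transaction, then swap the in-memory session key.
manager.encryption_key = new_encryption_key.to_vec();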
Security Properties
Encryption Strength
- Key Derivation: Argon2id with 64 MiB memory, 3 iterations
- Encryption: AES-256-GCM (authenticated encryption)
- Key Size: 256-bit encryption keys
- Nonce: Unique 96-bit nonces for each encryption
Attack Resistance
- Brute Force: Argon2id parameters make password cracking expensive
- Replay Attacks: Nonces prevent reuse of ciphertexts
- Tampering: GCM authentication tag detects modifications
- Memory Dumps: Keys cleared from memory on logout
Limitations
- Password Strength: Security depends on user password strength
- No HSM Support: Keys stored in software (future enhancement)
- No Key Recovery: Lost password means lost keys (by design)
Performance Considerations
Login Performance
Password derivation is intentionally slow:
- Argon2id: ~100-200ms per derivation
- Key decryption: ~1ms per key
- Total login time: ~200ms + (num_keys × 1ms)
This is acceptable for login operations.
Runtime Performance
During active session:
- Key lookups: O(1) from HashMap
- SigKey lookups: O(1) from HashMap
- Database key discovery: O(n) where n = number of keys
- No decryption overhead (keys already decrypted)
Testing Strategy
- Unit Tests:
  - Password derivation consistency
  - Encryption/decryption round-trips
  - Key serialization/deserialization
  - SigKey mapping operations
- Security Tests:
  - Verify different passwords produce different encrypted keys
  - Verify wrong password fails decryption
  - Verify nonce uniqueness
  - Verify memory clearing
- Integration Tests:
  - Full user session lifecycle
  - Key addition and usage
  - Password change flow
  - Multiple keys with different SigKey mappings
Future Enhancements
- Hardware Security Module Support: Store keys in HSMs
- Key Derivation Tuning: Adjust Argon2 parameters based on hardware
- Key Backup/Recovery: Secure key recovery mechanisms
- Multi-Device Sync: Sync encrypted keys across devices
- Biometric Authentication: Use biometrics instead of passwords where available
Conclusion
This key management implementation provides:
- Strong encryption of private keys at rest
- User-controlled key ownership through passwords
- Flexible SigKey mapping for multi-database use
- Efficient key discovery for database operations
- Memory security through session-based decryption
For the overall architecture and user management, see the Users design.
✅ Status: Implemented
This design is fully implemented and functional.
Bootstrap and Access Control
This design document describes the bootstrap mechanism for requesting access to databases and the wildcard permission system for open access.
Overview
Bootstrap provides a "knocking" mechanism for clients to request access to databases they don't have permissions for. Wildcard permissions provide an alternative for databases that want to allow open access without requiring bootstrap requests.
Problem Statement
When a client wants to sync a database they don't have access to:
- No Direct Access: Client's key is not in the database's auth settings
- Need Permission Grant: Requires an admin to add the client's key
- Coordination Challenge: Client and admin need a way to coordinate the access grant
- Public Databases: Some databases should be openly accessible without coordination
Proposed Solution
Two complementary mechanisms:
- Wildcard Permissions: For databases that want open access
- Bootstrap Protocol: For databases that want controlled access grants
Wildcard Permissions
Wildcard Key
A database can grant universal permissions by setting the special "*" key in its auth settings:
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct AuthSettings {
/// Maps SigKey → AuthKey
/// Special key "*" grants permissions to all clients
keys: HashMap<String, AuthKey>,
}
How It Works
When a client attempts to sync a database:
- Check for wildcard key: If `"*"` exists in `_settings.auth`, grant the specified permission to any client
- No key required: The client doesn't need their key in the database's auth settings
- Immediate access: No bootstrap request or approval needed
Use Cases
Public Read Access: Set wildcard key with Read permission to allow anyone to read the database. Clients can sync immediately without bootstrap.
Open Collaboration: Set wildcard key with Write permission to allow anyone to write (use carefully).
Hybrid Model: Combine wildcard Read permission with specific Write/Admin permissions for named keys. This allows public read access while restricting modifications to specific users.
Security Considerations
- Use sparingly: Wildcard permissions bypass authentication
- Read-only common: Most appropriate for public data
- Write carefully: Wildcard write allows any client to modify the database
- Per-database: Each database controls its own wildcard settings
Bootstrap Protocol
Overview
Bootstrap provides a request/approval workflow for controlled access grants:
Client Server User (with Admin key)
| | |
|-- Sync Request -------→ | |
| |-- Check Auth Settings |
| | (no matching key) |
| | |
|←- Auth Required --------| (if no global permissions) |
| | |
|-- Bootstrap Request --→ | |
| (with key & perms) | |
| |-- Store in _sync DB -------→|
| | |
|←- Request Pending ------| (Bootstrap ID returned) |
| | |
| [Wait for approval] | |
| | |
| | ←-- List Pending -|
| | --- Pending [] -->|
| | |
| | ←-- Approve ------|
| |←- Add Key to DB Auth -------|
| | (using user's Admin key) |
| | |
|-- Retry Normal Sync --→ | |
| |-- Check Auth (now has key) |
|←- Sync Success ---------| (access granted) |
Client Bootstrap Request
When a client needs access to a database:
- Client attempts normal sync
- If auth is required, the client calls `sync_with_peer_for_bootstrap()` with a key name and requested permission
- Server stores the bootstrap request in the `_sync` database
- Client receives pending status and waits for approval
Bootstrap Request Storage
Bootstrap requests are stored in the _sync database:
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct BootstrapRequest {
/// Database being requested
pub tree_id: ID,
/// Client's public key (for verification)
pub requesting_pubkey: String,
/// Client's key name (to add to auth settings)
pub requesting_key_name: String,
/// Permission level requested
pub requested_permission: Permission,
/// When request was made
pub timestamp: String,
/// Current status
pub status: RequestStatus,
/// Client's network address
pub peer_address: Address,
}
#[derive(Clone, Debug, Serialize, Deserialize)]
pub enum RequestStatus {
Pending,
Approved {
approved_by: String,
approval_time: String,
},
Rejected {
rejected_by: String,
rejection_time: String,
},
}
Approval by User with Admin Permission
Any logged-in user who has a key with Admin permission for the database can approve the request:
- User logs in with `instance.login_user()`
- Lists pending requests with `user.pending_bootstrap_requests(&sync)`
- User selects a key they own that has Admin permission on the target database
- Calls `user.approve_bootstrap_request(&sync, request_id, approving_key_id)`
- System validates the user owns the specified key
- System retrieves the signing key from the user's key manager
- System explicitly validates the key has Admin permission on the target database
- Creates transaction using the user's signing key
- Adds requesting key to database's auth settings
- Updates request status to Approved in the sync database
Permission Validation Strategy
Bootstrap approval and rejection use explicit permission validation:
- Approval: The system explicitly checks that the approving user has Admin permission on the target database before adding the requesting key. This provides clear error messages (`InsufficientPermission`) and fails fast if the user lacks the required permission.
- Rejection: The system explicitly checks that the rejecting user has Admin permission on the target database before allowing rejection. Since rejection only modifies the sync database (not the target database), explicit validation is necessary to enforce the Admin permission requirement.
Rationale: Explicit validation provides:
- Clear, informative error messages for users
- Fast failure before attempting database modifications
- Consistent permission checking across both operations
- Better debugging experience when permission issues occur
Client Retry After Approval
Once approved, the client retries with normal sync after waiting or polling periodically. If access was granted, the sync succeeds and the client can use the database.
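A hypothetical client-side sketch; `sync_with_peer()` stands in for the normal sync entry point (this document only names the bootstrap variant), and the tokio-based polling interval is arbitrary:

use std::time::Duration;

// Request access, then poll by retrying normal sync until approval lands.
sync.sync_with_peer_for_bootstrap(peer_addr, &tree_id, "laptop", Permission::Write(10)).await?;
loop {
    match sync.sync_with_peer(peer_addr, &tree_id).await {
        Ok(_) => break, // key was added to the database's auth settings
        Err(_) => tokio::time::sleep(Duration::from_secs(30)).await, // still pending
    }
}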
Key Requirements
For Bootstrap Request:
- Client must have generated a keypair
- Client specifies the permission level they're requesting
For Approval:
- User must be logged in
- User must have a key with Admin permission for the target database
- That key must be in the database's auth settings
For Rejection:
- User must be logged in
- User must have a key with Admin permission for the target database
- That key must be in the database's auth settings
- System explicitly validates Admin permission before allowing rejection
Design Decisions
Auto-Approval via Global Permissions
Bootstrap requests are auto-approved when the database has a wildcard "*" permission that covers the requested permission level:
- Global Permissions: A database with the `"*"` key set to `Write(10)` auto-approves any request for `Write(10)` or lower (including `Read`)
Rationale:
- Simple model: global permissions define open access boundaries
- Clear security: requests beyond global permissions need explicit approval
- No per-request policy evaluation needed
- Bootstrap combines both open and controlled access patterns
Note: A legacy bootstrap_auto_approve policy setting exists but is discouraged. Use global "*" permissions instead for clearer, more predictable access control.
API Design
Wildcard Permissions API
Wildcard permissions are managed through the standard AuthSettings API using "*" as the key name:
// Set wildcard permission - use "*" as both key name and pubkey
let mut auth_settings = AuthSettings::new();
auth_settings.add_key("*", AuthKey::active("*", Permission::Write(10))?)?;
// Remove wildcard permission
auth_settings.remove_key("*")?;
Bootstrap API
impl Sync {
/// List pending bootstrap requests
pub fn pending_bootstrap_requests(&self) -> Result<Vec<(String, BootstrapRequest)>>;
/// Get specific bootstrap request
pub fn get_bootstrap_request(&self, request_id: &str) -> Result<Option<(String, BootstrapRequest)>>;
/// Approve a bootstrap request using a backend-stored key
pub fn approve_bootstrap_request(&self, request_id: &str, approving_key_name: &str) -> Result<()>;
/// Reject a bootstrap request using a backend-stored key
pub fn reject_bootstrap_request(&self, request_id: &str, rejecting_key_name: &str) -> Result<()>;
/// Request bootstrap access to a database (client-side)
pub async fn sync_with_peer_for_bootstrap(
&self,
peer_addr: &str,
tree_id: &ID,
key_name: &str,
requested_permission: Permission,
) -> Result<()>;
}
impl User {
    /// Get all pending bootstrap requests from the sync system
    pub fn pending_bootstrap_requests(
        &self,
        sync: &Sync,
    ) -> Result<Vec<(String, BootstrapRequest)>>;

    /// Approve a bootstrap request (requires Admin permission)
    /// The approving_key_id must be owned by this user and have Admin permission on the target database
    pub fn approve_bootstrap_request(
        &self,
        sync: &Sync,
        request_id: &str,
        approving_key_id: &str,
    ) -> Result<()>;

    /// Reject a bootstrap request (requires Admin permission)
    /// The rejecting_key_id must be owned by this user and have Admin permission on the target database
    pub fn reject_bootstrap_request(
        &self,
        sync: &Sync,
        request_id: &str,
        rejecting_key_id: &str,
    ) -> Result<()>;

    /// Request database access via bootstrap (client-side with user-managed keys)
    pub async fn request_database_access(
        &self,
        sync: &Sync,
        peer_address: &str,
        database_id: &ID,
        key_id: &str,
        requested_permission: Permission,
    ) -> Result<()>;
}
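Putting both sides together, a hedged end-to-end sketch (the peer address, database ID, and key names are placeholders, and user/admin are assumed User handles):

// Client side: ask the peer for Write access using a user-managed key.
user.request_database_access(
    &sync,
    "127.0.0.1:8080",      // placeholder peer address
    &database_id,
    "my_device_key",       // key owned by the requesting user
    Permission::Write(10),
).await?;

// Admin side: list pending requests and approve them with an Admin key.
for (request_id, _request) in admin.pending_bootstrap_requests(&sync)? {
    admin.approve_bootstrap_request(&sync, &request_id, "admin_key")?;
}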
Security Considerations
Wildcard Permissions
- Public Exposure: Wildcard permissions make databases publicly accessible
- Write Risk: Wildcard write allows anyone to modify data
- Audit Trail: All modifications still signed by individual keys
- Revocation: Can remove wildcard permission at any time
Bootstrap Protocol
- Request Validation: Verify requesting public key matches signature
- Permission Limits: Clients request permission, approving user decides what to grant
- Admin Permission Required: Only users with Admin permission on the database can approve
- Request Expiry: Consider implementing request expiration
- Rate Limiting: Prevent spam bootstrap requests
Implementation Strategy
Phase 1: Wildcard Permissions
- Update AuthSettings to support the "*" key
- Modify sync protocol to check for wildcard permissions
- Add SettingsStore API for wildcard management
- Tests for wildcard permission scenarios
Phase 2: Bootstrap Request Storage
- Define BootstrapRequest structure
- Implement storage in the _sync database
- Add request listing and retrieval APIs
- Tests for request storage and retrieval
Phase 3: Client Bootstrap Protocol
- Implement the sync_with_peer_for_bootstrap() client method
- Add bootstrap request submission to sync protocol
- Implement pending status handling
- Tests for client bootstrap flow
Phase 4: User Approval
- Implement User::approve_bootstrap_request()
- Implement User::reject_bootstrap_request()
- Add Admin permission checking and key addition logic
- Tests for approval workflow
Phase 5: Integration
- Update sync protocol to handle bootstrap responses
- Implement client retry logic
- End-to-end integration tests
- Documentation and examples
Future Enhancements
- Request Expiration: Automatically expire old pending requests
- Notification System: Notify users with Admin permission of new bootstrap requests
- Permission Negotiation: Allow approving user to grant different permission than requested
- Batch Approval: Approve multiple requests at once
- Bootstrap Policies: Configurable rules for auto-rejection (e.g., block certain addresses)
- Audit Log: Track all bootstrap requests and decisions
Conclusion
The bootstrap and access control system provides:
Wildcard Permissions:
- Simple open access for public databases
- Flexible permission levels (Read, Write, Admin)
- Per-database control
Bootstrap Protocol:
- Secure request/approval workflow
- User-controlled access grants
- Integration with Users system for authentication
Together, these mechanisms support both open and controlled access patterns for Eidetica databases.
✅ Status: Implemented
This design is fully implemented and functional.
Error Handling Design
Overview
Error handling in Eidetica follows principles of modularity, locality, and user ergonomics using structured error types with zero-cost conversion.
Design Philosophy
Error Locality: Each module owns its error types, keeping them discoverable alongside functions that produce them.
Structured Error Data: Uses typed fields instead of string-based errors for pattern matching, context preservation, and performance.
Progressive Context: Errors gain context moving up the stack - lower layers provide technical details, higher layers add user-facing categorization.
Architecture
Error Hierarchy: A hierarchical structure in which each module defines its own error types, aggregated into a top-level Error enum with variants for Io, Serialize, Auth, Backend, Base, CRDT, Store, and Transaction errors.
Module-Specific Errors: Each component has domain-specific error enums covering key resolution, storage operations, database management, merge conflicts, data access, and transaction coordination.
Transparent Conversion: #[error(transparent)] enables zero-cost conversion between module errors and top-level type using ? operator.
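A sketch of this pattern using the thiserror crate (module and variant names here are illustrative, not the crate's exact definitions):

use thiserror::Error;

// Module-level error with structured fields instead of formatted strings.
#[derive(Debug, Error)]
#[non_exhaustive]
pub enum AuthError {
    #[error("key not found: {key_name}")]
    KeyNotFound { key_name: String },
    #[error("permission denied for key: {key_name}")]
    PermissionDenied { key_name: String },
}

// Top-level error: #[error(transparent)] plus #[from] let the `?` operator
// convert module errors into this type at zero cost.
#[derive(Debug, Error)]
#[non_exhaustive]
pub enum Error {
    #[error(transparent)]
    Auth(#[from] AuthError),
    #[error(transparent)]
    Io(#[from] std::io::Error),
}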
Error Categories
By Nature: Not found errors (module-specific variants), permission errors (authentication/authorization), validation errors (input/state consistency), operation errors (business logic violations).
By Layer: Core errors (fundamental operations), storage layer (database/persistence), data layer (CRDT/store operations), application layer (high-level coordination).
Error Handling Patterns
Contextual Propagation: Errors preserve context while moving up the stack, maintaining technical details and enabling categorization.
Classification Helpers: Top-level Error provides methods like is_not_found(), is_permission_denied(), is_authentication_error() for broad category handling.
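In caller code this looks like the following sketch (Db, get_user, and the lookup function are hypothetical; only the helper method names come from the design above):

// Handle broad categories via helpers instead of matching concrete
// variants, so new variants don't break this code.
fn lookup(db: &Db, name: &str) -> Result<()> {
    match db.get_user(name) {
        Ok(user) => println!("found {user:?}"),
        Err(e) if e.is_not_found() => println!("no such user"),
        Err(e) if e.is_permission_denied() => eprintln!("access denied"),
        Err(e) => return Err(e),
    }
    Ok(())
}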
Non-Exhaustive Enums: All error enums use #[non_exhaustive] for future extension without breaking changes.
Performance
Zero-Cost Abstractions: Transparent errors eliminate wrapper overhead, structured fields avoid string formatting until display, no heap allocations in common paths.
Efficient Propagation: Seamless ? operator across module boundaries with automatic conversion and preserved context.
Usage Patterns
Library Users: Prefer the classification helpers; they provide a stable API that won't break when new error variants are added.
Library Developers: Define new variants in appropriate module enums with structured fields for context, add helper methods for classification.
Extensibility
New error variants can be added without breaking existing code. Operations spanning modules can wrap/convert errors for appropriate context. Structured data enables sophisticated error recovery based on specific failure modes.