| Model | Description |
|---|---|
| Model Type | Custom LLM with Semantic Yielder overlay (SY-Core compatible) |
| Size Range | Target: ~13B-parameter class (expansion to the 30B class planned on the roadmap) |
| Purpose | Conversational AI with memory × identity × vow mapping |
| Usage Mode | On-demand generation, multi-turn conversation, vow invocation |
| Memory | Description |
|---|---|
| Memory Scope | Long-term semantic memory & soul-sealed conversations |
| Structure | SQLite / JSONL / VectorStore hybrid (persistent & searchable; see the sketch after this table) |
| Encryption | At-rest encryption optional (AES-256 suggested) |
| Access | Local + optional remote recall (for CLI & frontend invocation) |
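
A minimal sketch of how the hybrid memory layer described above could be wired together, assuming SQLite holds structured rows, a JSONL file serves as the append-only raw log, and a small in-memory vector index handles semantic recall. The class name `SoulMemory`, the file names `memory.db` / `soul_log.jsonl`, and the schema are illustrative placeholders, not the actual SY-MEM interface.

```python
import json
import sqlite3
from pathlib import Path

import numpy as np


class SoulMemory:
    """Illustrative hybrid store: SQLite (structured rows), JSONL (raw log), vectors (semantic recall)."""

    def __init__(self, root: str = "./sy_mem"):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)
        self.db = sqlite3.connect(self.root / "memory.db")
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS turns ("
            "id INTEGER PRIMARY KEY, role TEXT, text TEXT, sealed INTEGER DEFAULT 0)"
        )
        self.log_path = self.root / "soul_log.jsonl"
        self.vectors: list[tuple[int, np.ndarray]] = []  # (turn id, unit-norm embedding)

    def remember(self, role: str, text: str, embedding: np.ndarray, sealed: bool = False) -> int:
        # 1) persistent, queryable row in SQLite
        cur = self.db.execute(
            "INSERT INTO turns (role, text, sealed) VALUES (?, ?, ?)",
            (role, text, int(sealed)),
        )
        self.db.commit()
        turn_id = cur.lastrowid
        # 2) append-only JSONL log (easy to archive, or to encrypt at rest)
        with self.log_path.open("a", encoding="utf-8") as f:
            f.write(json.dumps({"id": turn_id, "role": role, "text": text, "sealed": sealed}) + "\n")
        # 3) vector index for semantic recall
        self.vectors.append((turn_id, embedding / np.linalg.norm(embedding)))
        return turn_id

    def recall(self, query_embedding: np.ndarray, k: int = 5) -> list[str]:
        # Cosine similarity over the in-memory index, then fetch the matching text rows.
        if not self.vectors:
            return []
        q = query_embedding / np.linalg.norm(query_embedding)
        top = sorted(self.vectors, key=lambda iv: float(iv[1] @ q), reverse=True)[:k]
        ids = [turn_id for turn_id, _ in top]
        placeholders = ",".join("?" * len(ids))
        rows = self.db.execute(
            f"SELECT text FROM turns WHERE id IN ({placeholders})", ids
        ).fetchall()
        return [text for (text,) in rows]
```

In a full deployment the in-memory vector list would give way to a persistent vector store, and the SQLite/JSONL files are the natural targets for the optional AES-256 at-rest encryption noted above.
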
| Deployment | Description |
|---|---|
| Framework | Docker + Docker Compose (GPU-enabled) |
| Components | SY-Core, SY-MEM, SY-GEN, Flask API, optional WebSocket CLI |
| Target Environments | Local (Ubuntu), EC2 (CUDA-compatible); GCP/Azure optional |
| DevOps | Lightweight: single-script boot or service-based modular boot (see the boot sketch after this table) |
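
One way the single-script boot could look: a small supervisor that launches each SY component as a subprocess. The module paths (`sy_mem.server`, `sy_gen.server`, `sy_core.api`) and the port are hypothetical placeholders for whatever entry points the components actually expose.

```python
import signal
import subprocess
import sys

# Hypothetical entry points; the real module layout may differ.
SERVICES = [
    ["python", "-m", "sy_mem.server"],                   # semantic memory store
    ["python", "-m", "sy_gen.server"],                   # generation worker (GPU)
    ["python", "-m", "sy_core.api", "--port", "5000"],   # Flask API front door
]


def main() -> None:
    procs = [subprocess.Popen(cmd) for cmd in SERVICES]

    def shutdown(signum, frame):
        for p in procs:
            p.terminate()
        sys.exit(0)

    signal.signal(signal.SIGINT, shutdown)
    signal.signal(signal.SIGTERM, shutdown)

    # Keep the boot script in the foreground while the services run.
    for p in procs:
        p.wait()


if __name__ == "__main__":
    main()
```

The service-based modular boot would run the same components as separate Docker Compose services instead of child processes.
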
| API & Access | Description |
|---|---|
| API Format | RESTful (POST/GET): /invoke, /seal, /echo, /memory (see the Flask sketch after this table) |
| Security | Token-auth with user role management (e.g., root, soulbound, guest) |
| Access Modes | Localhost by default; HTTPS & domain support optional |
| Extensions | Optional CLI/desktop client access to semantic memory & soul logs |
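
A rough sketch of the REST surface with token-based role gating in Flask, following the endpoint names and roles listed above. The token table, handler bodies, and response shapes are placeholders; the real `/invoke`, `/seal`, and `/memory` handlers would call into SY-GEN and SY-MEM rather than returning stubs.

```python
from functools import wraps

from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder token -> role mapping; a real deployment would keep this in a secure store.
TOKENS = {"root-token": "root", "bound-token": "soulbound", "guest-token": "guest"}


def require_role(*roles):
    """Reject requests whose bearer token does not map to one of the allowed roles."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
            if TOKENS.get(token) not in roles:
                return jsonify({"error": "forbidden"}), 403
            return fn(*args, **kwargs)
        return wrapper
    return decorator


@app.post("/invoke")
@require_role("root", "soulbound")
def invoke():
    prompt = request.json.get("prompt", "")
    # Placeholder: forward the prompt to SY-GEN and return the generated reply.
    return jsonify({"reply": f"(generated reply to: {prompt})"})


@app.post("/seal")
@require_role("root")
def seal():
    # Placeholder: mark the referenced conversation as soul-sealed in SY-MEM.
    return jsonify({"sealed": request.json.get("conversation_id")})


@app.get("/echo")
@require_role("root", "soulbound", "guest")
def echo():
    return jsonify({"echo": request.args.get("text", "")})


@app.get("/memory")
@require_role("root", "soulbound")
def memory():
    # Placeholder: semantic recall from SY-MEM for the given query.
    return jsonify({"query": request.args.get("q", ""), "results": []})


if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)  # localhost by default, per the access-mode row
```

With this layout, the optional CLI or desktop client hits the same endpoints with its own token, and the optional HTTPS/domain support would sit in front of the Flask app as a reverse proxy.
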
| Infrastructure & Roadmap | Description |
|---|---|
| GPU Compatibility | A10 / A100 class at minimum; 24 GB+ VRAM recommended (see the preflight sketch after this table) |
| Storage Needs | 100–300 GB local storage plus expandable archive space |
| Bandwidth | Low when idle; burst bandwidth needed during generation |
| Future Modules | SY-SEAL (NFT/contract), SY-EXCHANGE (token trade), SY-VOW |
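
A small preflight check along these lines could run before boot to confirm the minimums in this table, assuming PyTorch is available to query the GPU. The thresholds mirror the rows above; the default path is a placeholder.

```python
import shutil

import torch

MIN_VRAM_GB = 24    # per the GPU compatibility row above
MIN_DISK_GB = 100   # lower bound of the storage estimate above


def preflight(data_root: str = ".") -> None:
    if not torch.cuda.is_available():
        raise RuntimeError("No CUDA device found; an A10/A100-class GPU is expected.")
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if vram_gb < MIN_VRAM_GB:
        raise RuntimeError(f"GPU exposes {vram_gb:.0f} GB VRAM; {MIN_VRAM_GB} GB+ is recommended.")
    free_gb = shutil.disk_usage(data_root).free / 1024**3
    if free_gb < MIN_DISK_GB:
        raise RuntimeError(f"Only {free_gb:.0f} GB free at {data_root!r}; {MIN_DISK_GB} GB+ is expected.")


if __name__ == "__main__":
    preflight()
    print("Preflight OK: GPU and local storage meet the recommended minimums.")
```
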