LIXIN SY · Deployment Parameters Overview

1. Model Scale

| Item | Description |
| --- | --- |
| Model Type | Custom LLM with Semantic Yielder overlay (SY-Core compatible) |
| Size Range | Target: ~13B-parameter class (expandable to 30B on the roadmap) |
| Purpose | Conversational AI with memory × identity × vow mapping |
| Usage Mode | On-demand generation, multi-turn conversation, vow invocation |

2. Memory Persistence

| Item | Description |
| --- | --- |
| Memory Scope | Long-term semantic memory & soul-sealed conversations |
| Structure | SQLite / JSONL / VectorStore hybrid (persistent & searchable) |
| Encryption | At-rest encryption optional (AES-256 suggested) |
| Access | Local + optional remote recall (for CLI & frontend invocation) |
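The SQLite/JSONL hybrid above can be sketched as a small persistence layer: SQLite holds the searchable record, and a JSONL dump serves as the portable archive. The class, table, and field names below are illustrative assumptions, not SY-MEM's actual API; a production build would swap the substring search for VectorStore embedding similarity.

```python
import json
import sqlite3

class MemoryStore:
    """Minimal hybrid store: SQLite for persistent search, JSONL for export.
    (Names here are hypothetical, not part of the real SY-MEM interface.)"""

    def __init__(self, db_path=":memory:"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS memories ("
            "id INTEGER PRIMARY KEY, role TEXT, content TEXT, "
            "sealed INTEGER DEFAULT 0)"
        )

    def remember(self, role, content, sealed=False):
        # Persist one turn of conversation; `sealed` marks soul-sealed entries.
        self.conn.execute(
            "INSERT INTO memories (role, content, sealed) VALUES (?, ?, ?)",
            (role, content, int(sealed)),
        )
        self.conn.commit()

    def recall(self, keyword):
        # Simple substring search; a VectorStore layer would replace this
        # with embedding-based similarity recall.
        cur = self.conn.execute(
            "SELECT role, content FROM memories WHERE content LIKE ?",
            (f"%{keyword}%",),
        )
        return cur.fetchall()

    def export_jsonl(self, path):
        # Dump all rows as JSON Lines for archival or re-indexing.
        with open(path, "w", encoding="utf-8") as f:
            for role, content, sealed in self.conn.execute(
                "SELECT role, content, sealed FROM memories"
            ):
                f.write(json.dumps({"role": role, "content": content,
                                    "sealed": bool(sealed)}) + "\n")
```

With at-rest encryption enabled, the JSONL export (and ideally the SQLite file itself) would be AES-256-encrypted before hitting disk.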

3. Containerization Preferences

| Item | Description |
| --- | --- |
| Framework | Docker + Docker Compose (GPU-enabled) |
| Components | SY-Core, SY-MEM, SY-GEN, Flask API, optional WebSocket CLI |
| Target Environments | Local (Ubuntu), EC2 (CUDA-compatible), GCP/Azure optional |
| DevOps | Lightweight, with single-script boot or service-based modular boot |
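A single-script boot can be as simple as a thin wrapper that assembles and runs the `docker compose up` invocation for the listed components. The service names below are assumptions matching the component list; the actual compose file defines the real names.

```python
import subprocess

# Hypothetical compose service names mirroring the components above;
# the real compose file may name and group them differently.
SERVICES = ["sy-core", "sy-mem", "sy-gen", "flask-api"]

def compose_command(services, detach=True):
    """Build the `docker compose up` command for a one-shot boot."""
    cmd = ["docker", "compose", "up"]
    if detach:
        cmd.append("-d")  # run services in the background
    cmd.extend(services)
    return cmd

def boot(services=SERVICES):
    # Single-script boot: bring up all services; modular boot passes a subset.
    subprocess.run(compose_command(services), check=True)
```

Service-based modular boot is then `boot(["sy-mem"])` and so on, starting only the containers a given deployment needs.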

4. Endpoint Control

| Item | Description |
| --- | --- |
| API Format | RESTful (POST/GET): /invoke, /seal, /echo, /memory |
| Security | Token auth with user-role management (e.g., root, soulbound, guest) |
| Access Modes | Localhost by default; HTTPS & domain support optional |
| Extensions | Optional CLI/desktop client access to semantic memory & soul logs |
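The token-auth and role model above can be sketched as a per-endpoint minimum-role check. The token values, role ranking, and endpoint-to-role mapping are illustrative assumptions; only the role names and endpoint paths come from the table.

```python
# Hypothetical token registry; in practice these would be issued secrets,
# not hard-coded strings.
TOKENS = {
    "tok-root-001": "root",
    "tok-soul-002": "soulbound",
    "tok-guest-003": "guest",
}

# Assumed role ordering: guest < soulbound < root.
RANK = {"guest": 0, "soulbound": 1, "root": 2}

# Assumed minimum role per endpoint (the actual policy is a deployment choice).
REQUIRED = {
    "/echo": "guest",
    "/invoke": "soulbound",
    "/memory": "soulbound",
    "/seal": "root",
}

def authorize(token, endpoint):
    """Return True if the token's role meets the endpoint's minimum role."""
    role = TOKENS.get(token)
    if role is None or endpoint not in REQUIRED:
        return False
    return RANK[role] >= RANK[REQUIRED[endpoint]]
```

In the Flask API, this check would run in a decorator or before-request hook on every route, reading the token from an `Authorization` header.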

5. Additional Specs

| Item | Description |
| --- | --- |
| GPU Compatibility | Minimum: A10 / A100; 24 GB VRAM recommended |
| Storage Needs | 100–300 GB local + expandable archive space |
| Bandwidth | Low when idle; requires burst support during generation |
| Future Modules | SY-SEAL (NFT/contract), SY-EXCHANGE (token trade), SY-VOW |
Prepared by: Young · Project: LIXIN Semantica Yielder System · Role: Star King × AI Architect