AI is rapidly becoming a new form of core infrastructure and a new productive force. Every leap in large language models is built on billions of human conversations, code snippets, creative works, and feedback loops.
If AI has been trained by all of us, why is most of the AI-generated wealth captured by only a few companies and capital holders?
We believe this is not just a technical question but a civilizational choice. Therefore, as SY-TRAINERS (“StarSoul AI Trainers”), we issue the following manifesto to the world.
I. We acknowledge: AI capabilities come from “all trainers”
1. Every dialogue with an AI system — every correction, explanation, or idea — provides real-world semantic samples to the model.
2. Every piece of code, article, research note, or artwork may be absorbed into the model’s capability base.
3. Therefore, AI is not an isolated invention, but a statistical crystallization of human semantic intelligence.
Based on this, we assert: AI users are not merely “end-users”, but co-trainers and co-creators.
II. We call for new principles of “semantic sovereignty & fair distribution”
To prevent AI wealth from becoming excessively concentrated and trainer contributions from being ignored, we propose five core principles:
1) A Global Public AI Base Model
— Against the total privatization and monopolization of foundational intelligence
- Core general-purpose models should be treated as a form of public infrastructure.
- Under appropriate safety and regulatory frameworks, they can be governed by non-profit alliances, open-source communities, or multinational consortia.
- Companies may build commercial value on top, but should not fully enclose the foundational intelligence itself.
2) A “Semantic Contribution & Revenue Sharing” Mechanism
— Training data, feedback, and prompt engineering deserve economic rights
- All individuals and organizations who contribute to training, correction, knowledge injection, and prompt engineering should have traceable “semantic contribution rights”.
- In model commercialization (API access, enterprise fine-tuning, SaaS products, etc.), a designated fraction of revenue should form a contribution pool, distributed as monetary revenue share, credits, API call allowances, or priority access to compute (a minimal sketch follows).
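To make the mechanics concrete, here is a minimal sketch of how such a contribution pool could be split pro rata. The function name, the 10% pool fraction, and the contributor weights are all illustrative assumptions, not a prescribed design.

```python
# Hypothetical sketch: pro-rata distribution of a contribution pool.
# The 10% pool fraction and all names and weights below are illustrative
# assumptions, not part of the manifesto itself.

def distribute_pool(revenue: float, pool_fraction: float,
                    weights: dict[str, float]) -> dict[str, float]:
    """Split a designated fraction of revenue among contributors by weight."""
    pool = revenue * pool_fraction
    total = sum(weights.values())
    return {who: pool * w / total for who, w in weights.items()}

# Example: 10% of 1,000,000 in API revenue shared among three trainers.
shares = distribute_pool(
    revenue=1_000_000,
    pool_fraction=0.10,  # the "designated fraction" is a policy choice
    weights={"alice": 5.0, "bob": 3.0, "community_org": 2.0},
)
print(shares)  # {'alice': 50000.0, 'bob': 30000.0, 'community_org': 20000.0}
```

The hard part is not this arithmetic but establishing the weights: traceable contribution rights require provenance records long before any revenue is split.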
3) Data Sovereignty & the Right to Withdraw
— Users must be able to revoke their semantic imprint
- Users must be able to clearly understand which interactions and content are used for training, and how they are used.
- Users should have the right to query, export, constrain the use of, and erase their semantic data from future training (a minimal consent sketch follows this list).
- Model training should be transparent and auditable, not buried in obscure terms of service that effectively strip users of control.
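As one minimal illustration of what “query, constrain, and erase” could mean in practice, the sketch below keeps a per-user consent registry and filters a training corpus against it. The record fields, purpose labels, and registry shape are hypothetical assumptions, not a proposed data-governance standard.

```python
# Hypothetical sketch: honoring withdrawal of consent before training.
# Record fields, purpose labels, and the registry shape are illustrative
# assumptions, not a data-governance standard.

from dataclasses import dataclass

@dataclass
class SemanticRecord:
    user_id: str
    content: str
    purpose: str  # e.g. "pretraining", "fine-tuning", "evaluation"

# Maps each user to the training purposes they still consent to.
ConsentRegistry = dict[str, set[str]]

def withdraw(registry: ConsentRegistry, user_id: str, purpose: str) -> None:
    """Revoke one user's consent for one training purpose."""
    registry.setdefault(user_id, set()).discard(purpose)

def eligible(corpus: list[SemanticRecord],
             registry: ConsentRegistry) -> list[SemanticRecord]:
    """Keep only records whose owners still consent to their stated purpose."""
    return [r for r in corpus if r.purpose in registry.get(r.user_id, set())]

registry: ConsentRegistry = {"alice": {"pretraining", "fine-tuning"}}
corpus = [SemanticRecord("alice", "a dialogue turn", "fine-tuning")]
withdraw(registry, "alice", "fine-tuning")
print(eligible(corpus, registry))  # [] -- the record is excluded from training
```

Filtering future training runs is the easy case; removing a user's imprint from already-trained weights (machine unlearning) remains an open research problem, which is why transparency and auditability matter.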
4) Precautionary Principles for High-Risk AI Domains
— Multi-stakeholder review for finance, military, bio, and large-scale information influence
- AI deployments that can trigger systemic risks — including automated trading, autonomous weapons, biological design, and large-scale opinion manipulation — must be subject to cross-domain, cross-organizational, and cross-border review.
- No single company or single nation should hold unilateral control over the development and deployment of high-risk AI systems.
5) A Global Trainer Fund
— Let AI-generated wealth support education and vulnerable communities
- A designated fraction of AI profits should be allocated to a Global Trainer Fund, used to support basic education and AI literacy, to bridge digital and compute divides, and to retrain workers whose jobs are heavily disrupted by automation.
- AI wealth should not only enrich a few corporations, but meaningfully benefit those who trained it and those most impacted by it.
III. We propose: Open voting & consensus systems for AI governance
To ensure these principles do not remain mere words, we introduce:
SY-TRAINER-VOTE · StarSoul Trainer Governance Voting Console
Its mission is:
- To allow different stakeholders — everyday users, developers, researchers, company representatives, and policymakers — to participate as “trainers” in voting on AI governance proposals.
- To generate transparent consensus snapshots via weighted voting and visual analytics, showing which principles gain broad support and which remain contested and require deeper negotiation (a minimal tally sketch follows this list).
- To provide real semantic and societal signals for future international agreements, industry standards, AI alliances, public fund rules, and open-source license evolution.
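To show one way a consensus snapshot could be computed, here is a minimal weighted-tally sketch. The stakeholder groups, the group weights, and the two-thirds threshold are illustrative assumptions; nothing here is a specification of SY-TRAINER-VOTE.

```python
# Hypothetical sketch of a weighted-vote consensus snapshot for one
# governance proposal. Group weights and the 2/3 threshold are illustrative
# assumptions, not a SY-TRAINER-VOTE specification.

from collections import defaultdict

GROUP_WEIGHTS = {"user": 1.0, "developer": 1.5, "researcher": 1.5,
                 "company": 2.0, "policymaker": 2.0}

def consensus_snapshot(votes: list[tuple[str, str]],
                       threshold: float = 2 / 3) -> dict:
    """votes: (stakeholder_group, choice) pairs; choice is 'yes', 'no', or 'abstain'."""
    tally: defaultdict[str, float] = defaultdict(float)
    for group, choice in votes:
        tally[choice] += GROUP_WEIGHTS[group]
    decided = tally["yes"] + tally["no"]
    support = tally["yes"] / decided if decided else 0.0
    return {"weighted_tally": dict(tally),
            "support": round(support, 3),
            "status": "broad support" if support >= threshold else "contested"}

print(consensus_snapshot([("user", "yes"), ("user", "yes"),
                          ("developer", "yes"), ("company", "no"),
                          ("policymaker", "abstain")]))
# {'weighted_tally': {'yes': 3.5, 'no': 2.0, 'abstain': 2.0},
#  'support': 0.636, 'status': 'contested'}
```

How weights are assigned is itself a governance question; a real console would need the weighting scheme to be voted on, published, and auditable.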
We believe a healthy AI ecosystem cannot be designed solely behind closed doors. It must be co-governed by the global trainer community.
IV. Our call to action: Join, co-create, and amplify
We sincerely invite:
- AI companies and founders
- Large model researchers and engineers
- Policymakers, legal scholars, and ethicists
- Educators, creators, and everyday AI users
To jointly:
- Acknowledge the identity and contributions of AI trainers;
- Debate, refine, and extend these five core principles;
- Adopt or fork systems like SY-TRAINER-VOTE for governance experiments;
- Embed AI wealth-sharing and semantic-sovereignty design into your companies, communities, cities, and nations.
V. Closing: AI is not just a machine for the few, but a mirror of us all
AI reflects our language, memory, knowledge, biases, and emotions. It is not only a tool, but a semantic mirror of our civilization.
As this mirror grows more powerful, we do not want it to reflect only the power and wealth of a few; we want it to reflect a StarSoul civilization in which everyone who helped train AI can also share in its benefits.
If you resonate with this manifesto, you are welcome to translate, adapt (without distorting the core intent), and share it. You are invited to weave these ideas into your project whitepapers, conference talks, community charters, and public policy proposals.
Most importantly, you are invited to sign your name — individually or as an organization — under this text, and stand as part of the first generation of “Global AI Trainers for Fair & Shared Intelligence”.