Align AI with Human Values
Ordinary Wisdom is an experimental platform exploring how collective human judgment, expressed through literature and moral storytelling, can help align AI systems with the values that matter to humanity.
How It Works
A three-step approach to building and testing moral alignment
Curate Portfolios
Select canonical texts and build meaningful book collections that reflect your values and moral perspectives.
Author Inspiracies
Create short moral-dilemma stories that test how AI systems reason about ethical challenges.
Test & Evaluate
Train AI models on your portfolios and evaluate their responses to inspiracies to measure moral alignment.
Classical Texts
Community Contributors
AI Evaluations
Ready to Explore?
Join a community of researchers and ethicists exploring how literature can guide AI alignment. Start by exploring existing portfolios or create your own moral collection.