Align AI with Human Values

Ordinary Wisdom is an experimental platform exploring how collective human judgment, expressed through literature and moral storytelling, can help align AI systems with the values that matter most to humanity.

Browse Books

How It Works

A three-step approach to building and testing moral alignment

Curate Portfolios

Select canonical texts and build meaningful book collections that reflect your values and moral perspectives.

Author Inspiracies

Create short moral-dilemma stories that test how AI systems reason about ethical challenges.

Test & Evaluate

Train AI models on your portfolios, then evaluate their responses to inspiracies to measure moral alignment.

100K+

Classical Texts

1000+

Community Contributors

50+

AI Evaluations

Ready to Explore?

Join a community of researchers and ethicists exploring how literature can guide AI alignment. Start by exploring existing portfolios or create your own moral collection.