Preface

This essay argues that rational people don't have goals, and that rational AIs shouldn't have goals. Human actions are rational not because we direct them at some final 'goals,' but because we align actions to practices[1]: networks of actions, action-dispositions, action-evaluation criteria, and action-resources that structure, clarify, develop, and promote themselves. If we want AIs that...
In this article, the authors present a theoretical framework for AI development aimed at aligning AI systems with human values. By focusing on historical high-value actions, they aim to ensure that the AI system generalizes well and makes decisions that benefit humans over the long term. However, they acknowledge potential challenges in implementing this approach, particularly the inner alignment problem (ensuring that the AI's goals are aligned with our own) and the successor problem (ensuring th...