The Futile Search for a “Good” Civilization
Our inquiry began, as many do, with a simple, almost cliché aphorism overheard in the digital ether: “People are only as good as their word.” It is a statement of plainspoken, Midwestern sensibility, a call for personal integrity and reliability. Yet, when held up to the light, it refracts into a cascade of far more difficult and profound questions. If a person’s goodness is tied to their word, which word? And if we scale the question up from the individual to the collective, has there ever been a people, a group, a civilization, that we could look back upon and consider to have been genuinely good? This question is a beautiful trap, because the answer hinges entirely on how we define that simple, impossibly complex word. The search for a “good” civilization is, ultimately, a futile one, a journey that reveals more about the shifting values of the searcher than about any fixed quality of the past. It is an exercise that forces us to confront the inherent paradox of collective goodness and the often-brutal foundations upon which history’s greatest achievements have been built.
To even begin this search requires establishing a set of criteria, a rubric for judging the soul of a society. Do we use a utilitarian standard, where “good” is defined by the maximization of internal well-being for the greatest number of its citizens? By this measure, one might point to periods of immense stability and prosperity, like the Roman Empire during the Pax Romana or China’s Tang Dynasty. The citizens—or at least, the citizens who mattered—were safe, well-fed, and part of a flourishing cultural and economic system. But this internal peace was almost always purchased with external violence. The goodness within the walls was built on a foundation of brutal conquest, slavery, and the exploitation of those deemed outsiders. The very concept of “citizen” was a gatekept privilege, not a universal right. The comfort of the Roman villa was made possible by the suffering in the mines of Hispania; the poetry of the Tang court was underwritten by the expansionist wars that secured its borders. This is the foundational paradox: the internal flourishing of a society has historically been directly proportional to its ability to project power and extract resources from others.
Perhaps, then, we should use a more modern, rights-based standard, judging past civilizations by their adherence to what we now consider universal human rights. If this is our metric, however, the entire sweep of human history becomes a catalog of catastrophic failure. The Athenian democracy that gave the world philosophy and the very concept of civic participation was a society built on the backs of a massive slave population and one that completely excluded women from public life. The great patrons of the Renaissance, who funded the art that we revere as the pinnacle of human creativity, were often ruthless tyrants and brutal warlords. The architects of the Enlightenment, who championed the rights of man, were often participants in or beneficiaries of the transatlantic slave trade. By this standard, no group has ever been “genuinely good” because our very definition of “good,” codified in documents like the Universal Declaration of Human Rights, is a relatively recent invention, forged in the crucible of the 20th century’s horrors. To judge the past by the moral standards of the present is an exercise in anachronism that inevitably leads to a blanket condemnation of all of human history.
This leads to the core problem: applying a monolithic moral label to a complex, sprawling, and internally contradictory entity like a “civilization” is a near impossibility. Every society is a chaotic and vibrant mix of altruism and exploitation, of brilliant art and brutal warfare, of profound wisdom and deep-seated prejudice. The very things that make a civilization “great” in the historical sense—power, wealth, influence, and enduring cultural legacy—are often achieved through means we would now consider unequivocally evil. The search for a “good” civilization is a search for a historical unicorn, a creature that our ideals demand must exist but which the evidence of history tells us is a myth. The journey through the past reveals not a single example of collective goodness, but countless moments of it, fleeting sparks of empathy, justice, and creativity that existed alongside, and were often inseparable from, the darkness. The question, then, must shift. If we cannot find a society that was good, we must ask what it means for a society to strive for goodness, a question that forces us to turn the lens from the distant past to the difficult present.
The Failure of a Faith-Based Framework
The historical inability to locate a “good” civilization finds a powerful theological echo in the Judeo-Christian tradition that has so profoundly shaped Western thought. The Pauline doctrine, articulated in the Epistle to the Romans, offers a stark and uncompromising diagnosis of the human condition: “None is righteous, no, not one… no one does good, not even one.” This is not a historical observation; it is a statement of metaphysical fact. In this framework, humanity is inherently flawed, trapped in an “inevitable condition of ‘sin’” from which it cannot escape on its own terms. Goodness is not an achievable human state, but a divine attribute, accessible only through the intervention of a deity and an act of faith. This religiously perpetuated philosophy presents a self-defeating moral situation, a divine catch-22 in which one can never be good, or at least not “good enough,” to meet the impossible standard set by an external, mystified authority.
This framework immediately raises a devastating philosophical question: if perfect goodness is, by definition, impossible for humans to achieve, is the pursuit of it even a worthy or rational endeavor? This is the central crisis of a purely faith-based ethical system when viewed from a secular perspective. It can create a culture of perpetual guilt, a constant striving for an unattainable ideal that can lead to a sense of profound inadequacy. Alternatively, it can breed a deep cynicism, a rejection of moral effort altogether, since absolute success is defined as impossible from the start. If all are doomed to fall short, why try at all? This theological problem is not merely an abstract debate; it has real-world consequences, shaping everything from personal psychology to the political ideologies that claim a divine mandate.
The problem is compounded by the nature of the “word” upon which this morality is based. If a person is only as good as their word, which word should they follow? Is it the “uttered word of deity,” as presented in sacred texts? This is the path of faith, but it requires accepting the authority of a speaker whose existence cannot be empirically proven and whose commands are often contradictory or morally troubling by modern standards. Is it the “written word of an ancient civilisation that itself was rife with crime and authoritarianism?” This is the path of tradition, but it requires a constant, difficult, and deeply personal process of interpretation, of trying to separate timeless moral truths from the culturally specific and often brutal vessel in which they arrived. The Bible, for example, contains both the radical empathy of the Sermon on the Mount and passages that appear to condone slavery and genocide. To be “good as this word” is to be locked in a permanent, irresolvable hermeneutic struggle.
This leads to the third, and perhaps most dangerous, option in the modern world: being good as the word of “someone whose words feel pretty and give us warm emotional hugs, but of whom we know nothing about.” This is the path of charisma and sentiment, a morality divorced from any system of accountability—be it divine, traditional, or rational—and tied solely to the emotional appeal of a leader. It is the ethics of the cult of personality, where “goodness” is simply whatever the strongman says it is, and “his word” becomes the only word that matters. This is a terrifyingly common phenomenon in our current political landscape, where emotional resonance and tribal affiliation have, for many, replaced rigorous moral reasoning. The failure of the traditional theological framework to provide a universally compelling and ethically consistent guide for modern life has created a vacuum, a void that is all too easily filled by the seductive certainties of authoritarian figures. The self-defeating nature of a morality based on unattainable perfection has, ironically, paved the way for a morality with no standards at all.
The Limits of a Secular Solution – Consistency, Loyalty, and the Need for a Moral Compass
Having found both the historical and theological search for a definition of “good” to be fraught with paradox and peril, the logical next step is to turn to secular philosophy, to attempt to build a definition of goodness from the ground up, based on observable human behavior. A tempting starting point is the virtue of integrity, which can be defined not by adherence to an external code, but by an internal, demonstrable coherence between professed values and actions. In this framework, the “word” a person is as good as is their own. The initial, most basic component of this integrity is consistency. A good person, it seems, should be a reliable and predictable one. Their actions should follow a consistent pattern, allowing others to trust them.
This definition, however, collapses under the slightest pressure, a failure exposed by two simple but devastating thought experiments. First, consider a man who, every Tuesday night without fail, visits a struggling strip club and generously tips each dancer $20. He is perfectly consistent. He is performing a regular act of generosity. Is he a good person? His goodness is, at best, ambiguous. He is consistently performing a good act, but he is also consistently participating in an industry that many find morally problematic. His consistency is compartmentalized. Now, consider a second man, who is married but has three mistresses. He is a model of consistency, treating each mistress with perfect equity, carefully managing his schedule and resources to maintain his deceitful enterprise. He is not being good; he is being a consistently effective liar. His internal coherence is perfectly aligned with a deeply immoral framework. These examples prove that consistency is a neutral, structural quality. It describes the reliability of a pattern but says nothing about the moral quality of the pattern itself. A tyrant can be consistent in his cruelty. Consistency alone is not enough.

This leads us to a more emotionally resonant and seemingly more robust component of integrity: loyalty. Loyalty feels like a virtue. It is the powerful, emotional glue that binds us to people, to causes, to principles. It gives consistency a moral direction. Let’s revisit our examples. The strip club patron, when viewed through the lens of loyalty, gains moral weight. He is not just randomly generous; he is loyal to a specific group of marginalized people. If he is also loyal to his ailing mother, his character seems even more “good.” The man with three mistresses, by contrast, fails the loyalty test completely. His entire existence is an act of profound disloyalty to his primary commitment. By this measure, loyalty appears to be a much stronger indicator of goodness than mere consistency.
But here, too, the framework shatters when tested against the extremes of human history. The Confederate soldier was loyal, but to the cause of preserving human slavery. Napoleon’s soldiers were loyal, but to a tyrant who plunged Europe into a generation of bloody warfare. The White House today is said to maintain a “loyalty score” for corporate executives, a score that many would argue is a perfect inverse of their actual contribution to the country’s well-being. Loyalty, it turns out, is also a neutral force. It is a powerful amplifier that can magnify goodness when pledged to a virtuous cause, but magnifies destruction when pledged to a corrupt one. This forces a final, devastating question: if one is eternally loyal to a spouse who turns out to be a homicidal monster, is there any good in that loyalty? The answer must be no. A loyalty that makes one complicit in profound harm ceases to be a virtue and becomes an accessory to evil. This reveals a necessary hierarchy. Consistency is the engine, but it is blind. Loyalty is the steering, but it can be pointed in any direction. There must be a higher principle guiding them both. There can be no goodness without a moral compass.
The Compass and the Cage – A Middle Path Between Absolutism and Relativism
The search for that moral compass leads us directly into the central, unresolved conflict of modern ethics: the battle between Universal Morality and Relativism. The Universalist argument, whether rooted in divine command or secular reason, posits that there are absolute, unchanging moral truths that apply to all people, in all places, at all times. The Relativist argument counters that morality is a cultural construct, that “good” and “evil” are simply the socially approved norms of a particular tribe, with no one culture having the right to impose its values on another. The Universalist path, as argued in our own internal debate, often leads to a rigid, intolerant moral imperialism, a “sacrifice of belief and culture for the hope of greater peace” that is both impractical and unjust. The example of the Klingons in Star Trek is a perfect illustration: a universal Federation morality that demands they abandon a core component of their cultural identity—their warrior ethos—is not a path to harmony, but a demand for assimilation.
The Relativist path, however, presents its own profound problems. While it is more flexible, tolerant, and humane in theory, in practice, it can lead to a state of moral paralysis. If all moralities are simply relative cultural constructs, on what grounds can we condemn practices we find abhorrent? How do we critique a society that practices slavery or genocide, if their morality is just “different” from ours? Pure relativism can make the very concept of “moral progress” incoherent. We cannot say that abolishing slavery was a moral improvement; we can only say it was a change in cultural norms. This is a deeply unsatisfying and intellectually bankrupt position.
Perhaps, then, there is a third path, a compromise between the iron fist of Universalism and the potential fog of Relativism. We might call this a Foundational Morality. This would not be a comprehensive, top-down code that dictates culture, but a minimalist set of principles based on the observable, shared realities of the sentient condition. Its principles might include the minimization of involuntary suffering, the logic of reciprocity (the Golden Rule), and an empathy born from our shared vulnerability as mortal beings. This minimalist foundation doesn’t tell the Klingons they can’t value honor; it does tell them that torturing helpless prisoners for sport is wrong. It allows for a vast diversity of cultural “doodles” on top of a shared, solid foundation.
The closest humanity has ever come to codifying such a framework is the Universal Declaration of Human Rights. Forged in the ashes of World War II, it is not a religious text, but a secular, cross-cultural document rooted in the foundational value of human dignity. It is not a perfect solution. It is a declaration, not a legally binding treaty, and its enforcement is a constant struggle. Its articles can be, and are, cynically manipulated by authoritarian regimes. But it is, for all its flaws, an aspirational compass. It provides the shared vocabulary and the internationally recognized framework for a global moral debate. It allows us to move beyond pure relativism and state, with some degree of objective authority, that a society that tortures its citizens is worse than one that does not.
This brings us to the final, crucial piece of the puzzle. A compass, however noble, is not enough. As the history of the 20th century so brutally demonstrated, humanity, left to its own devices, cannot always be trusted. This is the great, pessimistic insight of philosophers like Thomas Hobbes and founders like James Madison: “If men were angels, no government would be necessary.” The aspirational compass of the UDHR must be paired with a cage—a system of enforceable laws and strong institutions designed to restrain our worst impulses. A compass without a cage is a set of beautiful but toothless ideas. A cage without a compass is just an instrument of raw power. The perpetual, agonizing, and necessary struggle of a just society is to build the cage according to the blueprint of the compass. But the tragic paradox remains: the same flawed, untrustworthy humans are the ones who must be trusted to build and maintain the cage, a reality that leads us to the final, and perhaps only, workable definition of goodness.
Goodness as a Verb
After this long and winding journey—from the ruins of ancient civilizations to the halls of the United Nations, from the theological certainty of St. Paul to the moral ambiguity of a man with three mistresses—we are left with a stark and honest conclusion: we still do not know how to definitively define “who is good.” We have learned that goodness cannot be a simple label we apply. It is not mere consistency, nor blind loyalty. It is not a universal code handed down from on high, nor is it a perfect system of rights. Every framework we have constructed has revealed itself to be flawed, fragile, or incomplete.
Perhaps the problem is that we have been trying to define “good” as a noun—a fixed state of being, a title one can achieve, a destination at which one can finally arrive. But everything we have discussed, from the smallest personal act to the grandest political struggle, seems to suggest that maybe “good” is a verb.

Maybe it is not about being good, but about the lifelong, difficult, and often-failing process of doing good. It is the act of doodling in the margins when the main text of your life feels oppressive. It is the act of entering another’s cave to start a difficult, uncomfortable, but necessary conversation. It is the act of planting a garden in a world that feels barren and abstract. It is the act of choosing loyalty to a principle of kindness over loyalty to a person who is cruel. It is the act of trying to build a better, more humane cage, even knowing the work will never be finished and that the blueprints will always be contested.
We cannot define “who is good” on a grand, civilizational scale. But perhaps we can see it, and practice it, in the small, daily choices we make. In this view, goodness is not a state of perfection to be attained after a lifetime of striving. It is not a prize to be won on a Sunday morning for which all of Saturday night must be sacrificed. It is the choice, right now, in this moment, to act with empathy, to seek understanding, and to build something decent in our own small corner of a chaotic world. It is not a destination. It is simply the direction we choose to walk, every single day.