
Moving Beyond the Left-Right Divide: Governance by First Principles and Natural Intelligences

By John Clippinger

Published on February 14, 2024 on LinkedIn

“I have seen Americans making great and sincere sacrifices for the key common good and a hundred times I have noticed that, when needs be, they almost always gave each other faithful support” (Tocqueville 1840, 594-595).

 

Introduction and Overview:

This essay is a provocation and challenge. It makes the claim that there is a computational science of intelligences and that there is no essential difference between natural and artificial intelligences. It further argues that well-established fundamental principles in physics, such as the Free Energy Principle, Quantum Information theory, and Bayesian prediction, offer a cogent, testable, and scale- and domain-free framework for explaining and designing a variety of intelligences. Such frameworks are themselves computational models, and hence eminently testable and practical for biological as well as digital phenomena. The essay postulates that the principles that shape and govern intelligences are proximate if not identical to those that perform the same functions in creating and sustaining life. In other words, to exhibit intelligence is to stay alive, and to stay alive is to be intelligent. Biological processes are manifestations of evolved intelligences, not just at the scale of the brain but at that of the cell as well. Living things are the result of sensing, sampling, and shaping nature, and then making and sharing predictions about both nature and self that ensure survival. The fact that such sensing and predicting can be shared with others through messaging, rituals, images, norms, and languages demonstrates that intelligences exist outside the skull as forms of sharable collective beliefs. When these collective beliefs become codified prescriptions on how to perceive and act in the world, they become ideologies.

 

This essay is the first of a series. It begins by arguing that the findings of computational physics and neuroscience will fundamentally upend current Enlightenment and Newtonian conceptions of governmental, social, and economic institutions. It examines the weakness of American democratic institutions, markets, and policies predicated on Enlightenment and Cartesian principles. It critiques core American Constitutional principles that assert that protection of personal freedoms and rights can be secured through current electoral, legislative, and judicial processes. It also challenges the efficacy of the separation of powers doctrine and the claim that sovereignty and personal freedoms can be achieved through the doctrine of subsidiarity and decentralization. It then provides historical examples to illustrate how, from its inception, the growth and governance of the American Republic was driven by policies of land acquisition and speculation as facilitated by the powers of the Federal Government through the Commerce Clause of the Constitution. This historical assessment posits that the judicial doctrine of textual originalism is inherently unscientific and unsustainable. The essay then argues for reimagining democratic institutions and principles as intelligent, adaptive, and self-correcting political and economic entities designed to embody and augment core democratic values of equity and agency. A key finding of both Quantum Information theory and the “Predictive Brain” thesis as applied to social groups and democratic institutions is that there is no “objective” reality outside the shared observations, actions, and predictions of collections of “sentient agents”. Furthermore, it is posited that the intelligences of such agents, be they people, synthetic agents, or societies, are tied to their capacity to stay alive by adhering to the First Principle of Free Energy Minimization, or, in other terms, Hamilton’s principle of least action in physics. In other words, for both life forms and societies to survive, they need to recognize and abide by this First Principle. One can therefore regard recent advances in “nature-based intelligences” as developing the multiscale and multi-domain intelligences for institutions, corporations, and democracies to recognize and abide by these First Principles.
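For readers who want the formal statement behind these claims, the Free Energy Principle is usually written as a bound on surprise. The following is the standard formulation from the Active Inference literature (not original to this essay): an agent with beliefs q(s) about hidden states s and a generative model p(o, s) of its observations o minimizes

```latex
F \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o,s)\big]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\big[q(s)\,\|\,p(s\mid o)\big]}_{\ge\, 0}
  \;-\; \ln p(o)
```

Because the KL term is non-negative, F upper-bounds the surprise −ln p(o); an agent that minimizes F therefore keeps itself in predictable, viable states, which is the formal sense in which “to stay alive is to be intelligent”.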

 

The second essay is an exploration of the underlying science of Quantum Information Theory, computational biology, Active Inference, and Predictive Brain theory to test the claim that collective beliefs or ideologies can be studied with the same rigor and predictability as biological and genomic processes. If that claim proves true (at this point it is still a conjecture), it is argued that it will become possible to have a principled basis for evaluating and addressing the health or pathologies of different kinds of social, political, and economic ideologies. The implications of this could be especially far reaching in the field of Artificial Intelligence, as it implies a principled and evidence-based computational method for evaluating the suitability of different AI models as ideologies in media, finance, health, and government. In the simplest terms, there are science-based methods for assessing the health or pathology of individual and societal ideologies regardless of their material or informational expression.

 

Needless to say, this perspective is a major break with the postulates of Cartesian dualism: the separation of mind and body, information and matter, and the objective and the subjective. It also challenges the validity of many reflexively held Enlightenment ideals, such as the rational actor, the objective observer, and the hallowed democratic doctrines of free speech, free markets, and the electoral process. Given such an affront to the pillars of established social, economic, and political convention, the essay takes considerable pains to demonstrate why we need to move on from our commitment to such antiquated ideals and rudimentary science to recognize and utilize the science and technologies of our times, especially a nature-based Artificial Intelligence. After developing the scientific rationale from the computational neuroscience of the “Predictive Brain” and the Free Energy Principle, the essay makes the argument for using this science of intelligences as a principled approach for rethinking democratic principles and the redesign of democratic processes.

 

From Enlightenment to Entanglement[1]

For the last six hundred years the success of the West can be attributed to its embrace of science and the principles of Enlightenment Rationality, in particular Cartesian dualism (Damasio, 1995), and its application to industry, government, and military organization. Through the separation of the mind from the body, the objective condition of Nature could be understood and mastered. Nature could be engineered in its own material terms without any need for “meta-physical” or clerical justifications or limitations. Applied to commerce, there was no need for market oversight, as the “invisible hand” of price and supply and demand could replace the dictates of the state and monarch to fairly and efficiently allocate value. Likewise, there was no need for a legacy of “Divine Right” to confer legitimacy, as the “demos”, the People, were invested with the secular authority and capacity to express and realize their own will through representative processes. Hence, freedom of action and expression and individual choice became a constitutional right and the key marker of Western democratic identity. The doctrine of laissez-faire became the means of realizing and enforcing such rights by putatively allowing ideas to flourish, initiative to be rewarded, and fair opportunities to be granted to all on an equal basis. Gone were the hierarchies of privilege and birth when the rights of freedom of thought, expression, and action were protected.

 

This vision of democratic freedom and individual rights, however, had a counter-narrative starting at roughly the same time. This narrative grew from the recognition that the unrestrained exercise of freedoms leads to new forms of accumulated privilege and power, concentrating power and resulting in cruel disparities of wealth and opportunity. Variously deemed Levelers, Socialists, Marxists, and Progressives, advocates for this new narrative fundamentally challenged Mercantilism and Capitalism at their core. They offered a new vision of social well-being and justice that sought to correct the “systemic forces” that concentrated wealth and power through egalitarian measures of governmental oversight and wealth redistribution.

 

However, for “free rights maximalists” granting any such powers to any governmental body violated a core democratic principle, that of individual freedoms and self-determination. To this group, value is created by individual effort and risk-taking, and therefore the unbounded accumulation of value and privilege is a just reward. There is no acknowledgment of systemic or contextual factors that might shape, skew, and tip the balance for success in life. In contrast, the “social justice maximalists” see socio-economic and systemic factors as determinative, where privilege and success are not so much the result of individual merit, but the result of the accumulated contributions of others and random good fortune. Seen through a biological evolutionary lens, the “social justice maximalists” ascribe success to “niche construction and favorable conditions”, whereas the “free choice maximalists” ascribe success to superior genes, individual effort, and sacrifice. This biological perspective is one to which we will later return.

 

In fact, both groups are wrong; each is stuck in the same 17th century mindset. They are simply arguing opposite sides of the same Cartesian coin. The “Right” identifies liberty with freedom of choice and autonomy, and the “Left” with freedom from want and abuse. The “democratic process” can be caricatured as a flipping of the democratic coin in the hope that the different sides will balance one another out without one side shaving the weights of the coin to their benefit. In the more extreme versions of their ideological selves, the “free rights maximalists” espouse a reality-defying minimalist version of government, paradoxically enabled by blind obedience to authoritarian rule. At the other extreme, the “social justice maximalists” construe any hardship, offense, or aggression as a deliberate systemic injustice and moral failure, thereby legitimating their own version of intolerance and imposition of moralistic correctness. The former emphasizes the importance of duty, the latter the importance of rights and entitlements. In both cases, consistent with their Enlightenment view of science, both groups see rules as universal, in the assumption that freedom and justice are neither contingent nor contextual. Moreover, when such absolutist positions are taken, ideological integrity is equated with immutability (e.g., “Originalism”), thereby rendering the prospect of democratic “compromise” or consensus impossible. Under such conditions, the “People’s representatives” cease to be advocates for their constituents and become advocates for those whose particular interests are served by ideological purity. For the Right, this is Wall Street and entrenched oligarchic interests; for the Left, those who grant and benefit from entitlements. Given that the purity or immutability of any ideological position, Left or Right, is itself contrary to the scientific method of openness and evidence-based verification, such dogmas are a guarantee of failure. When democratic processes and institutions become captive to such dynamics, they truly become inimical to the very rationale for their founding.

 

The Ideology of Free Speech and Press

Especially difficult for any democratic society is the treatment or regulation of “speech” or the press, which in current terms also applies to the treatment of content, information, and social media. As offspring of the Enlightenment, democratic societies were founded as secular institutions that politically manifested the mind-body ideology through the separation of church and state, which delegated the control of secular affairs to the state, and matters of the mind to meta-physical, non-state institutions such as the media, arts, education, and the church. This left the Press and media in a particular kind of limbo. As the “Fourth Estate,” the “Press” was essential to the functioning of democratic processes but was beyond the reach of effective oversight. Since newly formed democratic societies were eager to defend their newly asserted freedoms, they distanced themselves from any semblance of “content” censorship and market oversight. As a result, democratic societies intrinsically lack principled policies on the oversight of speech, content, or media. What oversight there is typically treats information, speech, or content as market commodities, regardless of the character of the content or the intent of the creator. Speech and content are presumed to be inanimate, fungible “things” without intent, context, or consequence. While there may be special provisions for “hate speech” or defamation, in most cases such designations apply only if the content or act results in tangible material, psychic, or bodily damage. There is little provision for any notion of individual or collective mental damage short of a material manifestation. The presiding presumption is that intangible things in the realm of speech cannot by themselves have significant negative consequences. The restoration and rebalancing of public and private interests is once again invested in the magic of the “invisible hand” that dutifully and independently establishes the “fair” value of information and content to enable a “free society”. There is no consideration of how the invisible hand would resolve asymmetries in the distribution of information to avoid the matter of “adverse selection”, a well-recognized cause of market failures. But that might be regarded as a form of “systemic bias”, which is seen and dismissed as the purview of the Left.

 

This idealized notion of unconstrained natural forces achieving a “fair” equilibrium has been repeatedly shown not to work for market goods and information “goods”, e.g., content (Piketty, 2013; Zuboff, 2018; Boghosian, 2020; Picard, 2017, 2022). In contrast to the Enlightenment era, our 21st century understanding of information, content, computation, mental adaptation, and prediction has advanced enormously. We have a deep and principled scientific understanding of the physics of information, cognition, and life, which forms the technological and economic underpinnings of our era. Yet we persist in governing and thinking of ourselves in highly simplistic and antiquated terms. Our current technologies and practices are living refutations of Cartesian dualism and the thesis of the “rational”, objective actor and observer. We have the mathematics of information theory for determining the channel capacity for information transmission and the encoding and decoding of digital communications. We have the theory and practice of computation that determines what is computable or knowable. More recently, through a new physics of life and intelligence (Friston, 2017; Clark, 2023; Levin, 2022; Edelman & Tononi, 2017; Seth, 2021), we can evaluate the predictive capacity of brains, artificial and natural, in our meaning making: in how we make sense of, shape, and move through the world. Within the context of these new capacities, we need not view information, content, intent, and even ideologies or beliefs as “hands-off unknowables”, but as a dimension of activity, both in value formation and destruction, that is critical to the well-being of all societies and to the preservation of the planet.
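To make the first of these claims concrete: Shannon’s noisy-channel coding theorem, a standard result cited here only for illustration, gives the exact capacity of any communication channel as

```latex
C \;=\; \max_{p(x)} I(X;Y),
\qquad\text{e.g., for a Gaussian channel}\quad
C \;=\; B \log_2\!\Big(1 + \tfrac{S}{N}\Big)\ \text{bits/s}
```

where I(X;Y) is the mutual information between channel input and output, B is bandwidth, and S/N is the signal-to-noise ratio. The point is that “information” here is not a metaphor but a measurable, engineerable quantity.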

 

Lacking a principled and effective means for distinguishing “free speech” from “hate speech”, disinforming from informing speech, and fact from fiction, democracies are incapable of establishing causal links between types of speech acts and content and their material and cognitive consequences. Yet this linkage between code and action is made instantaneously trillions of times a minute; it is the essence of the information and computer science that drives and sustains the current global society and economy. Yet democratic societies persist in ruling themselves solely through the reductionist mechanisms of the 17th century. In doing so, they fail to comprehend, much less control and direct, the very information and biological technologies that dominate and define our era. Lacking a nuanced and principled understanding of the difference between free and censored speech, hateful and political speech, disinformation and true information, they continue to perpetuate and constitutionally defend, embed, and extend the flawed notion that only through the absence of constraint can freedom of action, personal liberties, and opportunity be achieved.

 

This notion of freedom as “constraint free” action is not just problematic but detrimental to any highly functional, inclusive, and complex society. It is a legacy of Enlightenment thinking trying to wean itself from the theological categories and constraints of the church and monarchy. Just as the separation of mind and body entailed a separation of church and state, so too has the notion of laissez-faire in markets and social norms negated any notion of the legitimacy of collective agency or shared purpose for a society. Again, any notion of agency, teleology, or shared purpose conjures the same specter of clerical oversight and collectivism and immediately sets off warning lights of the “metaphysical” or pseudo-scientific. This is still a contested area today in evolutionary biology, where an older generation of Neo-Darwinists insists on holding onto Cartesian dualism, rationalism, reductionism, and observer independence. A younger generation of computational biologists, cognitive scientists, and Quantum physicists, on the other hand, categorically rejects Cartesian dualism and recognizes the role of agency and information in evolution at both micro and macro scales. It is this transition from an inanimate, indeed dead and reductionist, view of Nature to an animate one that is generative, abundant, and expansive that is being contested today in science, politics, and culture. It marks not only a new era in science and technology, but a new era in how we see and govern ourselves by being grounded in an ever-evolving science of evidence-based beliefs. In making this transition the polarities of contention between “freedom” versus “justice” dissipate through a science-based narrative that combines the material and the immaterial and accommodates a nuanced and rigorous understanding of information, agency, and intelligence. It gives citizens, representatives, and policy makers the legitimacy to make evidence-based assessments as to the efficacy and viability of different types of beliefs and ideologies. It provides a principled and evidence-based means for distinguishing free speech from hate speech and authentic from inauthentic information by modeling the underlying causes that drive and shape individual and collective beliefs.

 

In the second companion essay we will draw upon the fields of informational physics, computational neuroscience, biology, and Principled AI to make the argument for this transition to an animated and generative view of democratic ideologies and forms of organization. At this moment in democratic transition, the very principles that could propel human well-being, ecological synchrony, and more inclusive and resilient societies are precluded even from consideration by the very framing of current constitutional principles of freedom and rights. Rather than consider such “freedoms” as constitutional “primitives”, the argument being made here is that there needs to be a new set of constitutional primitives based upon scientifically derived principles of living societies composed of agentive, predictive, mutualistic, meaning-making entities, human and non-human. Future societies can achieve meaning and prosperity through their ability to amplify agency for all, and to incorporate complexity and diversity in order to have more predictable and adaptive relationships with themselves and the natural world.

 

The ongoing goal of democratic governance should be “complexity stability”, a concept we will explore in the companion essay. There we will show how the dynamics of Bayesian belief formation and “Active Inference”, and related “predictive models” of the physical brain and the social mind, can explain how such ideologies are formed and how and why they degrade and fail to adapt. We will then argue that the capacity to sustain complexity stability under variable conditions of uncertainty is a way to square the circle of “freedom” and “justice” by increasing individual and group agency and order.
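As a foretaste of what “Bayesian belief formation” means operationally, here is a minimal sketch (our illustration, not the companion essay’s model) of two belief-updating agents. The adaptive agent retains residual uncertainty and keeps revising; the dogmatic agent has collapsed its prior to near-certainty, the Bayesian analogue of an immutable ideology, and fails to track a changing world:

```python
import numpy as np

# Minimal sketch (ours, not the companion essay's model): two Bayesian agents
# track a slowly drifting world via noisy observations.

rng = np.random.default_rng(0)

def update(mean, var, obs, obs_var):
    """Standard Gaussian (Kalman-style) Bayesian belief update."""
    k = var / (var + obs_var)            # precision-weighted learning rate
    return mean + k * (obs - mean), (1.0 - k) * var

state = 0.0
adaptive = (0.0, 1.0)      # (belief mean, belief variance)
dogmatic = (0.0, 1e-9)     # near-zero variance: an "immutable" belief

for _ in range(200):
    state += 0.05                        # the world slowly drifts
    obs = state + rng.normal(0.0, 0.5)   # a noisy observation of it
    m, v = update(*adaptive, obs, 0.25)
    adaptive = (m, v + 0.01)             # retain plasticity (process noise)
    dogmatic = update(*dogmatic, obs, 0.25)

print(f"world state     : {state:.2f}")
print(f"adaptive belief : {adaptive[0]:.2f}")  # tracks the drift
print(f"dogmatic belief : {dogmatic[0]:.2f}")  # stuck near its prior
```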

 

 

From Politics to Free Energy Principles for Evidence-Based Governance:

A seemingly recurrent and inevitable point of failure for democracies and republics comes when the question “Who guards the guards?” (Quis custodiet ipsos custodes?) becomes both foremost and unanswerable. Yet this question is preceded by another even more difficult to answer: by what rule do rulers rule? (Principe per quam regulam regunt?) This question about the origin of legitimacy applies to authoritarians as well as to democracies and republics. In non-secular societies there is a kind of infinite regress to some opaque and distant notion of “divine right or mandate” that is granted to a ruler and invests him or her with unassailable legitimacy. In secular democratic societies and republics there is a nod to a notion of a deistic “Primal Mover”, as in the United States creeds “In God We Trust” and “One Nation Under God”. But the actual and defining operating principle of legitimate rule is derived from a similarly opaque but secular construct, “The People”. For a secular democracy, the legitimacy, hence acceptability, of any rule formation and enforcement process requires that rules, in effect laws and statutes, be determined by those being ruled.

 

Another problematic founding principle, taken from the American Declaration of Independence, is the assertion that “all men are created equal”. This hallowed claim rings hollow in that it states “men”, by which it means only white propertied men; women, the poor, and other races were not equal before the law. And those not explicitly mentioned, women, the propertyless, Blacks, and Native Americans, presumably are not equal, as would be the “originalist” interpretation of the current United States Supreme Court majority. Also, this assertion is overtly false: people are not born equal in their physical, mental, and emotional capacities, their talents, or their place in society. One can generously presume that this claim means “equal before the law,” but that says nothing about how equal treatment before the law or the right of participation in democratic processes is to be ensured and enforced. It is an aspirational if not a performative claim without teeth. It does not address the obvious point that different persons, individually and collectively, have different and systemically skewed starting points, and that equality of opportunity, like equality of market access, is a point of competition, contention, even concentration. Yet again, there is the presumption of fair treatment, opportunity, and access secured by the powers of the founding freedoms (assembly, press, the right to bear arms, and the vote), which automatically and without oversight or direction are expected to ensure equality among very unequal peoples. In short, the critical processes for achieving and realizing equality are negated by the very “hands off” principles by which they are asserted.

 

 

Formal and Informal Rule Making

Another flawed foundational democratic principle is the formal process of legislation whereby representatives convene and deliberate to draft rules and statutes to protect and implement the principles of freedom and equality. In practice, however, the legislative process is often as much governed by “animal spirits”, influence peddling, settling scores, and identity signaling as it is by evidence-based deliberation. As the eminent Civil War veteran and Supreme Court Justice Oliver Wendell Holmes noted in his famous treatise on the Common Law (Holmes, 1881), common laws and statutes evolve from norms and customs and have a mind and purpose of their own: “the rule adapts itself to the new reasons which have been found for it and enters into a new career. The old form receives a new content, and in time even this form modifies itself to fit the meaning which it has received”. Holmes seems to imply that the formation and acceptance of formal laws is not a rigorous rational process but a natural extension of an informal and emergent process of social norm formation and invention. In other words, it is not the statutes that govern the customs and norms but the reverse. The Greeks seemed implicitly to recognize this relationship between group custom and the political process. “Polis”, from which “politics” is derived, denoted the city, the operative unit of free assembly, free speech, and debate, where citizens voted on matters critical to their identity and survival as a coherent group. In that sense, in its early foundation a polis was at a scale where a direct form of democracy was feasible. As simple democracies grew and became complex Republics through the means of elected representatives, the actual processes of voting and rulemaking became more variable, arbitrary, and opaque. But what principles should determine the resolution or consolidation of conflicting points of view and goals? What authority is to determine what is “right” and legitimate? Typically, in democracies it is the principle of majority rule with provisions for highly variable protections of minority rights. This places an enormous burden on the process by which rules are made. And the devil is in the details. How this “representative deliberative” process functions or malfunctions can determine the overall fate and legitimacy of democracies.

 

Another hypothesis of current democracies and republics is that the separation of powers and the process of representative voting are a fair, representative, restorative, and self-corrective process. But this presumption is itself grounded in the same failed “laissez-faire” premise of market “self-regulation”; for, as in the case of markets, initial conditions and genuine optionality are shaped by those who can capture and benefit from the process. Since there is no independent “meta-principle” of governance, the operative means of governance, i.e., the “people”, becomes victim to the classic “principal-agent problem”, where the agent, in this case the representative, acts in their own interest and not that of the principal, the people. The dynamics are such that there is every incentive for self-interested agents to capture the very deliberative processes for themselves, and thereby consolidate the putative separation of powers of government to suit their own ends. Indeed, in America the formation of two political parties as sole arbiters of political power would seem to vindicate this view. For many burgeoning democracies and republics, this is an outcome that is almost preordained and inherent in the “initial conditions” of their fragile formation, where there is a weak judiciary, an autonomous military, and a history of patronage and nepotism by elites. Like market capture, the “representative process” can be gamed from the beginning and, without any independent oversight, becomes corrupted, thereby raising the perennial question, “Who guards the guards?” If history is any guide, when that question becomes foremost for the democracy or republic in question, it is often too late.

 

The Core Principle of Effective Democracies: Common Good

This brings us back to the primal question raised earlier: by what rule do rulers rule? Is there a principle of rulemaking that is both legitimate and resistant to capture that can effectively and fairly govern a complex democracy or republic? The historian and blogger on Roman history and revolutions, Mike Duncan (Duncan, 2017), noted a recurrent pattern in the failure of republics in general and the collapse of the Roman Republic in particular. In the course of a healthy republic’s lifetime even major differences between parties were resolved through deference to a tacit code called “mos maiorum”. This unwritten code was in effect a “custom” of placing the preservation of the republic above the interests of individual parties. In effect, this informal code of deference to the “common good” was the underlying “meta-principle” that underscored effective democratic processes. It was not enforced through formal statutes or even force, but through informal oaths of loyalty and deference to the need for shared common beliefs. It represented an unspoken, long held, shared ideology for the preservation and benefit of the Roman people as exemplified and sanctioned in its founding narrative. In the case of the Roman Republic, it consisted in a set of long held norms such as “fides”, “pietas”, “gravitas”, “dignitas”, and “virtus”. It is notable that all such beliefs require limitations on individual freedoms in deference to others, combined with a recognition of the need for self-discipline to attain a higher standard of personal conduct. Such values cannot be legislated but only realized socially by individuals internalizing, honoring, and adhering to such principles. Perhaps the “rule by which rulers rule” is not so much a rule as a founding principle whereby individual self-interest is subordinated to a “common good” as a personal duty and a predicate for sustaining personal and collective well-being. That might be the core principle that generates democratic processes, as it frames the context and constraints within which deliberations, policies, and rules must operate. This is a far cry from the enshrinement of individual rights and entitlements as predicates for constitutional rights. When the “entropy” of self-interests becomes too high, the fabric of democratic union dissolves and the institutions of democracy fail. This condition devolves into what Duncan terms the entropy of revolutionary forces, when the polity fails to fashion a new political consensus for governance. Failing to achieve an accepted functioning consensus, polities devolve into authoritarian rule as the simplest means to achieve order and a confined predictability. The reversion to the predictability of familiar norms and institutions is vividly illustrated by the wave of counter-revolutions of 1848 in Europe, when failed attempts at bottom-up democratic revolution were quickly followed by coups of top-down authoritarian rule.

 

Democratic Processes as Symbiosis: 

A possible key to preserving democracy may not lie in protecting individual freedoms per se, but in something more fundamental, from which the very notion of individual freedoms is derived. Paradoxically, this may lie in having constitutional rights tied to mutual constraints and duties, as well as to individual freedoms. For example, to enjoy the rewards of playing a game, the players must adhere to predictable rules of the game. Importantly, those who set and referee the rules of the game cannot be the same as those who concurrently play the game. Yet in the case of the “game of democracy”, those who are ruled are also those who make and enforce the rules. This creates a foundational conflict of interests and a potential for high entropy/uncertainty through the contesting and renegotiation of rules while playing the game. As is often the case, when a highly incented player loses the “game”, especially when the stakes are very high and the powers and protections of the referee are weak, such players co-opt the referee and change the rules of the game to their benefit. When power and resources are vast and concentrated, they readily erode the independence of the legal system by finding judges, legislators, lobbyists, and jurisdictions that they can control, intimidate, or pay off. In reality, there are very few circumstances where there is a level playing field as an initial condition, and even when there are, the force vectors of special interests are such as to seek out, exploit, and capitalize on special advantages within and outside the rules of the game. Without countermeasures and durable, enforceable means of sufficient and persistent magnitude to counter the usurping forces, the entropy of concentrated self-interests will eventually prevail. Hence, it is not so much the structure of the game per se, but the capacity to independently enforce the rules that becomes determinative of fairness. Again, when the norms of self-moderation and constraint fail, the question “who guards the guards?” raises its ugly head once more.

 

 

The American Conflation of Freedom with Decentralization

This flaw of lacking independent referees with enforcement powers has played out throughout the history of the American Republic as a contest between a centralized government, Federalism, and localized, decentralized State institutions, Jeffersonian Democracy. The early presumption in the drafting of the U.S. Constitution was to accrue powers to a central or Federal government only as a last resort. The Continental Congress was notoriously ineffective and weak, as none of the very different states wanted to relinquish their sovereignty to any central authority. This was especially true of Virginia, which wielded disproportionate power. The Virginian planter James Madison, for instance, though an early ally of the Federalist Alexander Hamilton, over time, like many of his peers, changed his position to support State powers. What forced the concession of powers to a central government was the pending defeat of General Washington’s ragtag army of 3,000 men and a ballooning debt to the French, which prevented the purchase of equipment, munitions, uniforms, boots, and arms by the Continental Army. It took Alexander Hamilton’s exceptional financial and military operational acumen to form an effective Office of the Treasury and a Central Bank, as well as a War Department with a funded standing army. The founding premise of the American Republic was that most matters could and should be resolved at the level of the State, and only in cases of overt failure would powers accrue to the Federal government. The principle of subsidiarity was presumed, as local institutions were considered best suited to resolving societal problems and thereby best for preserving freedoms.

 

In its early inception there was little sense of the United States Federal government having a positive, coherent identity and prospective purpose. Born in opposition to the tyranny of the British Crown, there was a widely shared abhorrence of any accumulation of Federal powers over the autonomy of each State. The presumption was that each of the States, which had very different settlement patterns, economies, religions, customs, and class structures, were in effect their own sovereign nations (Woodard, 2011). They each had very different notions of basic democratic institutions. The New England states were settled by Puritans, who had successfully overthrown and executed a monarch, Charles I, and were opposed to hierarchy, the aristocracy, and slavery. The Southern states, on the other hand, derived their democratic institutions from Greek and Roman slave- and class-based democracies. The Southern gentry modeled themselves on Roman Patricians and the Cavalier classes of Great Britain. Many had fled Cromwell and Puritan rule to the South, while others were planters who had migrated from the Caribbean sugar plantations after the Haitian slave revolution of 1791-1804. These late arrivals were especially vigilant about supporting militias to prevent slave rebellion in America and were advocates for slavery in the new territories.

 

Into this mix of radically different cultures and economies was born a definition of freedom that is singularly American. It was the one common principle that united these disparate States under a common banner. Freedom was defined as the absence of governmental control and was cultivated, in fact and in myth, in the expanding American Frontier. The smaller, i.e., more decentralized, the government, the freer the people. This played well for landless immigrants escaping the wars in Europe, and it equated the frontier and Western expansion with freedom. Hence an expanded role of government was broadly accepted by immigrants and gentry alike for the acquisition and distribution of land rights. Through the Northwest Indian War of 1785-1795, the Louisiana Purchase of 1803, the Transcontinental (Adams-Onís) Treaty of 1819, and the Mexican American War of 1846, the young Federal government essentially bought, negotiated, invaded, and annexed virtually all bordering territories from the native populations, the Spanish, the French, and the Mexicans. By doing so it built a huge land inventory and treasury by which it could finance its road, canal, and postal infrastructures, its military, and its social policies of land grant payments and the forced resettlement of indigenous peoples. In this capacity a militarily powerful Federal government was a boon to all the States and, in a tangible economic sense, a bastion of “freedom” through its annexation and purchase of new territories. Not surprisingly, in wielding such massive economic powers in the issuing of land grants to officers, settlers, and politicians, the Federal government became an object of focus if not capture by legacy economic interests. This was a practice that preceded the American Revolution, as Virginia planters including George Washington, the Jefferson family, George Mason, Patrick Henry, and many of the powerful “First Families of Virginia” formed the Ohio Company in 1748 to speculate on virtually the entire land mass of Ohio. Benjamin Franklin was also involved in a variety of related land speculation projects for the Northwest Territories and worked for nearly a decade in London to secure rights to up to 20 million acres, an effort that eventually failed and resulted in his return to Philadelphia. Illustrative of the speculative magnitude of such syndicates was the 1788 Phelps and Gorham Purchase of 6,000,000 acres from the Seneca and Iroquois nations in Western New York, which unraveled a year later because of currency fluctuations and a failure to meet debt obligations. The Ohio and New Territories became a virtually limitless piggy bank whereby the Federal government could finance military and infrastructure projects, incentivize settlement, and pay Revolutionary War veterans. According to Wikipedia, “The amount of land varied with rank, ranging from 100 acres for privates and noncommissioned officers to 500 acres for colonels and 1,100 acres for major generals. After the Revolution, the federal government reserved several million acres in Ohio for the settlement of Veterans who earned a bounty land warrant”. In short, land speculation and Federal land, transportation infrastructure, finance, and military policies went hand in hand to advance the interests of insider economic and political elites in the North and the South. Federal policies only became contentious when North-South differences over economic practices, small farm holdings versus plantation economics, and slaveholding became foundational issues about how the new territories and States were to be governed. Were they to be slaveholding States or not? That became the issue that shattered the North-South consensus, destroyed the Whig Party, which had reflected that joint consensus, and eventually led to the Civil War.

 

The “liberation” of the Republic of Texas from Mexico by the Texas Revolution of 1835 and its annexation as a State in 1845 exemplify the interplay of Federal land policies with the dubious claims of State self-sovereignty and local freedoms. In this case the will of the people of Texas was transformed through the rapid influx of Southern plantation settlers and speculators who wanted to preserve Texas as a “free” slave state in “democratic” opposition to the “central” Mexican government, which wanted to retain its control as well as preserve the abolition of slavery in Texas. By contemporary standards, the “liberation” of Texas was a step backward into a slave-based plantation economy, favoring Anglo-American residents, and a diminishment of women’s rights to hold property and act for themselves. The “liberation” of Texas also perpetuated a plantation economic model of concentrated ownership in the cattle and cotton industries, with the ironic invention of the minimum-wage itinerant “Cowboy” as an emblem of personal freedom and vigilante enforcer of social justice. The fact that in 1845 Texas voted to become the 28th State reinforces the argument that neither decentralization versus centralization nor the pursuit of Jeffersonian freedoms was the driving force of the liberation of Texas. Rather, the underlying driving force was the preservation and expansion of the economic interests of Southern planters and Northern land speculators. In effect, it was an example of the land policy of the U.S. Federal government of that era succeeding without direct involvement but through surrogates such as military volunteers, speculators, local and national politicians, and material suppliers.

 

Where the truce between North and South interests eventually broke down was when the Federal government had to negotiate and eventually regulate the exchange of free and enslaved peoples, resources, and commerce between the territories and set policies for new States. This occurred in the Missouri Compromise of 1820, where a fixed boundary outlawed slavery in the territory north of latitude 36°30′ north (except for Missouri), and was then modified by the Kansas-Nebraska Act of 1854, when it was determined that the residents of Kansas and Nebraska could vote for themselves as to their slaveholding status. That legislation has been treated historically as an instance of the United States government protecting local freedoms through decentralization and the right of self-determination. But that is more of a cover story than the actual effect. The bill was also intended to support the Transcontinental Railroad and had the attendant effect of drawing settlers into what were once protected Indian Territories, which in turn had the effect of forcing Indian tribes, through military force, into smaller and less desirable reservations. The financing of the Transcontinental Railroad was another instance of Federal government policy providing massive land grants, financial subsidies, and a blind eye to the practices of private companies. Some of these companies made it their practice to overcharge for construction and to influence the easements and the direction of the railway lines to suit insider land speculators. The net beneficiaries were not the Jeffersonian small farmers and homesteaders, but the plantation economies of cotton and cattle. Speculation in land became a precursor to speculation in transportation infrastructure, waterways, and railroads, which in turn became precursors to speculation in and control over telegraph, telephone, electricity, roads and highways, dams, and aqueducts. The Federal government became the vehicle for large-scale industrial agriculture and for settlement and infrastructure patterns that favored a concentrated set of economic and political interests. This was achieved under the pretext of creating a public good in the interest of every small homesteader and “pioneer”. It has been heralded as the hallmark of a democracy of equals, but in effect it was driven by the interests of Southern plantation gentility and Eastern industrial trusts. Their powers would spread from land speculation and profiteering to all forms of infrastructure investments, from waterways to transportation to the electric grid.

 

This is not to say that Federal infrastructure investments were unnecessary or without benefit, nor that centralization is inherently bad and an infringement on freedoms and autonomy. Rather, the failure to “referee and enforce” the interfaces and boundaries of multi-scale and cross-domain value exchanges (between different kinds of State cultures, economies, and identities) inevitably results in the capture and concentration of value. The mere act of asserting subsidiarity or decentralization does not resolve the governance issue or preserve freedoms. When there are increasing rates, scales, and types of exchange across boundaries (trade, resettlement, communications), new interdependencies and conflicts are created. From the mobility of different demographics (migrants, workers, and speculators), new boundary conditions, rules, and laws need to be devised. They cannot be devised solely by the different contending groups, but jointly, in a manner that represents and respects individual sovereignty while also enabling new forms of joint sovereignty and agency.

 

True to the argument that complexity and growth come through the interaction of different agents or parties, the likely vehicle for the growth of the Federal government and attendant special interest groups came through the Commerce Clause of the U.S. Constitution. This clause states that the United States shall have power “to regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes”. The definition of commerce can be quite broad in interpretation, and therefore offers an expansion of powers as defined in the Necessary and Proper Clause, also known as the Elastic Clause, in Article I, Section 8 of the U.S. Constitution: “The Congress shall have Power… To make all Laws which shall be necessary and proper for carrying into Execution the foregoing Powers”. Neither clause states how such “Powers” should be applied, nor by what principles and for what purpose. Hence, they are inherently and inevitably subject to capture by interested parties, since there are no effective means to resolve the “Quis custodiet ipsos custodes?” dilemma. Such conflicts could be adjudicated, but that is a lengthy and costly process that is easily gamed by parties with the wealth and means. Even if a conflict were adjudicated at the level of the U.S. Supreme Court, there are competing constitutional doctrines, textual originalism and living constitutionalism, which offer competing, if not antithetical, opinions, thereby resulting in fluctuating, hence gameable, zero-sum resolutions.

 

 

Intelligent Generative Interfaces Between Contending Beliefs

From the perspective being advocated here, that of scientific principles drawn from computational physics and biology, the “Powers” of the Elastic Clause can be equated with the Free Energy Principle as used in the Active Inference definition of intelligence: models of increasing predictive power, and hence reduced uncertainty. The reduction of uncertainty is a principle for providing order, which, in essence, is the goal of government and law.
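To give “reduced uncertainty” an operational reading (our illustration, not a claim drawn from constitutional law): uncertainty can be measured as the Shannon entropy of a predictive distribution, and a better predictive model is precisely one that lowers it.

```python
import numpy as np

# Uncertainty as Shannon entropy (in bits) of a predictive distribution.
def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # ignore zero-probability outcomes
    return -np.sum(p * np.log2(p))

uninformed = [0.25, 0.25, 0.25, 0.25]  # no predictive model: maximal uncertainty
informed = [0.85, 0.05, 0.05, 0.05]    # a better model concentrates prediction

print(entropy(uninformed))  # 2.00 bits
print(entropy(informed))    # ~0.85 bits: "order" as reduced uncertainty
```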

 

Free Energy is generated at the interface of two disparate populations of particles, and in the case of living agents it follows the Principle of Free Energy Minimization to generate a synthesis, a new “homeostasis” that combines the interests of disparate agents into a new set of shared beliefs or models. This synthesis of disparate beliefs increases each agent’s joint likelihood of increased value, complexity, and order, predicated on the best available evidence. In current practice, however, the contrary is the case. The doctrine of “laissez-faire” is the presumed starting point for dispute resolution, through the aegis of the “free market” or a magistrate. Given that the current legal framework is a zero-sum dispute-resolution mechanism, the process favors those with the resources to pursue a market or judicial resolution. There is no ready decentralized provision to generate new rules to accommodate new conditions and agents. Rather than enforcing a principle of subsidiarity for dispute resolution, the opposite occurs: resolution is forced up a judicial hierarchy of trials and appeals and may eventually require new legislation. Given the cost and time required to pursue judicial resolution, much less to draft, write, and pass new legislation, which can take decades, matters often become “resolved” through a winner-take-all default.
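A minimal sketch of what such a synthesis can look like formally (our illustration, using Gaussian beliefs; the essay itself does not specify a model): when two agents fuse their beliefs by precision weighting, the Bayes-optimal combination for Gaussians, the combined belief is always less uncertain than either belief alone.

```python
# Two agents hold Gaussian beliefs about a shared quantity. Precision-weighted
# fusion yields a synthesis whose variance is lower than either agent's,
# one way to read "free energy minimization at the interface" of agents.

def fuse(m1, v1, m2, v2):
    """Precision-weighted fusion of two Gaussian beliefs (mean, variance)."""
    p1, p2 = 1.0 / v1, 1.0 / v2     # precisions
    v = 1.0 / (p1 + p2)             # fused variance (always smaller)
    m = v * (p1 * m1 + p2 * m2)     # fused mean
    return m, v

# Agent A and agent B disagree, and both are somewhat uncertain.
m, v = fuse(m1=2.0, v1=1.0, m2=6.0, v2=4.0)
print(f"fused belief: mean={m:.2f}, variance={v:.2f}")
# mean=2.80, variance=0.80: less uncertain than either 1.0 or 4.0
```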

 

Complex, living, sentient beings do not operate in this way, and if they did, they would cease to survive. The other important contribution of computational biology and physics to resolving the decentralization-centralization paradox is that the same Free Energy principles function at all scales and across all domains. This means that rather than having to legislate or adjudicate the interfaces between differing parties, such interactions can be resolved in the same manner that physics, biology, and, in effect, nature resolve them: by applying the same principles of alignment and coherence at multiple nested, interacting scales. What this suggests is that there is a principled, science-based, computationally scalable means for resolving matters involving the exchange of value across boundaries between multi-scale intentional agents, from individuals to “foreign Nations, and among the several States, and with the Indian Tribes.” Rather than framing the issue of centralization versus decentralization as a zero-sum game, where tradeoffs between different kinds of freedom (freedom of action versus freedom through protection) are determined by political power dynamics, the differences can be resolved in a principled manner, drawing upon how multi-scale living entities self-assemble to form more complex entities, e.g., a Federalist “whole”, while preserving the unique independence of Jeffersonian selves.

 

 

Not So Fair Laissez-Faire

The doctrines of laissez-faire and decentralization are popular but flawed understandings of how complex living systems self-assemble to form new kinds of organizations for mutual benefit. To achieve a homeostasis of mutual value, each agent cannot blindly optimize its own self-interest, but rather must identify and signal shared predictions and synchronously adjust its own actions to jointly achieve a higher mutual benefit. In other words, for an inclusive democratic process to be successful, it must follow the same symbiotic principles that complex biological or living processes follow to achieve new levels of complexity and adaptation. For instance, effective democratic processes entail multiple parties having an accurate understanding of the other parties’ beliefs and being able to communicate accurately to other parties their expected individual and joint outcomes. The intent of any form of democratic rule making is to reconcile the need to recognize the interests of competing parties, and to protect such interests, while creating something new that trades off the sacrifice of some particular interest for an overall net joint benefit. It is the “art of collaboration” or “joint discovery” that is essential to democratic and biological processes. The goal in both cases is to take things that are very different and, through the mutual exchange of information and resources, make them more alike, with like interests, so that they can work together. This is a set of scientifically grounded principles that apply to all living things at all scales and in all domains. Hence, a principled means for achieving complexity stability as an overriding goal for democratic governance.

 

Democratic Self-Assembly as a Natural Principle

The process for achieving complexity and stability is fundamental to the formation of all complex, symbiotic, multicellular forms of life; it entails a form of complementary segmentation to reduce entropy or “Free Energy” whereby the joint benefits outweigh the benefits of the individual agents acting on their own. A computational understanding of the process by which organisms replicate themselves and achieve increased complexity was first developed by John von Neumann in 1948. His work anticipated the discovery of the DNA helix by Crick and Watson in 1953 and provided a computational basis for understanding the biological processes of life and reproduction. Recognizing the limitations of the infinite self-reference of Turing Machines as a basis for self-replicators, he developed his universal constructor theory of reproduction, whereby reproduction is achieved through the copying and then the writing of instructions to a universal constructor (von Neumann & Burks, 1966), which in biological terms is the ribosome, and in computational terms a Universal Turing machine. In this manner he established the equivalence between biological processes as an energetic material process and as an informational computational process. In effect, he brought energy and informational thermodynamics together, anticipating the work of the physicist Leonard Susskind on the Second Law of Quantum Complexity (Brown & Susskind, 2018). Another prescient insight of von Neumann was that cells achieve multicellular complexity not by random mutation alone but by the necessary signaling, sharing, and incorporation of other cells’ genetic material, and that this occurs not just through sexual selection but through multi-cellular recombination. In other words, the infinite self-reference problem of Turing Machines as a basis for intelligent life was first circumvented through a universal constructor architecture that allowed for the incorporation of novel but stable information through recombination. This was the early predecessor of the Free Energy Principle and Active Inference as a computational basis for complex self-reproduction and biological intelligence. As the Columbia University biochemist Hashim M. Al-Hashimi (Al-Hashimi, 2023) noted, the notion of computational molecules was first proposed by the Nobel Laureate Jacques Monod in the 1960s, who called them “biomolecules”; but it was not until the work of the computer scientist Leonard Adleman that the full computational capabilities of molecular computation were demonstrated in the field of DNA computing. Al-Hashimi (2023) makes the following plea to fellow biochemists and systems biologists to recognize the computational basis of life:
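A toy sketch of von Neumann’s key move (our illustration in Python, not his cellular-automaton formalism): the replicator splits into a passive description, the tape, and a constructor that both builds from the description and copies it verbatim into the offspring, which is how DNA and the ribosome divide the labor.

```python
# Toy sketch (ours) of von Neumann's resolution of the self-reference problem:
# the replicator carries a passive DESCRIPTION (tape) plus a constructor that
# (a) builds a machine from the description and (b) blindly copies the
# description itself into the offspring. No description-of-a-description
# regress is ever needed.

def construct(description: str) -> dict:
    """Universal constructor: interpret the description to build a machine,
    then attach a verbatim, uninterpreted copy of the description."""
    machine = {"parts": description.split("+")}   # (a) description as program
    machine["tape"] = description                 # (b) description as data
    return machine

tape = "sensor+metabolism+constructor"
parent = construct(tape)
child = construct(parent["tape"])        # reproduction: copy, then re-build
print(child["parts"] == parent["parts"]) # True: heredity without regress
```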

 

By reducing biology into a computational form, entire fields within computer science could be leveraged to systematize biology. For example, the field of computational complexity could be used to examine tradeoffs between time, memory, and energy in various biochemical reactions, and to classify and explore the diversity of problems solved by biological computation. Conversely, computer scientists might be able to mine the natural universe of biological computation and harness billions of years of evolution to discover new computational models or algorithms and maybe even answer questions in mathematics and logic

 

The question of how self-assembly works at the multi-cellular or multi-agent level to achieve stable complexity is not just a story of how more complex biological forms are created, but also of how more complex forms of intelligence are created to predictively anticipate and shape the world around them. This is not an individual process but one of group selection, as the Nobel Laureate Gerald Edelman argued in his book Neural Darwinism: The Theory of Neuronal Group Selection (1987). The discoverer of eukaryotic symbiotic evolution, symbiogenesis, Lynn Margulis, demonstrated the importance of cooperation in achieving stable complexity. At the socio-economic level, the importance of symbiotic mutualism is demonstrated by the long history of American agricultural cooperatives, credit unions, and insurance risk pools as exemplars of the symbiotic process, where individuals with very different preferences and actions come together under a common preference, the reduction of risk or the pooling of resources, to organize themselves under new rules of self-regulation. In effect, each individual agent accepts a degree of limited freedom to achieve a group benefit that is also an improved individual benefit.
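The risk-pool case can be made quantitative with a back-of-envelope simulation (ours, with made-up parameters): pooling leaves each member’s expected loss unchanged but shrinks their uncertainty by roughly the square root of the pool size, which is exactly the individual benefit purchased by accepting the group’s rules.

```python
import numpy as np

# Each member faces an identical, independent annual loss. By pooling and
# sharing losses equally, each member's standard deviation shrinks by about
# 1/sqrt(N), while the expected loss stays the same: a group benefit that is
# also an individual benefit.

rng = np.random.default_rng(1)
N, years = 1000, 10_000
losses = rng.exponential(scale=100.0, size=(years, N))  # individual losses

solo = losses[:, 0]              # one member bearing their own risk
pooled = losses.mean(axis=1)     # each member pays the pool average

print(f"expected loss  solo={solo.mean():6.1f}  pooled={pooled.mean():6.1f}")
print(f"std deviation  solo={solo.std():6.1f}  pooled={pooled.std():6.1f}")
# Same mean (~100), but pooled std is about 100/sqrt(1000), i.e. ~3.2
```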

 

Here is how this symbiotic process of nested self-assembly and acquired complexity is described at the cellular level by the computational neuroscientist Karl Friston and the computational biologist Michael Levin (Friston & Levin, 2020):

 

One of the central problems of biology is the origin and control of shape [1–3]. How do cells cooperate to build highly complex three-dimensional structures during embryogenesis? How are self-limiting growth and remodelling harnessed for the regeneration of limbs, eyes, and brains in animals such as salamanders?

 

This is an interesting set-up because the external states of one cell are the active states of other cells. This means we are effectively simulating a multi-system or multi-agent problem that, as we will see, necessarily entails some form of communication or generalized synchrony among cells.

 

By framing self-assembly as active inference, we can now functionally talk about a cell’s beliefs and actions in the following way: each cell possesses a generative model that encodes (genetic) beliefs about the chemotactic signals it should sense and express if it occupied a particular place in the target form.

 

Finally, future work should explore whether and how the self-organization and autopoietic dynamics that we demonstrated in the morphogenetic domain generalize to larger-scale phenomena such as brains and societies, as has been variously proposed [45,68].
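
What such self-assembly looks like computationally can be suggested with a drastically simplified, one-dimensional toy of our own devising; it is not Friston and Levin's actual simulation, only the bare schema of the quoted passages: each “cell” carries a genetically encoded expectation of the chemotactic signal it should sense at its target place, and acts (moves) to cancel its prediction error, so that one cell's external states are other cells' active states.

    # Toy 1-D morphogenesis as prediction-error minimization
    # (our own simplification, not Friston & Levin's model).
    import random

    random.seed(1)
    TARGET_FORM = [0.0, 1.0, 2.0, 3.0]   # "genetic" beliefs about position
    positions = [random.uniform(0.0, 3.0) for _ in TARGET_FORM]
    LEARNING_RATE = 0.2

    def neighbour_mean(values, i):
        nb = [values[j] for j in (i - 1, i + 1) if 0 <= j < len(values)]
        return sum(nb) / len(nb)

    def prediction_error(i):
        # sensed "chemotactic" signal (offset from neighbours) minus the
        # signal the generative model expects at the cell's target place
        sensed = positions[i] - neighbour_mean(positions, i)
        expected = TARGET_FORM[i] - neighbour_mean(TARGET_FORM, i)
        return sensed - expected

    for _ in range(400):
        errors = [prediction_error(i) for i in range(len(positions))]
        # action: each cell moves so as to cancel its own error
        positions = [p - LEARNING_RATE * e for p, e in zip(positions, errors)]

    print([round(p - positions[0], 2) for p in positions])
    # -> approx. [0.0, 1.0, 2.0, 3.0]: the target form is recovered, up to
    #    an overall translation, from purely local signalling.

The point of the toy is only that the pattern is not imposed from above: it emerges from each agent minimizing its own prediction error against a shared generative model.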

 

 

Scale-Free and Domain-Free Self-Assembly

This is a well-researched process in information physics and computational biology, and it shows that at each level of self-assembly a new kind of boundary condition and intentionality emerges (Levin & Friston, 2022; Rosen, 1991). It has also been demonstrated in information physics (Fields, 2022) and in the statistical physics of neural networks (Hopfield, 1982) that each level or scale of self-assembly acts by the same principles as the levels both above and below it. One level does not reduce to the other; rather, “what is true below is true above” is a universal principle of nested organization. In short, there is a single principle that applies at all scales and across different domains of content and application (Friston, Levin & Fields, 2020). In technical terms (which we can only reference here), each agent is enclosed by a Markov blanket (Pearl, 2018; Friston, 2017), which makes it a “blanketed,” relatively independent, self-assembling whole that achieves relative autonomy in its niche by reliably predicting the state of that niche, its own internal states, and the effects of its actions on that niche (a minimal numerical illustration of the underlying free energy calculation follows below). Borrowing from and generalizing Jacques Monod's “biomolecules,” we call such entities “bioforms”: self-assembling agents that individually and jointly achieve “metabolic closure,” acquiring and generating sufficient energy, individually and collectively, to produce a surplus for their joint survival. The capacity to achieve metabolic closure is an empirical, “hard reality” constraint on the freedoms of different agents and on the viability of their policies and beliefs. In classical economic terms, such hard constraints are treated as a necessary instrument of “creative destruction” (Schumpeter, 1942), presumed to follow from “blind,” invisible processes beyond human comprehension, and hence beyond deliberate action. The point to be made here is that the processes of growth and transformation need not be blind and beyond comprehension, and hence beyond effective intervention. By making the democratic processes for resolving differences among agents visible and principled, informed and effective policies and processes become possible. Given that ideologies and policies can be modeled, tested, and predicted, creative change does not necessarily entail “destruction”: organizations, firms, and sectors can adapt without having to be destroyed. This is indeed what separates living, sentient beings from blind mechanisms acting as though their futures were determined not by planning and prediction but by a presumption of “random variation.”
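
For readers who want the referenced calculation spelled out: the discrete variational free energy of a belief q(s) under a generative model p(o, s) is F = Σ_s q(s)[ln q(s) − ln p(o, s)], an upper bound on the surprise −ln p(o) (Parr, Pezzulo & Friston, 2022). A minimal numerical sketch, with toy probabilities of our own invention:

    # Discrete variational free energy, F = E_q[ln q(s) - ln p(o, s)],
    # with toy numbers of our own invention.
    import math

    prior = {"viable": 0.7, "nonviable": 0.3}        # p(s)
    likelihood = {"viable": 0.9, "nonviable": 0.2}   # p(o = "nutrients" | s)

    def free_energy(q):
        return sum(q[s] * (math.log(q[s]) - math.log(likelihood[s] * prior[s]))
                   for s in q)

    evidence = sum(likelihood[s] * prior[s] for s in prior)   # p(o)
    posterior = {s: likelihood[s] * prior[s] / evidence for s in prior}

    print(free_energy({"viable": 0.5, "nonviable": 0.5}))  # loose upper bound
    print(free_energy(posterior))   # equals the surprise, -ln p(o)
    print(-math.log(evidence))      # the surprise itself
    # F >= -ln p(o), with equality when q is the exact posterior: an agent
    # that minimizes F both infers its world and keeps itself in
    # unsurprising (viable) states.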

 

Conclusion:

This is the first in a series of essays. This initial essay begins with the provocation that current democratic institutions, principles, and practices are inherently flawed and limited by their Enlightenment origins and adherence to Cartesian Dualism. These limitations make many of the guiding principles of American Democracy incapable of addressing the fundamental challenges of an exponentially growing, technology-driven, and complex digital and computational society. The essay challenges such hallowed doctrines as separation of powers, textual originalism, subsidiarity and decentralization, free markets, and free speech. As modern societies become more digital, interdependent, virtual, and networked, it is argued that new forms of distributed self-governance become possible, grounded in scientific principles and informed by the new informational physics and biology of natural intelligences. Applying such sciences makes it possible to deploy autonomous forms of adaptive intelligence that embody the same principles of prediction-error minimization the brain uses, enabling growth and stability at multiple scales and across many different areas of application. This initial essay lays the groundwork for a major “upgrade” of current democratic institutions. The second essay examines the application of such approaches in greater depth to create healthy and stable ideologies and institutions grounded in the First Principles of science-based governance. It also explores current crisis areas in economic and social equity and climate change where such an approach might be applicable. This approach is proposed as a new and living framework for upgrading democratic institutions.

 

 


[1] This heading owes its inspiration to Danny Hillis's essay, “The Enlightenment is Dead, Long Live the Entanglement,” Journal of Design and Science, 2016.

SELECTIVE BIBLIOGRAPHY

1.     W. Ross Ashby, Design for a Brain: The Origin of Adaptive Behavior, Springer, 1952

2.     W. R. Ashby, An Introduction to Cybernetics, Chapman & Hall, 1956

3.     Bruce Boghosian, “The Inescapable Casino”, Scientific American, November 2020

4.     Andreja Bubic, D. Yves von Cramon, and Ricarda I. Schubotz, Prediction, Cognition and the Brain, Frontiers in Human Neuroscience, 4:25, 2010

5.     Andy Clark, The Experience Machine: How Our Minds Predict and Shape Reality, Pantheon, 2023

6.     Damásio, António (1994), Descartes' Error: Emotion, Reason, and the Human Brain, Putnam. ISBN 0-399-13894-3

7.     Bert de Vries and Karl Friston, A Factor Graph Description of Deep Temporal Active Inference, Frontiers in Computational Neuroscience, October 2017

8.     Adam R. Brown & Leonard Susskind, The Second Law of Quantum Complexity, arXiv:1701.01107, 2018

9.     Mike Duncan, The Storm before the Storm: The Beginning of the End of the Roman Republic, Public Affairs, 2017

10.  Edelman, Gerald, and Giulio Tononi, A Universe of Consciousness: How Matter Becomes Imagination, Harper, 2001

11.  Edelman, Gerald, Neural Darwinism: The Theory of Neuronal Group Selection, Basic Books, 1987

12.  Fields, C. and Levin, M. (2023) Regulative development as a model for origin of life and artificial life studies. BioSystems 229: 104927 (doi: 10.1016/j.biosystems.2023.104927)

13.  Fields, C., Fabrocini, F., Friston, K., Glazebrook, J. F., Hazan, H., Levin, M. and Marcianò, A. (2023) Control flow in active inference systems, Part II: Tensor networks as general models of control flow. IEEE Transactions on Molecular, Biological, and Multi-Scale Communications 9: 246-256 (doi: 10.1109/TMBMC.2023.3272158)

14.  Fields, C., Fabrocini, F., Friston, K., Glazebrook, J. F., Hazan, H., Levin, M. and Marcianò, A. (2023) Control flow in active inference systems, Part I: Classical and quantum formulations of active inference. IEEE Transactions on Molecular, Biological, and Multi-Scale Communications 9: 235-245 (doi: 10.1109/TMBMC.2023.3272150)

15.  Friston, K. J. (2019). A free energy principle for a particular physics. arXiv. https://doi.org/10.48550/arXiv.1906.10184

16.  Friston, K., Parr, T., Heins, C., Constant, A., Friedman, D., Isomura, T., Fields, C., Verbelen, T., Ramstead, M., Clippinger, J. and Frith, C. D. (2023) Federated inference and belief sharing. Neuroscience & Biobehavioral Reviews, in press (doi:10.1016/j.neubiorev.2023.105500)

17.  Heins, C., Klein, B., Demekas, D., Aguilera, M., & Buckley, C. L. (2023). Spin glass systems as collective active inference. In C. L. Buckley, D. Cialfi, P. Lanillos, M. Ramstead, N. Sajid, H. Shimazaki, & T. Verbelen (Eds.), Active Inference (pp. 75–98). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-28719-0_6

18.  Holmes, Oliver Wendell, The Common Law, Little, Brown, Boston, 1881

19.  Hopfield, J. J. (1982). “Neural networks and physical systems with emergent collective computational abilities”, Proceedings of the National Academy of Sciences, 79(8): 2554–2558

20.  Levin, M. (2023), Bioelectric networks: the cognitive glue enabling evolutionary scaling from physiology to mind, Animal Cognition, doi:10.1007/s10071-023-01780-3

21.  Levin, M. (2022), Generalizing frameworks for sentience beyond natural species, Animal Sentience, 7(32): 15

22.  Hashim M. Al-Hashimi, Turing, von Neumann, and the computational architecture of biological machines, PNAS Perspective, March 10, 2023

23.  Henry Lin, Max Tegmark, and David Rolnick, Why does deep and cheap learning work so well?, arXiv:1608.08225

24.  Michael Kirchhoff, Thomas Parr, Ensor Palacios, Karl Friston, and Julian Kiverstein, The Markov blankets of life: autonomy, active inference and the free energy principle, Journal of The Royal Society Interface, 2018

25.  Tanya Luhrmann, How God Becomes Real: Kindling the Presence of Invisible Others, Princeton University Press, 2020

26.  Parr, T., Pezzulo, G., and Friston, K., Active Inference: The Free Energy Principle in Mind, Brain, and Behavior, MIT Press, 2022

27.  Pearl, Judea, The Book of Why: The New Science of Cause and Effect, Basic Books, 2018

28.  Pickard, Victor, Journalism's Market Failure Is a Crisis for Democracy, Harvard Business Review, March 2020

29.  Piketty, Thomas, Capital in the Twenty-First Century, Belknap Press, Cambridge, MA, 2014

30.  Ramstead, M. J. D., Badcock, P. B., & Friston, K. J. (2018). Answering Schrödinger's question: A free-energy formulation. Physics of Life Reviews, 24, 1–16. https://doi.org/10.1016/j.plrev.2017.09.001

31.  Rouleau, N., and Levin, M. (2023), The Multiple Realizability of Sentience in Living Systems and Beyond, eNeuro, 10(11), doi:10.1523/eneuro.0375-23.2023

32.  Rosen, Robert, Life Itself, Columbia University Press, 1991

33.  Seth, Anil, Being You: A New Science of Consciousness, Dutton, 2021

34.  Ashish Vaswani, Noam Shazeer, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin, Attention Is All You Need, arXiv:1706.03762

35.  von Neumann, John; Burks, Arthur W. (1966), Theory of Self-Reproducing Automata, University of Illinois Press

36.  Winthrop, John, The Journal of John Winthrop, 1630–1649, Harvard University Press, 1996

37.  Woodward, Colin, American Nations: A History of the Eleven Rival Regional Cultures of North America, Viking, 2011

38.  Zuboff, Shoshana (2019), The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, London: Profile Books. ISBN 978-1-78125-685-5