In the future, the networks with the greatest utility will pose the greatest social threat to individual sovereignty.

Presently, governments can generally be considered high-utility networks. As a citizen of a well-functioning democracy, I am granted the right to life, liberty, and the pursuit of happiness, so long as I honor my implicit contract with the government not to break its laws. If I break the law, I will be pursued by the government, and I can expect my freedom to be restricted to a degree roughly commensurate with the severity of my crime. Besides oxygen, water, food, and shelter, freedom is my most important resource as a human being. To be imprisoned is to be without freedom, and thus deprived of my ability to flourish. The government has a monopoly on my freedom, and freedom is the highest-utility resource given to me by any network I am part of.

Consider a lesser example. On eBay, I am able to buy and sell used items based on my reputation score. If I make my living on eBay alone, a hit to my reputation score means a loss of my livelihood. Thus, eBay holds a monopoly on my livelihood, and by extension on my freedom.

Network Value → Reputation Value → Digital Slavery

Obviously, natural resources like oxygen and water are necessary for survival. The scary prospect is an agent holding a monopoly over the natural resources needed for survival.

Here, I define an agent as an instrument of intentionality. For example, nature is not an agent because it does not have intent. In contrast, a cat is an agent because it has the intent to avoid pain. Similarly, we can say that objects act intentionally when they are manipulated to behave according to the interests of intent-bearing agents. So a computer program has intent, because it acts according to the interests of an intentional agent. I recognize that there are issues with this line of reasoning, because we might encounter a computer program which produces computer programs, ad infinitum. At what point is the behavior of its descendants wholly independent of the intent of the agent who created the original computer program? For the sake of the argument, let’s consider inherited intent to apply only within two generations: the computer program which behaves according to a human agent has intent, but a computer program created by another computer program does not.

Premises