Blockchain technology is famously described as a “trust machine”: a set of code and procedures that makes transactions reliable without the need to trust any particular person. In other words, the blockchain replaces trust in persons with trust in computational systems.
But there is a little problem. What makes the machine trustworthy? The members of the blockchain network must agree on the validity of the transactions to be registered in the network. Once they agree, the transactions are recorded in a “block” of data that is cryptographically chained to previous blocks, in a manner that makes them extremely difficult (though not strictly impossible) to change.
But why will the members spend their time and energy to verify and validate each and every transaction? What incentives do they have to do so? This is the crux of blockchain technology.
When “Satoshi Nakamoto” developed the first fully functional blockchain network, he (or they) adopted a competitive mechanism called “Proof of Work”. The objective of this mechanism is to bring the members to agree on the validity of the blocks of data recording the performed transactions. The mechanism itself was known long before, but it became popular only after Nakamoto.
The mechanism, briefly, works as follows. Members of the network (called “miners”) compete to solve a mathematical puzzle; the winner earns the right to create the next block (or blocks) of data. Other members can then easily verify that the winner actually solved the puzzle and validated the transactions.
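The asymmetry described above — solving is expensive, checking is cheap — is the heart of Proof of Work. A minimal sketch in Python (a simplified stand-in for Bitcoin's actual target arithmetic; the function names and the "leading zero hex digits" difficulty rule are illustrative assumptions):

```python
import hashlib

def mine(block_data: str, difficulty: int, max_nonce: int = 1_000_000):
    """Search for a nonce whose SHA-256 digest of (data + nonce)
    starts with `difficulty` zero hex digits. This search is the
    costly 'work'; on average it takes 16**difficulty attempts."""
    target = "0" * difficulty
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
    return None, None  # no solution found within the search budget

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    """Verification is cheap: a single hash, however costly mining was."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Any member can re-run `verify` in microseconds, while finding the nonce took the winner many attempts; that is why the network can cheaply agree that the work was done.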
The competitive mechanism behind the Proof of Work seems an innocent way to incentivize network members, but it has dramatic consequences.
Because “miners”, i.e., those who do the necessary work to verify and validate the transactions, are competing with each other, each will deploy more computing power to solve the puzzle and beat the competition. With more computing power, the competition becomes tougher, as the time needed to solve the puzzle becomes shorter. The network then automatically increases the difficulty of the puzzle to keep the solution time at about 10 minutes.
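That automatic adjustment can be sketched as a simple proportional rule: if blocks arrive faster than the target interval, scale the difficulty up in proportion (Bitcoin actually retargets once every 2016 blocks; this per-adjustment form and the function name are illustrative assumptions):

```python
def retarget(difficulty: float, observed_seconds: float,
             target_seconds: float = 600.0) -> float:
    """Scale difficulty so that the expected time to solve the puzzle
    moves back toward the target (600 s = the ~10-minute block time)."""
    return difficulty * (target_seconds / observed_seconds)
```

If miners' combined power doubles and blocks start arriving in five minutes, the rule doubles the difficulty, restoring the ten-minute pace — and locking in the higher energy expenditure.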
As miners invest more in computing power, the winner will not accept selling their prized coins for less than the cost of that computation. And with more computing power, the system is designed to make the puzzle more difficult, which requires still more computing power, which makes the coins even more expensive, and so on. The system creates a positive feedback loop that leads to massive consumption of energy and to an exponential rise in the price of the coins. It is a perfect bubble structure. The positive feedback loop is, by design, not sustainable.
Given the high stakes and the competitive design, miners have strong incentives to attack the network and create alternative chains of data blocks. The commercial incentive structure, therefore, undermines trust in the “trust machine.”
Can “trust” be a commodity?
For a commodity to be traded in the market, it must be exclusive: the buyer must be the owner of the commodity, or else there is no point in paying the price. Buying a commodity, like a smartphone, transfers the ownership of the commodity from the seller to the buyer, and the buyer will have sole control over it.
Can we buy and sell trust? If trust were a commodity, the seller would have to cease owning it once it is transferred to the buyer. But this means that the seller is less trustworthy after the transaction! And if the seller is as trustworthy after the sale as before it, then trust cannot be a commodity.
The fact is, trust cannot be a commercial commodity. The reason we trust a person is that they are willing to forgo profit opportunities to preserve their reputation. Trust and reputation require commitment, and commitment is not consistent with opportunistic behavior driven by profit-seeking incentives, as economist Robert H. Frank explains.
This does not mean that profit incentives will eliminate trust altogether. Trust is essential for almost all human interactions, regardless of the tools used. However, profit incentives make trust less stable and more vulnerable to competitive forces.
My colleagues at IRTI were aware of these challenges and therefore developed a new consensus algorithm. They call it “Proof of Use”. The algorithm is built on reciprocity, and thus adopts a cooperative rather than a competitive incentive structure.
In broad terms, the method is based on reciprocity: members validate the transactions of other members in return for others validating theirs. In this manner, only members who actually use the network for their own transactions earn the right to validate the transactions of others; hence the name, Proof of Use. In this mechanism, users and miners are the same group, sharing the objective of verifying their transactions. Accordingly, there is no positive feedback loop to make the system unsustainable and unfriendly to the environment.
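To make the reciprocity idea concrete, here is a deliberately simplified toy — not the patented Proof of Use algorithm, whose parameters are not described here — in which a member earns one validation credit per transaction it submits and spends one credit each time it validates someone else's (all names and the one-for-one credit rule are assumptions for illustration):

```python
from collections import defaultdict

class ReciprocityLedger:
    """Toy reciprocity rule: using the network (submitting transactions)
    is what entitles a member to validate others' transactions."""

    def __init__(self):
        self.credits = defaultdict(int)  # member -> validation credits

    def submit(self, member: str) -> None:
        """Submitting a transaction earns one validation credit."""
        self.credits[member] += 1

    def can_validate(self, member: str) -> bool:
        return self.credits[member] > 0

    def validate(self, validator: str, owner: str) -> bool:
        """Spend a credit to validate another member's transaction.
        Members cannot validate their own transactions."""
        if validator == owner or not self.can_validate(validator):
            return False
        self.credits[validator] -= 1
        return True
```

Because validation rights come only from use, there is nothing to stockpile or race for: extra computing power buys no advantage, which is what removes the energy-consuming feedback loop.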
But the devil is in the details! My colleagues worked hard to ensure that the detailed parameters serve the ultimate objective, and they succeeded in obtaining a fintech patent from the Intellectual Property Office of Singapore. Singapore is ranked second in the world, and first in Asia, for intellectual property protection in the World Economic Forum’s Global Competitiveness Report 2019.
The Proof of Use algorithm cannot be the solution to all the challenges posed by the Proof of Work or similar algorithms. But it is a step, we hope, in the right direction.