Playing the Chain Game
When constructing a new blockchain, there is a slew of considerations that must be made when architecting your new system, ones which many experts who've watched the industry evolve and flourish over the years would likely refer to as "best practices".
These considerations can be either technical or social in nature. Both types play a pivotal role in an asset's emergent governance, establishing community norms around how miners, developers, and speculators entrench themselves in your burgeoning community.
For a public blockchain to proliferate successfully upon launch, you want to maximize a few key parameters while simultaneously minimizing your initial use cases to reduce chain bloat — unless you are tackling a broader, generalized smart-contracting use case such as Ethereum's. For the sake of this essay, we will focus on UTXO-based chains, as opposed to account-based ones.
Scalability, interoperability, incentives, governance, privacy, and energy efficiency are the main parameters you'll want to hone in on as you seek "chain-market fit" and close the socio-technical gap necessary to sustain your asset for the foreseeable future.
Since blockchain-based systems are only now reaching a stage of maturity that supports large-scale applications via off-chain scaling, optimizing for the most efficient use case that is semantically important enough to serve as a base-layer primitive is key to gaining traction.
That said, we will be using the upcoming Handshake public blockchain as an example of a lean public chain asset seeking to launch in a fair and intelligent manner, while maintaining safety, liveness, fault tolerance, and rich data availability/integrity (more info on Handshake can be found here, and here — if you’re unfamiliar).
Fig. 1: The Handshake Technical Stack showing the many functional layers of Handshake’s technology.
Distribution, Incentives, and Mining
To better understand launching a chain and discussing the various mechanisms at play, it’s important to first be aware of the chain’s tech stack upon launch.
We’ll reference Fig.1 on Handshake’s tech stack to explain why certain aspects of your architecture are important.
We discussed the application/contract layer in more depth in our initial post last year, so for now we'll focus on the layers beneath it.
Distribution (Incentive Layer)
The task of creating an entrenched developer base that bootstraps your chain with the technical know-how necessary to maintain your protocol as you garner adoption is no small feat.
That said, a long-tail power distribution of holders is the most advantageous over time, as it ensures an extensive list of would-be contributors who are properly incentivized to contribute back to your codebase.
Handshake seeks to achieve this by utilizing a blinded claim scheme called "GooSig" (Fig. 1, under "Incentive Layer"), which was developed for the airdrop distribution by Dan Boneh and Riad S. Wahby at the Stanford Center for Blockchain Research.
Effectively, a Merkle tree is "stuffed" with the SSH/PGP keys of GitHub, Hacker News, Keybase, and PGP Web of Trust users (full details of this process are shared and updated here), which provides a rich array of technical contributors, speculators, and people familiar with securing a private key pair as some of the initial holders of the asset. An individual whose keys are embedded in this tree can then utilize the HNS airdrop tool to initiate a signed handshake with those keys and make their claim — provided the keys were imported from one of the services listed above.
This claim process is private, and does not reveal the identity of the individual who signed the transaction once it is sent on-chain.
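To make the tree mechanics concrete, here is a minimal Python sketch of the kind of Merkle inclusion proof such a claim relies on. This is an illustration only: the real airdrop tree uses different hashing and serialization plus the GooSig blinding described above, and all key material below is made up.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Stand-in hash; the real tree uses a different scheme."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes up to a single root."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect sibling hashes from one claimant's leaf up to the root."""
    proof, level = [], leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1                    # sibling differs only in lowest bit
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a leaf and its sibling path."""
    acc = leaf
    for sibling, is_left in proof:
        acc = h(sibling + acc) if is_left else h(acc + sibling)
    return acc == root

# "Stuff" the tree with (hypothetical) key fingerprints of eligible users.
keys = [h(b"ssh-key-%d" % i) for i in range(8)]
root = merkle_root(keys)
proof = merkle_proof(keys, 3)                  # claimant 3 proves inclusion
```

The claimant only ships the short sibling path, not the whole tree, which is what keeps the on-chain claim compact even with hundreds of thousands of committed keys.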
Fig. 2: Types of curves utilized for the airdrop process. Source: Github
At this time, roughly 205,000+ individuals will have their keys committed to the tree, enabling each of them to claim 4,662 HNS; this number will likely change slightly as the tree is finalized prior to Handshake's launch.
Fig. 3: A pie chart of the initial Handshake distribution breakdown for the strategically targeted airdrop from Handshake.org. As you can see, the majority of the assets supplied (70% of genesis total) will be controlled by individuals in the airdrop.
Alongside that, there are allotments for the initial contributors to the project, present domain-name holders, and existing functionaries of the internet who deserve to be aptly incentivized; we cover more of this in our previous post. We won't go into deep detail on the claims process and the technical mechanisms involved, but you can get a proper deep dive from HSD contributor Matthew Zipkin.
Fig. 4: Most chains typically launch with a Bell Curve Distribution, where the majority of the assets are naturally constricted between an upper and lower bound. With Handshake, initially, the Contributors/Investors/ and Open Source orgs possessing HNS from investing/receiving a grant will fall to the left of the bell curve. Upon launch, general speculators and miners on the right-hand side of the curve will be able to acquire the asset, too. Over time, this creates a mature power law distribution, where the economic players have many connected nodes and edges, which dictates most of the chain's daily activity & growth. This is one of the key advantages of PoW: creating a more fair distribution, over time.
This method creates a wide array of vested stakeholders in the network, who can then hopefully serve as stewards/custodians of its future progress — with some added assurance that their privacy was also considered ahead of time.
This mimics Satoshi's initial launch of Bitcoin to the mailing list of cryptographers, who would later go on to support the effort of launching Bitcoin, except with a larger pool of potential contributors, given how the ecosystem has grown and matured in the decade since Bitcoin's inception.
Modern times call for modern solutions, and targeting as many technical contributors as possible is your goal if you want to increase your propensity for adoption.
You only get to generate a genesis block once. Although the concept of "pre-mines" has been negatively skewed in the past, doing one intelligently across a wide audience, rather than as a means to enrich a small few, not only makes it more fair, but increases the likelihood that the chain, once launched, will see continued speculative value capture and growth going forward, especially in the competitive multi-asset landscape we exist in today.
This airdrop is then further aided in improving its long-tail distribution by PoW mining. With a large portion of potential Handshake contributors and backers capitalized, further distribution happens in the form of mining the asset directly, and that has some technical and social nuance to be considered as well.
Handshake PoW (Consensus Layer)
Creating a price floor for mining enthusiasts and early adopters is an important part of entrenchment. Early investors in Handshake paid a fixed price for their assets, raising $10.2 million for roughly 102 million units sold in total, which created a price floor of ~$0.10 in mid-2018 (reference: Handshake.org).
This means we can imply a speculative price floor of at least that level when estimating potential profitability as HNS is added to liquid order books and individuals begin to speculate on the asset, or buy and sell names from the chain. (There were no liquid markets/exchanges for BTC until roughly two years after Bitcoin's launch, something to consider when launching a chain in the present day.)
From there, miners would only need to mine and initially sell Handshake above that spot price to be in profit when selling on exchanges, after accounting for their electricity costs and overhead (which will be cheaper early on, with a lower difficulty and a seamless miner UX for on-ramping more long-term miners to the network).
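The arithmetic here is simple enough to sketch. Only the $10.2M raise and ~102M units come from the text above; the rig's power cost and daily HNS yield below are hypothetical numbers for illustration.

```python
# Implied price floor from the fixed-price sale described above.
raised_usd = 10_200_000           # ~$10.2M raised from early investors
units_sold = 102_000_000          # ~102M HNS sold in total
price_floor = raised_usd / units_sold          # ~$0.10 per HNS

def breakeven_price(daily_power_cost_usd, daily_hns_mined, overhead_usd=0.0):
    """Spot price at which a miner covers electricity plus overhead."""
    return (daily_power_cost_usd + overhead_usd) / daily_hns_mined

# Hypothetical early rig: $3/day in power, mining 50 HNS/day at low difficulty.
be = breakeven_price(3.0, 50.0)                # $0.06 per HNS
# Mining is profitable whenever the realized sale price exceeds break-even;
# here even selling at the implied floor would leave margin.
profitable_at_floor = price_floor > be
```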
Full Block Header
Fig 5: The current Handshake PoW algorithm diagrammed to show how the Preheader/Sharehash (blockhash) computation works.
Handshake has committed to a unique yet hardened block header, and has merged a final version of its PoW consisting of BLAKE2b-512, SHA3-256, and BLAKE2b-256.
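A rough sketch of what chaining those three primitives can look like, assuming simple concatenation of intermediate digests; HSD's actual preheader serialization, padding, and ordering differ, so treat this as illustrative only.

```python
import hashlib

def blake2b_512(data: bytes) -> bytes:
    return hashlib.blake2b(data, digest_size=64).digest()

def sha3_256(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def blake2b_256(data: bytes) -> bytes:
    return hashlib.blake2b(data, digest_size=32).digest()

def chained_pow(header: bytes) -> bytes:
    """Illustrative chaining: two independent passes over the header,
    folded into one 256-bit result by a third primitive."""
    left = blake2b_512(header)        # wide first pass
    right = sha3_256(header)          # independent second primitive
    return blake2b_256(left + right)  # final 256-bit hash compared to target

digest = chained_pow(b"example-preheader")
```

An ASIC now has to implement all three cores and the glue between them, which is exactly the lead-time argument made below.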
Chaining multiple algorithms to comprise your PoW is a novel strategy, and a necessary one in a world where most algorithms have already been used, or off-the-shelf equipment may already exist in the wild to produce an ASIC. Preparing accordingly, and giving your asset lead time to establish a healthy community of early mining enthusiasts prior to the introduction of an ASIC, is critical for capturing speculators into your network effect.
What you’re aiming for is an algorithm that is efficient with modern GPUs, and is not easy to implement out of the box for an ASIC manufacturer (giving you lead time against their economies of scale so your chain may mature).
ASICs are a means to an end for PoW assets that seek to legitimize and process transactions efficiently at scale, but you only need them once your chain has reached a certain level of adoption (one heuristic for measuring chain maturity is the total daily USD volume of activity on the chain approaching the daily USD cost of securing the asset). This can take quite some time depending on the type of chain you're deploying and its target use case.
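That maturity heuristic reduces to a one-line ratio. The volumes below are invented purely for illustration; the text above only names the heuristic, not any figures.

```python
def maturity_ratio(daily_tx_volume_usd: float, daily_security_cost_usd: float) -> float:
    """Heuristic from the text: a chain matures as daily USD transacted
    approaches the daily USD cost paid to secure it."""
    return daily_tx_volume_usd / daily_security_cost_usd

# Hypothetical young chain: $50k moved daily vs. $200k/day of security spend.
young = maturity_ratio(50_000, 200_000)
# Hypothetical maturing chain: economic volume rivals security spend.
maturing = maturity_ratio(950_000, 1_000_000)
```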
The Handshake PoW also introduces some other novel technical tricks to help ensure fairness for pool-based mining, alongside further hardening its currently proposed Proof of Work.
The subHash, as indicated in the above graphic, covers all of the data malleable by the miner (timestamp, treeRoot, etc.). It is calculated early to discourage changes and to reduce or eliminate malleability. Making changes to the subHash is computationally expensive (pre-image resistance), and will cost you two cycles (one for the subHash, and one for the commitHash) if you wish to compute it again, aiding in preventing malicious miner activity.
subHash == all malleable data (1 hash cycle)
commitHash == subHash + maskHash (1 hash cycle)
The maskHash allows blocks to be mined in a pooled environment while preventing the known "block-withholding attack" present in Bitcoin mining. The use of a mask makes it so miners do not need to know the full header serialization prior to mining a header; thus, theoretically, they cannot effectively withhold their mined block from the pool, as doing so becomes too computationally expensive or functionally impossible.
maskHash == prevBlock + mask (must be > target but < share target) (1 hash cycle)
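Putting the three relations together, here is a hedged Python sketch of the commitment flow. The field names, serialization, and single stand-in hash are illustrative assumptions, not HSD's actual header format, which uses the chained PoW primitives described earlier.

```python
import hashlib

def H(data: bytes) -> bytes:
    """Stand-in 256-bit hash for all three commitment steps."""
    return hashlib.blake2b(data, digest_size=32).digest()

def sub_hash(tree_root: bytes, timestamp: int, extra: bytes) -> bytes:
    # Cycle 1: hash everything the miner could malleate.
    return H(tree_root + timestamp.to_bytes(8, "little") + extra)

def mask_hash(prev_block: bytes, mask: bytes) -> bytes:
    # The pool derives the mask; miners never learn it in full.
    return H(prev_block + mask)

def commit_hash(sub: bytes, maskh: bytes) -> bytes:
    # Cycle 2: commit the malleable data under the mask.
    return H(sub + maskh)

prev = H(b"previous-block")
sub = sub_hash(H(b"urkel-root"), 1_580_000_000, b"miner-extra")
commit = commit_hash(sub, mask_hash(prev, b"pool-secret-mask"))
# Touching any malleable field invalidates the subHash, which in turn
# forces the commitHash to be recomputed: two full cycles per change.
```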
The preheader contains the pertinent information required from the previous block in order to compute your hash, including a nonce and the timestamp. It is calculated first to prevent any hashed-block caching, since changing the subHash/commitHash will cost two cycles of hashing (as we learned above).
These techniques ensure the preheader does not contain any miner-malleable data aside from the timestamp and nonce, which could otherwise be used for mining optimizations (pre-computation) or attacks on the chain's integrity, and they provide an incentive to keep the chain's timestamp up to date.
The preheader also serves as an optimization for SPV resolvers (i.e. the hnsd light client) that only need to access the treeRoot to verify names in the Urkel tree, or just to validate the previous PoW.
Fig. 6: Code example of the Handshake blockhash (referred to as a shareHash) taken from HSD.
Miner UX & On-Boarding
With the PoW covered: you can't have an appropriately decentralized and efficient public chain launch if you don't have an optimized, open-source mining client for your asset. When Bitcoin launched, there were no mining pools, PoW ASICs, or mature economies of scale that could quickly come online to centralize interests early on. Satoshi made it easy with a one-click miner that utilized your CPU, and many were able to mine a healthy sum of the asset before it ballooned in value, further entrenching them in the ecosystem. Many of those early adopters are still active to this day, well capitalized thanks to their early start.
But we live in a post-ASIC world, one ripe with enthusiast miners running high-end GPU setups in their homes (some for mining, others conveniently due to PC gaming). We want to make it as easy for them to be early adopters as Satoshi once did with his/her CPU miner. That looks a little different in the present day, as the requirements have changed and a whole industry has risen around PoW mining.
The community has organically arrived at such a solution, in the form of software called "HandyMiner". HandyMiner CLI and HandyMiner GUI are two simple ways you, as a miner, can quickly get started mining on the network:
HandyMiner CLI Benefits:
Fig. 7: A rich dashboard of information presented via the HandyMiner CLI dashboard.
Traditionally there is a steep learning curve to setting up miners: downloading initial dependencies, syncing a chain, and initializing your GPUs to target the PoW. No matter how much you plan, you'll never have a one-size-fits-all solution, as no two mining rigs are the same.
The team behind HandyMiner has worked to alleviate this pain for smaller/hobbyist miners who might otherwise be wary of switching mining algorithms on the fly for a new asset. The HandyMiner CLI includes an initial configuration tool to quickly get you mining to a stratum, submitting your payout address, and hashing away.
A rich ASCII-based dashboard will greet you, showing your current hashrates, GPU voltages, temperatures, and the overall performance across your rig. Most technical miners will opt for the CLI on their larger setups, so a simple interface that is quick to get going will increase your share of potential miners.
HandyMiner GUI Benefits:
Fig. 8: Main screen of the HandyMiner GUI
However, not everyone is as familiar with the command line, so you’ll need to abstract things just a bit further. For Handshake, that’s the HandyMiner GUI.
The GUI removes the command line entirely. With a one-click interface, individuals with decent hardware at home can get their miner and bundled HSD node started in a few clicks.
With the HandyMiner GUI, the end-user gets all the relevant information to their miner, with an easy to use interface to start contributing hashrate to the network. Keeping this barrier to entry low for new potential miners in every way possible helps to further improve your long-tail distribution of entrenched players.
It is no small task to merge the weird and often befuddling intricacies of Nvidia and AMD while building an optimized shader/kernel. Hard as it is, building an efficient, open-source mining client from the start helps GPU miners stay competitive as your chain matures to the point of ASICs.
Fig. 9: The GUI allows you to quickly pick and deploy which of your graphics cards will be used for hashing. Most gamers utilize discrete GPUs, which means extra hashpower for your chain if you can get them onboard easily enough.
P2P Networking & Data Availability (Network and Data Layers)
The final layers are the networking and data layers. When you're building a chain and aiming for optimal latency and throughput, you need to think about how much information must be transmitted for your chain to function. If you're maximizing data availability, you'll want to keep your proof sizes small, while also being mindful of your history independence from past stale states, to lessen the hardware needed to run a node and parse information from the chain.
Handshake realizes this in part by implementing a purpose-built base-2 trie, optimized for performance, simplicity, storage, and proof size, called the Urkel tree. This trie was built as an improvement over solutions backed by general key-value stores such as LevelDB, which increase the time and computation needed for lookups as they write to disk, and over Ethereum's Merkle Patricia Trie. The trie's internal nodes remain constant in size, which makes updating and storing information more performant when committing transactions and ongoing name auctions into the covenant state.
Fig. 10: Example of an Urkel Tree, from Boyma Fahnbulleh’s presentation at SBC in 2019. The graphic illustrates a base-2 trie, and interactions as new information is added to the trie alongside null nodes which aid in proving non-inclusion for lookups.
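To illustrate the base-2 idea, here is a simplified in-memory sketch in Python: a bitwise trie keyed on the hash of a name, where a missing child on the lookup path is the "null node" that proves non-inclusion. The real Urkel tree is an authenticated, disk-backed structure with hashing and compaction at every node; none of that is modeled here.

```python
import hashlib

class Node:
    __slots__ = ("left", "right", "value")   # fixed-shape internal nodes
    def __init__(self):
        self.left = self.right = self.value = None

def _bits(key: bytes):
    """Yield the bits of a hash, most significant first."""
    for byte in key:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def insert(root: Node, name: str, value: str) -> None:
    """Walk the bit path of H(name), creating nodes, and store the value."""
    node = root
    for bit in _bits(hashlib.sha256(name.encode()).digest()):
        if bit == 0:
            node.left = node.left or Node()
            node = node.left
        else:
            node.right = node.right or Node()
            node = node.right
    node.value = value

def lookup(root: Node, name: str):
    """Follow the same bit path; hitting a null child proves absence."""
    node = root
    for bit in _bits(hashlib.sha256(name.encode()).digest()):
        node = node.left if bit == 0 else node.right
        if node is None:
            return None                       # null node: provable non-inclusion
    return node.value

root = Node()
insert(root, "example", "record-data")
```

Because each internal node has a fixed shape, updates touch only the nodes along one hash path, which is the property the text credits for Urkel's performance and compact proofs.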
Lookups in your database can be prohibitively computationally expensive, and can create a ceiling on your chain's transactional throughput if not considered prior to launch.
These properties allow the Handshake chain to maintain smaller proof sizes, and enable light clients that verify only the information necessary, keeping your device's resource requirements low while still giving you sufficient security to interact with the network.
Your PoW algorithm helps optimize for safety in a distributed system, and incentivizes individuals to run full nodes, which can flag a transaction as incorrect or invalid under the network's consensus. Combining full nodes, with access to the full rich state of the Urkel tree, and light clients, which grow the network's intended economic use, further maximizes the liveness of the chain, ensuring malicious actors cannot delay the acceptance of messages sent or received on-chain.
You can learn more about Urkel Trees in this talk from HSD contributor Boyma Fahnbulleh (full transcript can be found on the Stanford reading list, here) and you can view the code in full via the HSD github, here.
Naively encoded elliptic-curve public keys exhibit a fairly obvious pattern when traffic is analyzed in real time. Elligator is a means of turning public keys on an elliptic curve into strings of random-looking bytes. Handshake combines Elligator with its encrypted P2P network, Brontide (based on the same Noise protocol used by Lightning's LND), which encrypts the public key before it is sent across the network in the initial handshake.
Without encryption, an ISP or malicious state actor could analyze your traffic and use that as a means to censor your node. With Elligator turning those public keys into random-looking bytes, you make it more difficult for analysts or bad actors to personally identify you on the network.
This optimization ensures that any party seeking to eavesdrop on your network traffic must employ computationally expensive calculations for every packet that crosses the network, thwarting potential censorship.
Learning As We Go
While not an exhaustive collection of best practices, the methods above help ensure a decentralized and diverse stream of individual miners, speculators, and pools, allowing the community to capture value early as the ecosystem organically expands or contracts.
Remember, ASICs are to be expected; they arise in any successful chain that has found product-market fit (or miner-market fit, depending on how you look at it). They are a sign of a healthy, growing network of participants becoming entrenched in some economy of scale. This is why "ASIC-proofing" is mostly a fallacy: your chain can only be ASIC resistant until it reaches a sustainable threshold in its mining community.
More discussion of Proof-of-Work and ASIC resistance can be found in a recent post from Coinbase, if you are curious and wish to learn more about the concept.
Though many new chains continue to launch, it's important to treat each event as a learning opportunity. There is no guaranteed way to launch a chain, but there are quite a few boxes you can check beforehand to increase its likelihood of standing the test of time and of the public adversaries outside your testing environment.
Special thanks to Alex Smith, Sean Kilgariff, Darren Mills, and Matthew Zipkin for their assistance with certain key parts of this analysis.
If you enjoyed this post, please share it, and be sure to subscribe.