Hi, I understand that there is a Merkle root hash in every block of the blockchain, but I don't really understand what it is used for. If miners do the validation by solving a puzzle, why do we need the Merkle root?
A proof-of-work (PoW) system (or protocol, or function) is a consensus mechanism first invented by Cynthia Dwork and Moni Naor, presented in a 1993 journal article. The term "proof of work" was coined and formalized in a 1999 paper by Markus Jakobsson and Ari Juels. It was developed as a way to prevent denial-of-service attacks and other service abuse (such as spam on a network). It is now the most widely used consensus algorithm, adopted by many cryptocurrencies such as Bitcoin and Ethereum.

How does it work?

In this method, a group of users competes to find the solution to a computationally hard puzzle. Whoever finds a solution first broadcasts the block to the network for verification, and once the other participants verify the solution, the block moves to the confirmed state.

The blockchain network consists of numerous decentralized nodes. Some of these nodes act as miners, responsible for adding new blocks to the blockchain. A miner repeatedly picks a candidate number (a nonce) and combines it with the data present in the block; the result is passed through a hash function, and if the output meets the required criteria, the newly generated block can be added to the main chain and the miner receives a reward for finding the solution. Once a result is found, the other nodes in the network verify and validate the outcome.

Every new block holds the hash of the preceding block, forming a chain of blocks that together store the information within the network. Changing a block would require re-mining that block and then regenerating all of its successors, which is practically impossible. This protects the blockchain from tampering.

What is a Hash Function?
A hash function maps data of any length to a fixed-size value. The output of a hash function is known as a hash value, hash code, digest, or simply a hash.

Hashing is quite secure: any slight change in the input results in a completely different output, which would then be discarded by network participants. The hash function always produces output of the same fixed length, regardless of the input size. It is a one-way function, i.e. it cannot be reversed to get the original data back; one can only hash candidate data and compare the result against a known hash.

Implementations

Nowadays, proof-of-work is used in a lot of cryptocurrencies. It was first implemented in Bitcoin, after which it became so popular that it was adopted by several other cryptocurrencies. Bitcoin uses the Hashcash puzzle; the difficulty of the puzzle is adjusted based on the total hash power of the network so that, on average, block formation takes approximately 10 minutes. Litecoin, a Bitcoin-based cryptocurrency, has a similar system, and Ethereum also implemented this protocol.

Types of PoW

Proof-of-work protocols can be categorized into two parts:

· Challenge-response
This protocol creates a direct link between the requester (client) and the provider (server). The requester needs to find the solution to a challenge that the provider has given, and the provider then validates the solution for authentication. Since the provider chooses the challenge on the spot, its difficulty can be adapted to the provider's current load. If the challenge-response protocol has a known solution, or the solution is known to exist within a bounded search space, the work on the requester side may be bounded.
· Solution–verification
These protocols do not have any prior link between the sender and the receiver. The client imposes a problem on itself, solves it, and then sends the solution to the server, which checks both the problem choice and the outcome. Like Hashcash, these schemes are based on unbounded probabilistic iterative procedures.

These two methods are generally based on the following three techniques:

CPU-bound: This technique depends on the speed of the processor; the higher the processor power, the greater the computation.
Memory-bound: This technique is bound by main-memory accesses (either latency or bandwidth), which determine computation speed.
Network-bound: In this technique, the client must perform a few computations but has to wait to receive tokens from remote servers.

List of proof-of-work functions

Here is a list of known proof-of-work functions:
o Integer square root modulo a large prime
o Weaken Fiat–Shamir signatures
o Ong–Schnorr–Shamir signature broken by Pollard
o Partial hash inversion
o Hash sequences
o Puzzles
o Diffie–Hellman–based puzzle
o Moderate
o Mbound
o Hokkaido
o Cuckoo Cycle
o Merkle tree-based
o Guided tour puzzle protocol

A successful attack on a blockchain network requires a lot of computational power and a lot of time. Proof of work makes attacks inefficient, since the cost incurred would be greater than the potential rewards for attacking the network, and miners are also incentivized not to cheat. It is still considered one of the most popular methods of reaching consensus in blockchains. It may not be the most efficient solution due to its high energy usage, but that is precisely what guarantees the security of the network.
Due to proof of work, it is practically impossible to alter any aspect of the blockchain, since any such change would require re-mining all subsequent blocks. It is also difficult for any user to take control of the network's computing power, since doing so requires enormous energy, making the hash computations expensive.
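The hash-puzzle mechanics described above can be sketched in a few lines. This is a toy illustration, not Bitcoin's actual header hashing (Bitcoin double-SHA-256es an 80-byte header and compares against a numeric target); the "puzzle" here is simply finding a nonce whose hash starts with enough zero hex digits:

```python
import hashlib

def proof_of_work(data: bytes, difficulty: int) -> int:
    """Search for a nonce whose hash meets the difficulty target."""
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce
        nonce += 1  # miners iterate candidate nonces

def verify(data: bytes, nonce: int, difficulty: int) -> bool:
    """Verification is cheap: one hash, versus many hashes for the search."""
    digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work(b"block data", difficulty=4)
assert verify(b"block data", nonce, 4)
```

Note the asymmetry that makes PoW useful: the search takes many hash attempts on average, while anyone can verify the result with a single hash.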
How many nodes would we lose if BTC used 32MB blocks?
Can someone help me with some estimates/quantify the security implications of such a change? (Worst case/expected case/best case) What approach do you take to such an estimate? (constructive posts only please, I honestly want to learn)
Newbie here. I'm not yet familiar with bitcoin mining, just a bit interested. The bitcoin block header, as we all know, consists of the nonce, the timestamp, the Merkle root, nBits, and the hash of the previous block. Miners usually increment the nonce by 1 until they exhaust all 2^32 possibilities and find the solution. However, I have read that it is very common for miners to exhaust all 2^32 combinations and not find a solution at all. As a result, they have to make slight changes to the timestamp and/or the Merkle root to try even more combinations. So, what is the probability of a miner exhausting 2^32 combinations without finding a valid nonce in a specific block? Does it have something to do with the bitcoin mining "difficulty" thingy? I'm so confused right now......
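To put rough numbers on this question: if a single hash meets the target with probability p, then trying all 2^32 nonces finds nothing with probability (1 - p)^(2^32) ≈ exp(-2^32 · p). With Bitcoin's difficulty D defined so that the expected number of hashes per block is roughly D · 2^32, this works out to exp(-1/D), which is essentially 1 at any modern difficulty. A quick sketch of that calculation:

```python
import math

def p_no_solution(difficulty: float) -> float:
    """Probability of exhausting all 2**32 nonces without a valid hash."""
    p_single = 1.0 / (difficulty * 2**32)   # chance one hash meets the target
    return math.exp(-(2**32) * p_single)    # ≈ (1 - p_single) ** (2**32)

# Even at difficulty 1, a miner exhausts the nonce space with probability
# exp(-1) ≈ 0.37; at modern difficulties it is a near-certainty, which is
# why miners also vary the timestamp and the coinbase extraNonce (changing
# the Merkle root) to get a fresh nonce space.
print(p_no_solution(1))
print(p_no_solution(1e13))
```

So yes: it is exactly the difficulty parameter that makes exhausting the 32-bit nonce without success the normal case rather than the exception.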
Bitcoin Witness: use the world's most secure, immutable, and decentralised public database to attest to the integrity of your files
About Bitcoin Witness
https://bitcoinwitness.com is a free service that allows you to take any file and have its fingerprint witnessed in a bitcoin transaction. The service then lets you download a proof file that can be used as verifiable evidence that your file's fingerprint matches the fingerprint witnessed in the bitcoin transaction. The verification can be done using open source software even if our website no longer exists in the future.
Protecting your data
We do not store your file's data; in fact, your file's data is never even sent to our servers. Instead, your file is analysed locally in the browser to generate a SHA-256 hash, which is your file's fingerprint. The only data we do store is the file name, the fingerprint (hash), and the proof file generated by the app, so that you can access and download proofs in the future. Anyone can retrieve the proof by presenting the original file at any time. As you witness files, their fingerprints are also stored in your local cache so that you can easily retrieve the proof files when you load bitcoin witness on that device. It is recommended that you download proofs once they are available, to remove any reliance on our service.
How it works
Bitcoin Witness uses the Chainpoint protocol for many of its operations. Chainpoint is a layer-two decentralised network that runs atop (and supports the scaling of) bitcoin. Currently there are ~6500 community-run Chainpoint nodes. Chainpoint nodes receive hashes and aggregate them together in a Merkle tree, and the root of this tree is then included in a bitcoin transaction. Your file's fingerprint becomes part of a tree that is initially secured and witnessed in a Chainpoint calendar block (a decentralised database maintained by Chainpoint nodes) before being witnessed in a bitcoin transaction (the most secure decentralised database in the world).
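The aggregation step can be illustrated with a minimal Merkle-tree sketch. This is a simplification: real Chainpoint proofs record each hash operation explicitly, and Bitcoin's own transaction tree double-hashes with little-endian byte order, but the pair-and-rehash idea is the same:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Pair up hashes, hash each concatenated pair, repeat until one root remains."""
    if not leaves:
        raise ValueError("need at least one leaf")
    level = leaves
    while len(level) > 1:
        if len(level) % 2 == 1:
            level = level + [level[-1]]  # duplicate the last hash when odd
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Three submitted fingerprints collapse into a single 32-byte root,
# which is all that needs to go into the bitcoin transaction.
leaves = [hashlib.sha256(f.encode()).digest() for f in ("a", "b", "c")]
print(merkle_root(leaves).hex())
```

This is why one bitcoin transaction can witness arbitrarily many files: only the root goes on-chain, and each submitter keeps the sibling hashes needed to link their leaf back to it.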
Steps performed to witness your file
The end-to-end process for witnessing your file and retrieving a downloadable proof takes around ~90 minutes, because we wait for 6 bitcoin block confirmations before the proof file is made available. The steps to witness a file are as follows:
1. Generate the file's fingerprint
When you select a file, it is processed locally in the browser using the SHA-256 algorithm to generate its fingerprint. We call it a fingerprint because if the same file is processed using this algorithm in the future, it will always result in the same hash value (fingerprint), while any modification to your file will result in a completely different hash value.
2. Combine the file's fingerprint with entropy from NIST
The National Institute of Standards and Technology (NIST) randomness beacon generates full-entropy bit strings and posts them in blocks every minute. The published values include a cryptographic link to all previous values to prevent retroactive changes. Your file's fingerprint is hashed with this random value to prove that the file was witnessed after that random value was generated.
3. Witness the file in the Chainpoint calendar
Chainpoint nodes aggregate your hash with other hashes in the network to create a Merkle tree and generate a partial proof. After ~12 seconds we retrieve a proof which includes the NIST value, timestamp information, and the other hashes in the tree required to verify your file's fingerprint in the anchor hash of a Chainpoint calendar block.
4. Witness the file in the bitcoin blockchain
The anchoring hash of the calendar block is then sent in the OP_RETURN of a bitcoin transaction. As a result, this value is included in the raw transaction body, allowing the transaction ID and the Merkle path from that transaction to the bitcoin block's Merkle root to be calculated. After 6 confirmations (~60 minutes) the final proof file is made available, containing all the Merkle path information required to verify your proof.
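Steps 1 and 2 above can be sketched as follows. The exact concatenation order and encoding Chainpoint uses when mixing in the beacon value are assumptions here, and the beacon value shown is a made-up placeholder:

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """Step 1: SHA-256 of the file contents, computed locally."""
    return hashlib.sha256(data).hexdigest()

def combine_with_entropy(fingerprint_hex: str, nist_value_hex: str) -> str:
    """Step 2: hash the fingerprint together with the NIST beacon value.
    (Assumed ordering: beacon bytes first, then fingerprint bytes.)"""
    return hashlib.sha256(bytes.fromhex(nist_value_hex) +
                          bytes.fromhex(fingerprint_hex)).hexdigest()

fp = file_fingerprint(b"contents of my file")
combined = combine_with_entropy(fp, "ab" * 32)  # hypothetical beacon value
print(fp)
print(combined)
```

Because the beacon value is unpredictable before it is published, the combined hash proves the file was witnessed after that minute's beacon output existed.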
Steps to verify a file was witnessed by Bitcoin
The easiest way to verify a file has been witnessed is to visit https://bitcoinwitness.com and upload the proof file or the original file. Bitcoin Witness performs the verification process and returns the relevant information about when the file was witnessed. That said, the benefit of the service is that even if the bitcoin witness app does not exist in the future, people must still be able to verify the file's integrity (don't trust us, trust bitcoin). There are 2 steps to verify that your file was witnessed. The steps seek to verify that both your original file and the downloaded proof file have not been modified since the time of the bitcoin transaction/block. These steps are outlined below and can be performed using open source software.
1. Verify your file has not been modified
Generate a SHA-256 hash of your file and check that the hash value generated matches the "hash" value in the proof file you are about to verify. There are plenty of free tools available for generating a hash of your file, and you can check the "hash" value in the proof file by opening it in a text editor.
2. Verify the proof file has not been modified
Re-run the operations set out in the proof file and then validate that the hash value produced at the end of the operations matches the Merkle root value in the bitcoin block. The Chainpoint Parse library is open source software that can be used to re-run the operations in the proof file. The result can be verified to match the bitcoin Merkle root using any block explorer.
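Step 2 amounts to replaying a list of hash operations. Here is a minimal sketch of such a replay; the ("left"/"right", sibling-hex) step format is an invented simplification of what Chainpoint proof files actually encode:

```python
import hashlib

def replay_proof(leaf_hex: str, ops: list[tuple[str, str]]) -> str:
    """Fold the leaf hash up the tree using the recorded sibling hashes."""
    current = bytes.fromhex(leaf_hex)
    for side, sibling_hex in ops:
        sibling = bytes.fromhex(sibling_hex)
        # "left" means the sibling sits to the left of our running hash
        pair = sibling + current if side == "left" else current + sibling
        current = hashlib.sha256(pair).digest()
    return current.hex()

# Toy check: a two-leaf tree whose root is sha256(leaf_a + leaf_b)
leaf_a = hashlib.sha256(b"file fingerprint").digest()
leaf_b = hashlib.sha256(b"some other hash").digest()
root = hashlib.sha256(leaf_a + leaf_b).hexdigest()
assert replay_proof(leaf_a.hex(), [("right", leaf_b.hex())]) == root
```

If the final value equals the Merkle root shown by a block explorer for that bitcoin block, both the file and the proof are intact; neither can be forged without breaking SHA-256.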
Future Vision and Roadmap
Today marks the release of the first version of the bitcoin witness app, which can be found at https://bitcoinwitness.com. The immediate focus is on some additional features users have already suggested:
Email / Push notifications when a proof file is available
Encrypted & decentralised storage of files (interested in the community's suggestions around technologies to use for this)
The broader vision and roadmap for bitcoin witness is to remove the need to trust organisations and each other with our data, and instead trust bitcoin. We want to enable a world where people can make claims about data and bitcoin's immutable ledger can be used to verify those claims. The current version allows people to claim "This data has not been modified since that point in time". An example of a future claim might be: "I was in possession of this data at that point in time".
Support us and get involved
This has been a fun learning experience. I would love it if you could all test out the app and give me feedback on the app, the user experience, and any roadmap items I should think about. I welcome any comments here, or join our telegram. For regular updates you can follow our twitter.
DISCLAIMER This Whitepaper is for Era Swap Network. Its purpose is solely to provide prospective community members with information about the Era Swap Ecosystem & Era Swap Network project. This paper is for information purposes only; it does not constitute, and is not intended to be, an offer of securities or any other financial or investment instrument in any jurisdiction. The Developers disclaim any and all responsibility and liability to any person for any loss or damage whatsoever arising directly or indirectly from (1) reliance on any information contained in this paper, (2) any error, omission or inaccuracy in any such information, or (3) any action resulting therefrom. Digital assets are extremely high-risk, speculative products. You should be aware of the risks involved and fully consider whether participating in digital assets is appropriate for you. You should only participate if you are an experienced investor with sophisticated knowledge of financial markets and you fully understand the risks associated with digital assets. We strongly advise you to take independent professional advice before making any investment or participating in any way. You should check what rules and protections apply in your respective jurisdiction before investing or participating in any way. The Creators & community will not compensate you for any losses from trading, investment or participating in any way. You should read this whitepaper carefully before participating and consider whether these products are right for you.
TABLE OF CONTENT · Abstract · Introduction to Era Swap Network · Development Overview · Era Swap Utility Platform · Alpha-release Development Plan · Era Swap Network Version 1: Specification · Bunch Structure · Converting ES-ERC20 to ES-Na · Conclusion · Era Swap Ecosystem · Social Links

Abstract

The early smart contracts of the Era Swap Ecosystem, like TimeAlly, Newly Released Tokens, Assurance, and BetDeEx, are deployed on the Ethereum mainnet. These smart contracts are finance-oriented (DeFi), i.e. most of the transactions involve spending or earning Era Swap tokens, which made paying the gas fees in Ether somewhat intuitive to the user (like withdrawal charges in a bank, or paying tax while purchasing burgers), but transactions that are not token-oriented, like adding a nominee or appointee voting, also require Ether. As more Era Swap Token Utility platform ideas kept being appended to the Era Swap Main Whitepaper, more non-financial transaction situations arose, like updating a status, sending a message, resolving a dispute, and so on. Paying extensively for such actions all day, waiting for the transaction to be included in a block, and then waiting for enough block confirmations due to potential chain re-organizations is counter-intuitive compared to existing free solutions like Facebook and Gmail. This is the main barrier that is stopping Web 3.0 from going mainstream. As alternatives to Ethereum, there are a few other smart contract development platforms that propose their own separate blockchains featuring higher transaction throughput, but they compromise on decentralization to improve transaction speed. Moreover, ecosystem tools are advancing faster on Ethereum than on any other platform due to its massive developer community.
With Era Swap Network, the team aims to achieve scalability, speed and low-cost transactions for the Era Swap Ecosystem (which is currently not feasible on the Ethereum mainnet), without compromising much on trustless asset security for Era Swap Community users.

Introduction to Era Swap Network

Era Swap Network (ESN) aims to solve the above-mentioned problems faced by Era Swap Ecosystem users by building a side-blockchain on top of the Ethereum blockchain using the Plasma Framework. Era Swap Network leverages the decentralisation and security of Ethereum together with the scalability achieved in the side-chain; this addresses the distributed blockchain trilemma. In most other blockchains, blocks are collections of transactions, and all the transactions in one block are mined by a miner in one step. Era Swap Network will instead consist of bunches of blocks of Era Swap Ecosystem transactions. A miner mines all the blocks in a bunch consecutively and commits the bunch-root to the ESN Plasma Smart Contract on the Ethereum mainnet.

Development Overview

Initially, we will start with a simple Proof-of-Authority (PoA) based consensus on the EVM to begin the development and testing of Era Swap Ecosystem smart contracts as quickly as possible on the test-net. We will call this the alpha release of the ESN test-net, and only internal developers will work with it while developing smart contracts for the Era Swap Ecosystem. Users' funds in a Plasma implementation with a simple consensus like PoA are still secured, since already committed bunch-roots cannot be reversed. Eventually, we want to arrive at a more control-decentralized consensus algorithm, probably Proof-of-Stake (PoS), so that even if the chain operator shuts down their services, a single Era Swap Ecosystem user somewhere in the world can keep the ecosystem alive by running the software on their system, and more people can similarly join to decentralize control further.
In this PoS version, we will modify the Parity Ethereum client in such a way that at least 50% of the transaction fees collected go to the Luck Pool of the NRT Smart Contract on the Ethereum mainnet, and the rest can be kept by the miner of the blocks/bunch of blocks if they wish. After achieving such an implementation, we will release it as a beta version to the community for testing the software on their computers with Kovan ERC20 Era Swaps (Ethereum test-net).

Era Swap Decentralised Ecosystem

The following platforms are to be integrated:
Era Swap Token Contract (adapted ERC20 on Ethereum) The original asset will lie on Ethereum to avoid loss due to any kind of failure in ESN.
Plasma Manager Contract (on Ethereum) To store ESN bunch headers on Ethereum.
Reverse Plasma Manager Contract (on ESN) Bridge to convert ES to ES native and ES native to ES. The user deposits ES on the Mainnet Plasma contract, submits proof on ESN, and gets ES native credited to their account in a decentralised way.
NRT Manager Contract (on Ethereum or on ESN) If it is possible to send ES from an ESN contract to the luck pool of the NRT Manager Contract on Ethereum, then that suffices; otherwise, the NRT Manager will need to be deployed on ESN for the ability to add ES to the luck pool.
Era Swap Wallet (React Native App for managing ESs and ES natives) Secure wallet that stores multiple private keys, mainly for managing ES and ES native, sending ES or ES native, and also for quick and easy BuzCafe payments.
TimeAlly (on Ethereum or on ESN) On whichever chain NRT Manager is deployed, TimeAlly would be deployed on the same chain.
Assurance (on Ethereum or on ESN) On whichever chain the NRT Manager is deployed, Assurance would be deployed on the same chain.
DaySwappers (on ESN) KYC manager for platform. For easily distributing rewards to tree referees.
TimeSwappers (on ESN) Freelance market place with decentralised dispute management.
SwappersWall (on ESN) Decentralised social networking with power tokens.
BuzCafe (on ESN) Listing of shops, finding shops easily, and quick payment.
BetDeEx (on ESN) Decentralised Prediction proposals, prediction and results.
DateSwappers (on ESN) Meeting ensured using cryptography.
ComputeEx (on Ethereum / centralised way) Exchange assets.
Era Swap Academy (on ESN / centralised way) Learn. Loop. Leap. How to implement ES Academy is not yet clear. One idea: if content is constantly being modified, then people whose subscription has expired will only have the hash of the old content, while the new content hash is only available to people who have completed Dayswapper KYC and paid for the course. Dayswapper KYC is required because this way people won't share their private keys with someone else.
Value of Farmers (tbd) The exchange of farming commodities produced by farmers: in VoF, commodities can be deposited to warehouses, where the depositors will get ERC721-equivalent tokens for their commodities (based on unique tagging).
DeGameStation (on ESN) Decentralised Gaming Station. Games in which players take turns can be written in a smart contract; games like Chess, Poker, and 3 Patti can be developed. Users can come to DeGameStation and join an open game, or start a new game and wait for other players to join.
Alpha-release Development Plan
Deploying Parity Node customized according to Era Swap Whitepaper with PoA consensus.
Setting up Plasma Smart Contracts.
Creating a bridge for ERC20 Swap from Ethereum test-net to ESN alpha test-net.
Alpha Version

Era Swap Network Version 1: Specification

The Version 1 release of ESN plans to fulfill the requirements for political decentralisation and transparency in the dApps of the Era Swap Ecosystem using blockchain technology. After acquiring a sufficient number of users, a version 2 construction of ESN will become feasible to enable administrative decentralization, such that the Era Swap Ecosystem will be run and managed by the Era Swap Community and will no longer require the operator's support for its functioning.

Era Swap Network (ESN) Version 1 will be a separate EVM-compatible sidechain attached to the Ethereum blockchain as its parent chain. ESN will achieve security through the Plasma Framework along with Proof-of-Authority consensus for faster finality. The idea behind the Plasma framework is to avoid the high transaction fees and long confirmation times of the Ethereum mainnet by doing all the ecosystem transactions off-chain and only posting a small piece of information to an Ethereum Smart Contract, representing the hash of many ecosystem transactions. Also, to allow movement of Era Swap Tokens from the Ethereum blockchain to ESN using cryptographic proof, a reverse plasma of Ethereum on ESN will be implemented. Submitting the hash of each individual ESN block to the ESN Plasma Smart Contract on Ethereum would force ESN to have a block time equal to or greater than Ethereum's 15 seconds, and it would be very costly for the operator to post that many hashes to an Ethereum Smart Contract. This is why the Merkle root of the hashes of a bunch of blocks will instead be submitted to the ESN Plasma Smart Contract on Ethereum.

Actors involved in the ESN:
Block Producer Nodes The fewer the nodes, the quicker the block propagation between block producers, which helps quick ecosystem transactions. We find that 7 block producers hosted with different cloud hosting companies and in different locations reduce the risk of a single point of failure of the Era Swap Ecosystem and facilitate 100% uptime of dApps. Block Producer Nodes will also be responsible for posting the bunch information to the Ethereum blockchain.
Block Listener Nodes The rest of the nodes will be block listeners, which sync new blocks produced by the block producer nodes. Plenty of public block listener nodes will be set up in various regions around the world for shorter ping times to the users of the Era Swap Ecosystem. Users submit their Era Swap Ecosystem transactions to one of these public nodes, which relays them to the rest of the Era Swap Network, eventually reaching the block producer nodes, which finalize a new block including the user's transaction.
Bunch Committers This will be an instance in the block producers which watches for new blocks confirmed on ESN, calculates bunch Merkle roots, and submits them to the ESN Plasma Smart Contract. This instance will also post hashes of new Ethereum blocks to ESN (after about 10 confirmations) for moving assets between both blockchains.
Users These will interact with dApps connected to some public ESN nodes, or they can install a block listener node themselves. They can sign and send transactions to the node they are connected to, and that node will relay their transactions to the block producer nodes, which finalize a block including their transaction.
A Bunch Structure in the Smart Contract will consist of the following: • Start Block Number: the number of the first ESN block in the bunch. • Bunch Depth: the Merkle tree depth of the blocks in the bunch. For example, if the bunch depth is 3, there are 8 blocks in the bunch, and if the bunch depth is 10, there are 1024 blocks. The bunch depth of bunches on the ESN Plasma Contract is designed to be variable: during the initial phases of ESN it will be high, e.g. 15, to limit ether expenditure, and it will be decreased in due course. • Transactions Mega Root: the Merkle root of all the transaction roots in the bunch. This is used by the Smart Contract to verify that a transaction was sent on the chain. • Receipts Mega Root: the Merkle root of all the receipt roots in the bunch. This is used to verify that the transaction execution was successful. • Timestamp: the time when the bunch proposal was submitted to the smart contract. After submission, there is a challenge period before it is finalised.
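The depth-to-size relation in the bunch structure is simply 2^depth, which is why a single on-chain submission can cover many ESN blocks:

```python
# Blocks covered by a bunch of a given Merkle depth, per the examples above.
def blocks_in_bunch(depth: int) -> int:
    return 2 ** depth

assert blocks_in_bunch(3) == 8        # depth 3  -> 8 blocks
assert blocks_in_bunch(10) == 1024    # depth 10 -> 1024 blocks
print(blocks_in_bunch(15))            # the high initial depth mentioned above
```

At the initial depth of 15, each submission to the Ethereum contract commits 32768 ESN blocks at once, amortising the ether cost per block.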
Converting ES-ERC20 to ES-Na and Back
On the Ethereum blockchain, the first-class cryptocurrency is ETH, and the other tokens managed by smart contracts are second-class. On ESN, there is an advancement: Era Swaps are the first-class cryptocurrency. This cryptocurrency features a better user experience, and to differentiate it from the classic ERC20 Era Swaps, it is called Era Swap Natives (ES-Na). According to the Era Swap Whitepaper, a maximum of 9.1 Million ES will exist, slowly released into circulation every month. Era Swaps will exist as ES-ERC20 as well as ES-Na, and one can be exchanged for the other at a 1:1 ratio. The following is how a user will convert ES-ERC20 to ES-Na:
The user gives an allowance to a Deposit Smart Contract, and following that calls the deposit method to deposit tokens to the contract.
On transaction confirmation, the user pastes the transaction hash into a portal which generates a Proof of Deposit string for the user. This string is generated by fetching all the transactions in the Ethereum block and generating a Transactions Patricia Merkle Proof to prove that the user's transaction was indeed included in the block, along with a Receipts Patricia Merkle Proof to confirm that the user's transaction was successful.
Using the same portal, the user submits the generated proofs to a Smart Contract on ESN, which releases the funds to the user. The user will, however, have to wait for the Ethereum block roots to be posted to ESN after enough confirmations, which takes about 3 minutes. Once that is done, the user's proofs are accepted and the exact amount of ES-Na is received on ESN.
The following is how a user will convert ES-Na to ES-ERC20:
ES-Na being a first-class cryptocurrency, the user simply sends ES-Na to a contract.
The user pastes the transaction hash into a portal which generates a Proof of Deposit for the user. Again, ES-Na being a first-class cryptocurrency, a Transactions Patricia Merkle Proof is enough to prove that the user's transaction was indeed included in the block. The portal also generates the proof of the block's inclusion in the bunch.
The user waits for the bunch confirmation on the Plasma Smart Contract, and once it is finalised, sends the proof to the Plasma Smart Contract to receive ES-ERC20.
Since the blocks are produced and transactions validated by a few block producers, there is a possibility of fraud by whoever controls the block producer nodes. Because ESN is based on the Plasma model, if the sidechain fails or the chain halts, users can hard-exit their funds directly from the Plasma Smart Contract on Ethereum by giving a Proof of Holdings.
Old ES Tokens Swapping with New ES Tokens
The old ES tokens will become valueless, as those tokens will not be accepted in ESN: the NRT (Newly Released Tokens) and TimeAlly contracts on mainnet are causing high gas costs to users, hence reducing interactions. Also, there was an event of theft of Era Swap tokens, and after consensus from the majority of Era Swap token holders, it was decided to create a new contract to reverse the theft and secure the value of the community's Era Swap tokens. Below is the strategy for swapping tokens:
TimeAlly and TSGAP: The majority of the Era Swap Community has participated in the TimeAlly Smart Contract, in which their tokens are locked for a certain period of time during which they cannot move them. Such holders will automatically receive TimeAlly staking of specific durations from the operator during the initialization of ESN.
Liquid Tokens: Holders of liquid Era Swap tokens have to transfer the old tokens to a specified Ethereum wallet address managed by the team. Following that, the team will audit the token source of the holder (to eliminate exchange of stolen tokens) and send new tokens back to the wallet address.
Post-Genesis Tokens Return Program
The primary asset holding of Era Swap tokens will exist on the Ethereum blockchain as an ERC20-compatible standard due to the highly decentralised nature of that blockchain. Similar to how users deposit tokens to a cryptocurrency exchange for trading and later withdraw them, users will deposit tokens to the ESN Contract to enter the Era Swap Ecosystem, and they can withdraw them back from the ESN Contract to exit the ecosystem network. The design of the token system will be such that it is compatible with a future shift (modification or migration of ESN version 1) to ESN version 2, in which an entirely new blockchain setup might be required. To manage liquidity, the following genesis structure will be followed:
Circulating Supply: 1.17 billion
Locked in Smart Contract (pending NRT releases): 7.93 billion
Though it looks like there are 9.1 × 2 = 18.2 billion ES, the cryptographic design ensures that at any point in time at least a total of 9.1 billion ES (ES-ERC20 + ES-Na) is locked. To unlock ES-Na on ESN, an equal amount of ES-ERC20 has to be locked on Ethereum, and vice versa. 9.1 billion ES-ERC20 will be issued by the ERC20 smart contract on the Ethereum blockchain, out of which the entire circulating supply (including liquid and TimeAlly holdings) of old ES will be received into a team wallet. The TimeAlly holdings of all users will be converted to ES-Na and distributed through the ESN TimeAlly smart contract by the team to TimeAlly holders at the same wallet addresses. Liquid user holdings will be sent back to users at the wallet address from which they sent the old ES tokens (because some old ES are deposited from exchange wallet addresses). ES-Na will be issued in the genesis block to an ESN Manager smart contract address, which will manage all deposits and withdrawals as well as NRT releases.
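The lock invariant described above can be expressed as a toy model (illustrative class and method names, not the actual contract code): every deposit locks ES-ERC20 and unlocks an equal amount of ES-Na, and every withdrawal does the reverse, so the combined locked amount never drops below 9.1 billion.

```python
TOTAL = 9_100_000_000  # ES issued on each chain (9.1 billion)

class PegModel:
    """Toy model of the ES-ERC20 <-> ES-Na two-way peg: unlocking tokens
    on one chain requires locking an equal amount on the other, so the
    combined locked amount never drops below 9.1 billion."""

    def __init__(self, circulating_erc20=1_170_000_000):
        self.free_erc20 = circulating_erc20           # liquid on Ethereum
        self.locked_erc20 = TOTAL - circulating_erc20
        self.free_esna = 0                            # liquid on ESN
        self.locked_esna = TOTAL                      # genesis: all ES-Na locked

    def deposit_to_esn(self, amount):
        """Lock ES-ERC20 on Ethereum, unlock the same amount of ES-Na."""
        assert 0 < amount <= self.free_erc20
        self.free_erc20 -= amount
        self.locked_erc20 += amount
        self.locked_esna -= amount
        self.free_esna += amount

    def withdraw_to_ethereum(self, amount):
        """Lock ES-Na on ESN, unlock the same amount of ES-ERC20."""
        assert 0 < amount <= self.free_esna
        self.free_esna -= amount
        self.locked_esna += amount
        self.locked_erc20 -= amount
        self.free_erc20 += amount

peg = PegModel()
peg.deposit_to_esn(500_000_000)
peg.withdraw_to_ethereum(200_000_000)
assert peg.locked_erc20 + peg.locked_esna >= TOTAL  # the 9.1B lock invariant
```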
Following are identified risks to be taken care of during the development of ESN:
Network spamming: Attackers can purchase ES from an exchange and make a lot of transactions between two accounts. This is solved by gas fees; a minimum gas price of 200 nanoES will be set, which can be changed as needed.
DDoS: Attackers can query public nodes for computationally heavy output data, overloading the public node with requests so that genuine requests get delayed. The block producers' RPC is private, so they will continue to produce blocks. To manage denial of service for users, the provider in dApps needs to be designed so that many public nodes are queried for simple information (say, the latest block number) and the one that responds quickest is selected.
Cloud provider outage: To minimize downtime caused by cloud providers, there will be enough nodes on multiple cloud providers to ensure at least one block producer is alive.
User deposit double spending: A user deposits ES on Ethereum and gets ES-Na on ESN; then a re-org on the Ethereum mainnet reverses the user's transaction. Under PoW, a 51% attack can change blocks, but Ethereum is now mature enough that, statistically, forked blocks are at most of height 2, so it is safe to require 15 confirmations.
Exit game during smooth functioning: A user starts a hard exit directly from the Plasma smart contract on Ethereum, then also spends his funds on the plasma chain. To counter this, the exit game is enabled only when ESN halts, i.e. fails to submit a block header within the allotted time, because it is difficult to mark a user's funds as spent on ESN.
Vulnerability in ecosystem smart contracts: Using traditional methods to deploy smart contracts results in a situation where, if a bug is found later, it is not possible to change the code.
Using a proxy construction for every ecosystem smart contract solves this problem. Authority to change a proxy can be given to a small committee in which 66% of the votes are required; this prevents a malicious change of code through the compromise of a single account or a similar scenario. ChainID replay attacks: Using old and traditional ways to interact with dApps can cause losses to users, hence every dApp will be audited for this.
Hi Bitcoiners! I'm back with the 28th monthly Bitcoin news recap. For those unfamiliar, each day I pick out the most popular/relevant/interesting stories in Bitcoin and save them. At the end of the month I release them in one batch, to give you a quick (but not necessarily the best) overview of what happened in Bitcoin over the past month. You can see recaps of the previous months on Bitcoinsnippets.com A recap of Bitcoin in April 2019 Adoption
Neutro Yellow Paper — "Simplified Payment Verification nodes" chapter.
https://preview.redd.it/yq9pp0o7l3631.jpg?width=737&format=pjpg&auto=webp&s=0cd8cef09126e0f3555e91685891042079e12411 This is part two of our short series, and here we'll try to explain in simple terms: how can I, operating a lightweight node, verify the correctness of the chain, or compare two or more seemingly valid chains given to me by my peers? Simplified Payment Verification is a term borrowed from Bitcoin. The meaning is the same, though the method is different. In Bitcoin I can store just block headers on my device. It's easy to calculate the PoW difficulty combined over all blocks, as PoW itself is difficult and costly to perform but easy and cheap to verify. In Neutro, the focal point of consensus is the relative Proof of Work combined over all blocks, and to know the relative Proof of Work we must know how many votes (or rather, votes cast using how many validation tokens) are included in each block; so we would have to verify a lot of votes for each block. That is a problem. But what if every block carried a simplified "list" of votes in the same order as they are included in that block's Merkle tree? Let's say I want to create a false chain. I include this list in every block of my chain along with the proper Merkle tree of votes. In order to raise my blockchain's relative Proof of Work, I must include some "artificial" votes (created by me in order to cheat the network). I can try to cheat the SPV nodes by either: a) including in my block's "list" of votes false values that do not exist in the Merkle tree b) including in the Merkle tree votes that are false or otherwise incorrect https://preview.redd.it/c8vyeew3l3631.png?width=800&format=png&auto=webp&s=2e4cf867ee6ec3a41849298ad64bba6cb610f109 As an SPV node I can use the following method of "random questions". As I download a block, I see a list of votes.
First I should check whether, assuming the list is not falsified, the relative Proof of Work for that block would result in the value declared within that block. If it wouldn't, it's an incorrect chain. If it would, I then ask for random votes from that list and check their correctness (value, signature, adherence to the protocol's rules). Let's say there are 20 votes in the list. I ask for votes number 3, 8, 19, 20. My peer "shows" them to me by providing the correct Merkle paths (as the votes are stored in the form of a Merkle tree). What's the idea behind this? The more someone wants to cheat the network, the more false votes he must include in his chain. The more false votes he includes, the more likely SPV nodes are to detect his lie. This method allows an SPV node to ask for and verify only a small portion of data, yet, because of the logic above, the probability of catching a liar grows with the "size" of his lie. If an adversary produces just a small number of false votes within his chain, it won't help him much. If he produces many false votes, the probability of successfully cheating any SPV node decreases with every false vote included; with proper parameters, this probability is extremely low even at a few percent of false votes. So this is the method for easy verification of the main-chain. Once I've verified the main-chain, it's easy to verify whether a given transaction was performed in a given block of a given shard-chain: I just follow the Merkle path from a given main-chain block to a given shard-chain block header, then from that shard-chain block header (or, to be precise, the Merkle root included in it) to the data within that shard-chain block that I need. Source: https://medium.com/@neutroprotocol/explain-it-to-me-like-im-14-and-i-know-how-bitcoin-works-251343781336 https://neutro.io/
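The sampling argument above can be quantified: if a fraction f of the votes in a chain's list are false and an SPV node samples k votes uniformly at random, the chance of catching at least one false vote is 1 - (1 - f)^k. A quick sketch:

```python
def detection_probability(false_fraction, samples):
    """Chance that sampling `samples` votes uniformly at random catches at
    least one false vote when a fraction `false_fraction` are false."""
    return 1.0 - (1.0 - false_fraction) ** samples

# Even a small lie is caught quickly as the sample count grows:
assert detection_probability(0.5, 1) == 0.5
assert detection_probability(0.05, 90) > 0.99   # 5% false votes, 90 samples
```

This is why the probability of fooling the network shrinks with the size of the lie: every additional false vote raises `false_fraction`, and every SPV query multiplies the attacker's risk.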
Soft-forking the block time to 2 min: my primarily silly and academic (but seemingly effective) entry to the "increase the blockchain's capacity in an arbitrarily roundabout way as long as it's a softfork" competition
So given that large portions of the bitcoin community seem to be strongly attached to this notion that hard forks are an unforgivable evil, to the point that schemes containing hundreds of lines of code are deemed to be a preferred alternative, I thought that I'd offer an alternative strategy to increasing the bitcoin blockchain's throughput with nothing more than a soft fork - one which is somewhat involved and counterintuitive, but for which the code changes are actually quite a bit smaller than some of the alternatives; particularly, "upper layers" of the protocol stack should need no changes at all. Notes:
Unlike the "generalized softfork" approach of putting the "real" merkle root in the coinbase of otherwise mandatorily empty blocks, this strategy makes very little change to the semantics of the protocol. No changes to block explorers or wallets required.
The point of this is largely academic, to show what is possible in a blockchain protocol. That said, if some segwit-as-block-size-increase supporters are interested in segwit because it increases the cap in a way that does not introduce a slippery slope, block time decreases are a viable alternative strategy, as there is a limit to how low block time can go while preserving safety and so the slippery slope has a hard stop and does not extend infinitely.
My personal actual preference would be a simple s/1000000/2000000/g (plus a cap of 100-1000kb/tx to address ddos issues), though I also believe that people on all sides here are far too quick to believe that the other side is evil and not see that there are plenty of reasonable arguments in every camp. I recommend this, this and this as required reading.
There's some chance that some obscure rule of the bitcoin protocol makes this all invalid, but then I don't know about it and did not see it in the code.
The attack vector is as follows. Instead of trying to increase the size of an individual block directly, we will create a softfork where under the softfork rules, miners are compelled to insert incorrect timestamps, so as to trick the bitcoin blockchain into retargeting difficulty in such a way that on average, a block comes every two minutes instead of once every ten minutes, thereby increasing throughput to be equivalent to a 5 MB block size. First, let us go over the bitcoin block timestamp and difficulty retargeting rules:
Every block must include a timestamp.
This timestamp must at least be greater than the median of the timestamps of the previous eleven blocks (code here and here)
For a node to accept a block, this timestamp must be at most 2 hours ahead of the node's "network-adjusted time" (code here), which can itself be at most 70 minutes ahead of the node's timestamp (code here); hence, we can never go more than 3.17 hours into the future
Every 2016 blocks, there is a difficulty retargeting event. At that point, we calculate D = the difference between the latest block time and the block time of the block 2016 blocks before. Then, we "clamp" D to be between 302400 and 4838400 seconds (1209600 seconds = 2 weeks is the value that D "should be" if difficulty is correctly calibrated). We finally adjust difficulty by a factor of 1209600/D: for example, if D = 604800, difficulty goes up by 2x, if D = 1814400, difficulty goes down by 33%, etc. (code here)
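The retargeting rule above can be sketched as follows (a simplified model that works directly on a difficulty value, rather than the compact target encoding Bitcoin actually uses):

```python
TARGET = 14 * 24 * 3600  # 1209600 seconds: two weeks per 2016 blocks

def retarget(old_difficulty, measured_timespan):
    """Clamp the measured timespan D into [TARGET/4, TARGET*4], then
    scale difficulty by TARGET / D (at most a 4x move either way)."""
    d = max(TARGET // 4, min(measured_timespan, TARGET * 4))
    return old_difficulty * TARGET / d

assert retarget(1.0, 604800) == 2.0               # blocks came 2x too fast
assert round(retarget(1.0, 1814400), 3) == 0.667  # 1.5x too slow: -33%
assert retarget(1.0, 1) == 4.0                    # clamped: at most 4x up
assert retarget(1.0, 10**9) == 0.25               # clamped: at most 4x down
```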
The last rule ensures that difficulty adjustments are "clamped" between a 4x increase and a 4x decrease no matter what. So, how do we do this? Let's suppose for the sake of simplicity that in all examples the soft fork starts at unix time 1500000000. We could say that instead of putting the real time into blocks, miners should put 1500000000 + (t - 1500000000) * 5; this would make the blockchain think that blocks are coming 5x as rarely, and so it would decrease difficulty by a factor of 5, so that from the point of view of actual time blocks will start coming in every two minutes instead of ten. However, this approach has one problem: it is not a soft fork. Users running the original bitcoin client will very quickly start rejecting the new blocks because the timestamps are too far into the future. Can we get around this problem? You could use 1500000000 + (t - 1500000000) * 0.2 as the formula instead, and that would be a soft fork, but that would be counterproductive: if you do that, you would instead reduce the real-world block throughput by 5x. You could try to look at schemes where you pretend that blocks come quickly sometimes and slowly at other times and "zigzag" your way to a lower net equilibrium difficulty, but that doesn't work: for mathematical reasons that have to do with the fact that 1/x always has a positive second derivative, any such strategy would inevitably gain more difficulty going up than it would lose coming down (at least as long as it stays within the constraint that "fake time" must always be less than or equal to "real time"). However, there is one clever way around this. We start off by running a soft fork that sets fake_time = 1500000000 + (real_time - 1500000000) * 0.01 for as long as is needed to get fake time 12 weeks behind real time.
However, we add an additional rule: every 2016th block, we set the block timestamp equal to real time (this rule is enforced by soft-fork: if you as a miner don't do this, other miners don't build on top of your block). This way, the difficulty retargeting algorithm has no idea that anything is out of the ordinary, and so difficulty just keeps adjusting as normal. Note that because the timestamp of each block need only be higher than the median of the timestamps of the previous 11 blocks, and not necessarily higher than that of the immediately previous block, it's perfectly fine to hop right back to fake time after those single blocks at real time. During those 12 weeks, we also add a soft-forking change which invalidates a random 20% of blocks in the first two weeks, a random 36% of blocks in the second two weeks, 50% in the third two weeks, etc; this creates a gap between in-protocol difficulty and de-facto difficulty that will hit 4x by the time we start the next step (we need this to avoid having an 8-week period where block throughput is at 250 kb per 10 minutes). Then, once we have 12 weeks of "leeway", we perform the following maneuver. We do the first retarget with the timestamp equal to fake time; this increases difficulty by 4x (as the timestamp difference is -12 weeks, which gets clamped to the minimum of 302400 seconds = 0.5 weeks). The retarget after that, we set the timestamp 8 weeks ahead of fake time, so as to get the difficulty down 4x. The retargeting round after that, we determine the actual retargeting coefficient c that we want to have, and clamp it so that 0.5 <= c < 2. We set the block timestamp c * 2 weeks ahead of the timestamp of the previous retargeting block. Then, in the retargeting round after that, we set the block timestamp back at fake time, and start the cycle again. Rinse and repeat forever. 
Diagram here: http://i.imgur.com/sqKa00e.png Hence, in general we spend 2/3 of our retargeting periods in lower-difficulty mode, and 1/3 in higher-difficulty. We choose c to target the block time in lower-difficulty mode to 30 seconds, so that in higher-difficulty mode it will be two minutes. In lower-difficulty mode, we add another softfork change in order to make a random 75% of blocks that get produced invalid (eg. one simple way to do this is to just pretend that the difficulty during these periods is 4x higher), so the actual block time during all periods will converge toward two minutes - equivalent to a throughput of 5 MB every ten minutes. Note that a corollary of this is that it is possible for a majority of miners to collude using the technique above to make the block rewards come out 5x faster (or even more) than they are supposed to, thereby greatly enriching themselves at the expense of future network security. This is a slight argument in favor of bitcoin's finite supply over infinite supply models (eg. dogecoin), because in an infinite supply model this means that you can actually permanently expand issuance via a soft fork rather than just making the existing limited issuance come out faster. This is a quirk of bitcoin's difficulty adjustment algorithm specifically; other algorithms are immune to this specific trick though they may be vulnerable to tricks of their own. Homework:
Come up with a soft-fork strategy to change the mining algorithm to Keccak
Determine the minimum block time down to which it is possible to soft-fork Ethereum using a timestamp manipulation strategy. Do the same for Kimoto Gravity Well or whatever your favorite adjustment algorithm of choice is.
EDIT: I looked at the code again and it seems like the difficulty retargeting algorithm might actually only look 2015 blocks back rather than 2016 every 2016 blocks (ie. it checks the timestamp difference between block 2016*k+2015 and 2016*k, not 2016*k+2016 and 2016*k as I had assumed). In that case, the timestamp dance and the initial capacity adjustment process might actually be substantially simpler than I thought: it would simply be a one-step procedure of always setting the timestamp at 2016*k to equal real time and then setting the timestamp of 2016*k+2015 to whatever is convenient for achieving the desired difficulty adjustment. EDIT 2: I think I may have been wrong about the effectiveness of this strategy being limited by the minimum safe block time. Specifically, note that you can construct a soft fork where the in-protocol difficulty drops to the point where it's negligible, and say that all blocks where block.number % N != 0 have negligible difficulty but blocks where block.number % N = 0 are soft-forked to have higher de-facto difficulty; in this case, a miner's optimal strategy will be to simultaneously generate N-1 easy blocks and a hard block and if successful publish them as a package, creating a "de-facto block" of theoretically unlimited size.
WitLess Mining - Removing Signatures from Bitcoin Cash
A Selfish Miner Variant to Remove Signatures from Bitcoin Cash WitLess Mining is a hypothetical adversarial hybrid fork leveraging a variant of the selfish miner strategy to remove signatures from Bitcoin Cash. By orphaning blocks produced by miners unwilling to blindly accept WitLess blocks without validation, a miner or cartel of collaborating miners with a substantial, yet less than majority, share of the total Bitcoin Cash network hash power can alter the Nash equilibrium of Bitcoin Cash’s economic incentives, enticing otherwise honest miners to engage in non-validated mining. Once a majority of network hash power has switched to non-validated mining it will be possible to steal arbitrary UTXOs using invalid signatures - even non-existent signatures. As miners would risk losing all of their prior rewards and fees were signatures to be released that prove their malfeasance, it could even be possible to steal coins using non-existent transactions, leaving victims no evidence to prove the theft occurred. WitLess Mining introduces two new data structures, the WitLess Transaction (wltx) and the WitLess Transaction Input (wltxin). These data structures are modifications of their standard counterpart data structures, Transaction (tx) and Transaction Input (txin), and can be used as drop-in replacements to create a WitLess Block (wlblock). These new structures provide WitLess Miners signature-withheld (WitLess) transaction data sufficient to reliably update their local UTXO sets based on the transactions contained within a WitLess block while preventing validation of the transaction signature scripts. The specific mechanism by which WitLess Mining transaction and block data will be communicated among WitLess miners is left as an exercise for the reader. The author suggests it may be possible to extend the existing Bitcoin Cash gossip network protocol to handle the new WitLess data structures. 
Until WitLess Mining becomes well-adopted, it may be preferable to implement an out-of-band mechanism for releasing WitLess transactions and blocks as a service. In order to offset potential revenue reduction due to the selfish mining strategy, the WitLess Mining cartel might provide a distribution service under a subscription model, offering earlier updates for higher tiers. An advanced distribution system could even implement a per-block bidding option, creating a WitLess information market. Regardless of the distribution mechanism chosen, the risk of having their blocks orphaned will provide a strong economic incentive for rational short-term profit-maximizing agents to seek out WitLess transaction and block data. To encourage other segments of the Bitcoin Cash ecosystem to adopt WitLess Mining, the WitLess data structures are designed specifically to facilitate malicious fee-based transaction replacement:
The lock_time field of wltx can be used to override the corresponding field in the standard form of a transaction, allowing the sender to introduce an arbitrary delay before their transaction becomes valid for inclusion in a block.
The sequence field of wltxin can be used to override the corresponding field in the standard form of a transaction input, allowing the sender to set a lower sequence number thereby enabling replacement even when the standard form indicates it is a final version.
It is expected that fee-based transaction replacement will be particularly popular among malicious users wishing to defraud 0-conf accepting merchants as well as the vulnerable merchants themselves. The feature is likely to encourage higher fees from the users resulting in their WitLess transaction data fetching a premium price under subscription- or market-based distribution. Malicious users may also be interested in subscribing directly to a WitLess Mining distribution service in order to receive notification when the cartel is in a position to reliably orphan non-compliant blocks, during which time their efforts will be most effective.
WitLess Block - wlblock
The wlblock is an alternate serialization of a standard block, containing the set of wltx as a direct replacement of the tx records. The hashMerkleRoot of a wlblock should be identical to the corresponding value in the standard block and can be verified to apply to a set of txid by constructing a Merkle root of txid_commitments from the wltx set. The same proof-of-work validation that applies to the standard block header also ensures the legitimacy of the wltx set, thanks to a WitLess Commitment included as an input to the coinbase tx.
WitLess Transaction - wltx
Transaction data format version as it appears in the corresponding tx
Always 0x5052 and indicates that the transaction is WitLess
Number of WitLess transaction inputs (never zero)
A list of 1 or more WitLess transaction inputs or sources for coins
Number of transaction outputs as it appears in the corresponding tx
A list of 1 or more transaction outputs or destinations for coins as it appears in the corresponding tx
The block number or timestamp at which this transaction is unlocked. This can vary from the corresponding tx, with the higher of the two taking precedence.
Each wltx can be referenced by a wltxid generated in a way similar to the standard txid.
WitLess Transaction Input - wltxin
The previous output transaction reference as it appears in the corresponding txin
The length of the signature script as it appears in the corresponding txin
32 or 0
Only for the first wltxin of a transaction, the txid of the tx containing the corresponding txin; omitted for all subsequent wltxin entries
Transaction version as defined by the sender. Intended for replacement of transactions when sender wants to defraud 0-conf merchants. This can vary from the corresponding txin, with the lower of the two taking precedence.
WitLess Commitment Structure
A new block rule is added which requires a commitment to the wltxid. The wltxid of the coinbase WitLess transaction is assumed to be 0x828ef3b079f9c23829c56fe86e85b4a69d9e06e5b54ea597eef5fb3ffef509fe. A witless root hash is calculated with all those wltxid as leaves, in a way similar to the hashMerkleRoot in the block header. The commitment is recorded in a scriptPubKey of the coinbase tx. It must be at least 42 bytes, with the first 10 bytes being 0x6a284353573e3d534e43, that is:
1-byte - OP_RETURN (0x6a)
1-byte - Push the following 40 bytes (0x28)
8-byte - WitLess Commitment header (0x4353573e3d534e43)
32-byte - WitLess Commitment hash: Double-SHA256(witless root hash)
43rd byte onwards: Optional data with no consensus meaning
If more than one scriptPubKey matches the pattern, the one with the highest output index is assumed to be the WitLess commitment.
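Assuming the byte layout above, the commitment script could be assembled like this (an illustrative sketch, not reference code; `witless_commitment_script` is a hypothetical helper name):

```python
import hashlib

def dsha256(data):
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

COMMITMENT_HEADER = bytes.fromhex("4353573e3d534e43")  # 8-byte header

def witless_commitment_script(witless_root):
    """OP_RETURN (0x6a), push-40 (0x28), 8-byte header, then the 32-byte
    double-SHA256 of the witless root hash: 42 bytes in total."""
    assert len(witless_root) == 32
    return b"\x6a\x28" + COMMITMENT_HEADER + dsha256(witless_root)

script = witless_commitment_script(b"\x00" * 32)
assert len(script) == 42
assert script[:10] == bytes.fromhex("6a284353573e3d534e43")
```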
Thoughts on scaling with larger blocks and universal blockchain pruning
While Bitcoin Core is experimenting with payment channels that will use the blockchain as a trusted intermediary to open and close channels, with a throughput theoretically limited only by latency (if it works at all), the scaling solution chosen by the Bitcoin Cash community is increasingly larger blocks to track demand and scale as the network grows. That, however, creates new problems. The first problem is block propagation. Every time a node wins the proof-of-work race, it has to broadcast its block for validation and acceptance by the other nodes, which in practical terms involves uploading files through the p2p network. Larger blocks, therefore, mean larger files to upload, and the practical block size limit is set by the mean/median network bandwidth. If I remember correctly, the "compact blocks" implementation used by both Bitcoin Core and Cash can achieve up to 98% compression, which is great and allows for a tremendous increase in block size without fear of clogging up the network; but a scalable, future-proof payment system must be able to theoretically handle millions of transactions per second, which would require bandwidth measured in Gbps. The solution to this problem, instead of hoping that Gbps connections will be commonplace by then, is improving the compression algorithms. One such improved algorithm is Graphene, which can achieve up to 99.8% compression, requiring bandwidth measured in Mbps for a throughput of millions of tx/s. The second problem, and perhaps the most discussed one regarding future centralization of Bitcoin Cash, is the storage size of the blockchain. While it sits at around 160GB right now, increased demand and larger blocks bring an exponential increase in the size of the blockchain database. One million tx/s (i.e. 600 million transactions per 10-minute block), averaging 250 bytes per transaction, would result in 150GB blocks, and an increase of around 7800TB in required storage per year.
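The storage figures above check out with simple arithmetic:

```python
tx_per_sec = 1_000_000        # one million transactions per second
block_interval = 600          # seconds per block
bytes_per_tx = 250            # average transaction size

block_size = tx_per_sec * block_interval * bytes_per_tx
blocks_per_year = 365 * 24 * 3600 // block_interval   # 52560 blocks
storage_per_year = block_size * blocks_per_year

assert block_size == 150 * 10**9                 # 150 GB per block
assert round(storage_per_year / 10**12) == 7884  # ~7800 TB per year
```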
This is hardly possible for most nodes, even with the expected Moore's-Law increase in consumer-grade SSDs, unless we experience some technological breakthrough allowing seemingly endless storage capacity. Once again, this is not something we should expect to happen, but rather work around to ensure the continued decentralization of full nodes on the network. One solution to this problem is "pruning" the blockchain, as a means to ditch spent outputs and only keep the unspent ones. However, this only helps lightweight/pruned nodes (which don't relay previous blocks or Merkle paths), since it's impossible to actually prune the blockchain itself: any change in past blocks requires a change in all subsequent blocks, and the new pruned blockchain would only be valid if we employed a massive amount of computational power to re-solve the proof-of-work puzzle of every individual block since Genesis. Hardly an easy solution. A work-around to re-mining the pruned blocks would be to hard fork the network to accept a completely invalid blockchain up to the "pruning block", from where valid blocks would continue to be built. This has numerous problems, but the biggest might be that the community will never allow nearly 10 years' worth of blocks to suddenly go invalid for the sake of scaling, and thus the hard fork would never gain traction. EDIT: In reality, nodes running with a pruned blockchain keep track of the UTXO set (i.e. every unspent transaction output and its associated public key) and erase every transaction up to a certain number of blocks back, keeping only the block headers (including the Merkle root) and the Merkle branches. This allows any participant to request the Merkle path to a certain transaction and locally verify its authenticity without downloading the actual blocks.
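Verifying such a Merkle path can be sketched as follows (using Bitcoin-style double SHA-256; the `sibling_is_left` flag is an assumption about how the branch is encoded, not a wire-format specification):

```python
import hashlib

def dsha256(data):
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_path(txid, merkle_root, path):
    """Recompute the root from a leaf hash and its Merkle branch.
    `path` is a list of (sibling_hash, sibling_is_left) pairs, ordered
    from the leaf up to the root."""
    h = txid
    for sibling, sibling_is_left in path:
        h = dsha256(sibling + h) if sibling_is_left else dsha256(h + sibling)
    return h == merkle_root

# Tiny two-leaf tree: root = H(leaf0 || leaf1)
leaf0, leaf1 = dsha256(b"tx0"), dsha256(b"tx1")
root = dsha256(leaf0 + leaf1)
assert verify_merkle_path(leaf0, root, [(leaf1, False)])
assert verify_merkle_path(leaf1, root, [(leaf0, True)])
```

A pruned node that keeps only headers can run exactly this check against the `hashMerkleRoot` in a stored header, so proving inclusion needs only the branch, not the full block.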
However, there are situations where one must validate the blocks themselves instead of trusting the nodes one is connected to, and since each block references a block which you can't expect to be valid without checking, one ends up needing to validate every previous block until all coinbase transactions are accounted for, or even all the way back to the Genesis block; so it is very important that there exist full nodes running the blockchain in its entirety. I propose a solution to this problem with a "net settlement hard fork" that would consolidate the complete UTXO set in the blockchain and render the previous blocks unnecessary for the security of the system. On a predetermined block, similarly to how the reward halvings are scheduled, a special block would be mined, which I'll refer to as Exodus (actually, Exodus#1, as more of these special blocks would have to be mined from time to time). This block would contain "net settlement" transactions of all unspent outputs back to their rightful public keys. For example, the wallet that received the first coinbase transaction ever, likely belonging to Satoshi Nakamoto (1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa), has 1297 unspent outputs as of now, most of which are donations intended for Satoshi. In the Exodus block, all these outputs would be combined and sent back to that wallet as one net-settled transaction of 66.87633136 BTC. Since every valid transaction requires a signature from the private key holder, these transactions would be considered invalid in any block except Exodus, the special block where the community has agreed to overlook the lack of signatures.
In order to prevent malicious actors from trying to steal outputs from other people, or even create new outputs and send them to themselves, the hard fork would be programmed far in advance, so all miners can create their own [identical] Exodus block independently, incentivized to do so by the expected regular block reward (block_reward) plus an Exodus block reward calculated by adding every individual transaction size (tx_size_i) and multiplying by some agreed-upon maximum transaction fee (max_tx_fee). Therefore, the total coinbase reward for mining that block would be block_reward + max_tx_fee * Σtx_size_i. This block would have an unlimited size, and would define the end of the "first blockchain era", from Genesis to Exodus, which can be kept by anyone who chooses to do so. Since Exodus contains all the net-settled transactions from the historical blockchain, full nodes aren't required to keep any previous blocks in their entirety to ensure transaction validity, effectively ditching several GB/TB of required storage. Every new transaction referencing the old blockchain would need to be rejected until every wallet is fully synced and starts to reference the Exodus transactions instead, which shouldn't take long, but I might be understating how disruptive this could be. Subsequent blocks could have their maximum block sizes automatically adjusted, similarly to how mining difficulty is adjusted now, without worrying about database size. New "net settlement hard forks" would have to be performed periodically to limit the blockchain's size, similarly to how Monero ensures periodic upgrades to its protocol (it hard forks every 6 months). This would ensure that throughput can increase as needed, while the database is regularly pruned to preserve proper decentralization of full nodes. Feel free to criticize the idea, and perhaps point me to better scaling solutions that I might be overlooking.
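The proposed coinbase reward formula is straightforward to compute (the numbers below are purely illustrative):

```python
def exodus_reward(block_reward, max_tx_fee, tx_sizes):
    """Total coinbase reward for the hypothetical Exodus block:
    block_reward + max_tx_fee * sum of all settlement transaction sizes."""
    return block_reward + max_tx_fee * sum(tx_sizes)

# Illustrative numbers only: 12.5 BTC base reward in satoshis,
# 1 sat/byte max fee, three settlement transactions.
reward = exodus_reward(1_250_000_000, 1, [250, 300, 400])
assert reward == 1_250_000_950
```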
During initial block download, how is the value of "verificationprogress" calculated?
Running Bitcoin. During initial block download, I see this parameter: "verificationprogress": 0.2109286112882065 I wondered how this value is calculated and my initial suspicion was that it divides the number of blocks it has downloaded so far by the total number of block headers it has received. But in my case, that number is different: "blocks": 367986 "headers": 562456 Blocks/headers: 0.6542485101 So it must be calculated in some other manner. My second guess is that it's calculating the total number of transactions it has processed up to now against the total number in all 562456 blocks. But then I started to wonder, where would it get the information about the total number of transactions in all 562456 blocks? If it hasn't verified those blocks yet, how could it accurately know how many transactions there are in them? Could the merkle roots be used for this somehow? I'd like to know where the value 0.21... comes from.
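For what it's worth, Bitcoin Core doesn't use block or header counts for this figure: it divides the transactions processed so far by an *estimate* of the total transaction count, derived from hard-coded chain parameters (the tx count at a recent known block, plus an assumed tx/sec rate since then). A rough model, with all parameter values hypothetical:

```python
import time

def guess_verification_progress(n_tx_done, checkpoint_tx_count,
                                checkpoint_time, tx_rate, now=None):
    """Rough model of Bitcoin Core's verificationprogress estimate:
    transactions processed so far, divided by an estimated total.
    The estimate comes from hard-coded chain parameters (tx count at
    a known block plus an assumed tx/sec rate since then), not from
    the as-yet-unverified blocks themselves."""
    now = time.time() if now is None else now
    estimated_total = checkpoint_tx_count + tx_rate * (now - checkpoint_time)
    return n_tx_done / estimated_total

# Hypothetical numbers: 100M txs processed, a checkpoint of 150M txs
# taken 50M seconds ago at an assumed rate of 1 tx/sec.
progress = guess_verification_progress(100e6, 150e6, 0, 1.0, now=50e6)
```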
Enhanced peer management with inbound connection eviction, network groups, and a new peer store.
Revised and refactored the syscalls for future script development.
More demos: partial signature, non-interactive transfer, fixed cap UDT.
Initiated Swift and Java SDKs.
Changes in RFCs
The RFC (Request for Comments) process is intended to provide an open and community-driven path for new protocols, improvements, and best practices. One month after open-sourcing, we have 11 RFCs in draft or proposal status. None of them have been finalized yet; discussions and comments are welcome.
RFC0002 provides an overview of the Nervos Common Knowledge Base (CKB), the core component of the Nervos Network, a decentralized application platform with a layered architecture. The CKB is the layer 1 of Nervos, and serves as a general purpose common knowledge base that provides data, asset, and identity services.
RFC0003 introduces the VM for scripting on CKB, the layer 1 chain. The VM layer in CKB performs a series of validation rules to determine whether a transaction is valid given its inputs and outputs. CKB uses the RISC-V ISA to implement the VM layer, and relies on dynamic linking and syscalls to provide additional capabilities required by the blockchain, such as reading external cells or performing other crypto computations. Any compiler with RV64I support, such as riscv-gcc, riscv-llvm, or Rust, can be used to generate CKB-compatible scripts.
RFC0004 specifies the protocol by which CKB nodes synchronize blocks via the P2P network. Block synchronization is performed in stages in the Bitcoin headers-first style: a block is downloaded in parts in each stage and is validated using the parts obtained so far.
RFC0006 proposes the Complete Binary Merkle Tree (CBMT) to generate the Merkle root and Merkle proofs for a static list of items in CKB. Currently, CBMT is used to calculate the transactions root. A CBMT is a complete binary tree, in which every level, except possibly the last, is completely filled and all nodes are as far left as possible; it is also a full binary tree, in which every node other than the leaves has two children. Compared with other Merkle trees, CBMT minimizes both the hash computation and the proof size.
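The complete-binary layout makes the root computation short. A minimal sketch, using SHA-256 as a stand-in for CKB's actual hash function: the tree is a flat array of 2n-1 nodes, the n leaves occupy the last n slots, and node i's children sit at indices 2i+1 and 2i+2.

```python
import hashlib

def h(x: bytes) -> bytes:
    # Toy hash; CKB uses a different hash function in practice.
    return hashlib.sha256(x).digest()

def cbmt_root(leaves):
    """Merkle root of a Complete Binary Merkle Tree stored as a flat
    array of 2n-1 nodes: leaf hashes fill the last n slots, and the
    children of node i are nodes 2i+1 and 2i+2."""
    n = len(leaves)
    if n == 0:
        return h(b"")
    nodes = [None] * (n - 1) + [h(leaf) for leaf in leaves]
    for i in range(n - 2, -1, -1):  # fill internal nodes bottom-up
        nodes[i] = h(nodes[2 * i + 1] + nodes[2 * i + 2])
    return nodes[0]
```

With an odd leaf count, no leaf is duplicated (unlike Bitcoin's Merkle tree): a leaf simply pairs with an internal node, which is what keeps the tree both complete and full.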
RFC0007 describes the scoring system of CKB P2P Networking layer and several networking security strategies based on it.
RFC0009 describes the syscall specification and all the RISC-V VM syscalls implemented in CKB so far.
RFC0010 defines the consensus rule "cellbase maturity period": for each input, if the referenced output transaction is a cellbase, it must have at least CELLBASE_MATURITY confirmations; otherwise the transaction is rejected.
RFC0011, the transaction filter protocol, allows peers to reduce the amount of transaction data they send. A peer that wants to retrieve only transactions of interest has the option of setting filters on each connection. A filter is defined as a Bloom filter on data derived from transactions.
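A generic Bloom filter is easy to sketch. This is the general idea only (not RFC0011's or BIP37's exact construction): k hash functions set k bits per inserted element, and a lookup that finds any cleared bit proves absence, while a lookup that finds all bits set may be a false positive.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: k sha256-derived hash functions
    set k bits per inserted element."""
    def __init__(self, size_bits=1024, k=3):
        self.size = size_bits
        self.k = k
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        # Derive k bit positions by salting the hash with the index.
        for i in range(self.k):
            d = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(d[:4], "little") % self.size

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        # False positives are possible; false negatives are not.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```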
CKB released v0.2.0 and v0.3.0 this month. Rust 2018: we have upgraded all the major repositories to Rust 1.31.0 and the 2018 edition. After the Rust upgrade, we were able to switch to numext, a high-performance big-number library relying on some new features in 1.31.0. CKB is also dockerized; it has never been easier to run a CKB node:
docker run -ti nervos/ckb:latest run
The node started via
no longer produces new blocks. This feature is now in a new process which is launched by
(#52). The new process gets a block template from a node and submits new blocks with the resolved PoW puzzle via the node's RPC. The RPC interface for miners is temporary, and we are working on an RFC proposal for it. After this change, we also modularized the RPCs (#118); now each RPC module can be disabled via the config file. Another feature we are actively developing is peer management. This month, we implemented the network group and inbound peer eviction described in RFC0007. We also delivered a new version of
which allows us to support the security strategies defined in RFC0007 in the future. Annoyed by the problems of existing P2P libraries, we started to work on a brand-new P2P protocol from the ground up. It is still in an early stage: a minimal implementation of a multiplexed p2p network based on
that supports mounting custom protocols. We have already implemented the 3 core components: yamux, secio, and service. yamux and secio mainly follow their corresponding golang implementations; the APIs are clear and easy to use. All 3 core components use a channel-based lock-free design with good code readability and maintainability. We are adding more custom protocol layers, and are going to integrate the discovery protocol described in RFC0012 soon. We have refactored the rust utility library to mock time for debugging and testing (#111); it is now available as a separate crate. There are some other features we are still working on, such as implementations of RFC0006 and RFC0011, and the RFC about the serialization format CFB. We are going to release them next month.
Changes in VM
All the CKB related syscalls used in the VM have been revisited and refactored to be more future proof.
The CKB VM has been upgraded to the latest revision, with the following notable changes:
With experience learned from real contracts, the maximum memory in CKB VM has been reduced from 128MB to 16MB.
From here... https://bitcointalk.org/index.php?topic=5006583.0 Questions. Chapter 1: Introduction 1. What are the main Bitcoin terms? 2. What is a Bitcoin address? 3. What is a Bitcoin transaction? 4. What is a Bitcoin block? 5. What is a Bitcoin blockchain? 6. What is a Bitcoin transaction ledger? 7. What is a Bitcoin system? What is a bitcoin (cryptocurrency)? How are they different? 8. What is a full Bitcoin stack? 9. What are two types of issues that digital money have to address? 10. What is a “double-spend” problem? 11. What is a distributed computing problem? What is the other name of this problem? 12. What is an election? 13. What is a consensus? 14. What is the name of the main algorithm that brings the bitcoin network to the consensus? 15. What are the different types of bitcoin clients? What is the difference between these clients? Which client offers the most flexibility? Which client offers the least flexibility? Which client is the most and least secure? 16. What is a bitcoin wallet? 17. What is a confirmed transaction and what is an unconfirmed transaction? Chapter 2: How Bitcoin works. 1. What is the best way to understand transactions in the Bitcoin network? 2. What is a transaction? What does it contain? What is the similarity of a transaction to a double entry ledger? What does input correspond to? What does output correspond to? 3. What are the typical transactions in the bitcoin network? Could you please name three of such transactions and give examples of each type of the transaction? 4. What is a QR and how it is used in the Bitcoin network? Are there different types of QRs? If so, what are the different types? Which type is more informational? What kind of information does it provide? 5. What is SPV? What does this procedure check and what type of clients of the Bitcoin network usually use this procedure? Chapter 3: The Bitcoin client. 1. How to download and install the Core Bitcoin client? 2. 
What is the best way to test the API available for the Core Bitcoin client without actually programming? What is the interface called? 3. What are the major areas of operations in the Bitcoin client? What can we do with the client? 4. What are the available operations for the Bitcoin addresses? 5. What are the available read operations for the Bitcoin transactions? How is a transaction encoded in the Bitcoin network? What is a raw transaction and what is a decoded transaction? 6. If I want to get information about a transaction that is not related to any address in my own wallet, do I need to change anything in the Bitcoin client configuration? If yes, which option do I need to modify? 7. What are the available read operation for the Bitcoin blocks? 8. What are the available operations for the creation of the transactions in the Bitcoin network? 9. How do you normally need to address the unspent output from the previous transaction in order to use it as an input for a new transaction? 10. What is the mandatory operation after creating a new transaction and before sending this new transaction to the network? What state does the wallet have to be in order to perform this operation? 11. Is the transaction ID immutable (TXID)? If not why, if yes, why and when? 12. What does signing a transaction mean? 13. What are the other options for Bitcoin clients? Are there any libraries that are written for some specific languages? What types of clients do these libraries implement? Chapter 4: Keys, Addresses and Wallets. 1. What is a PKC? When it was developed? What are the main mathematical foundations or functions that PKC is using? 2. What is ECC? Could you please provide the formula of the EC? What is the p and what is the Fp? What are the defined operations in ECC? What is a “point to infinity”? 3. What is a Bitcoin wallet? Does this wallet contain coins? If not, what does it contain then? 4. What is a BIP? What it is used for? 5. What is an encrypted private key? 
Why would we want to encrypt private keys? 6. What is a paper wallet? What kind of storage it is an example of? 7. What is a nondeterministic wallet? Is it a good wallet or a bad wallet? Could you justify? 8. What is a deterministic wallet? 9. What is an HD wallet? 10. How many keys are needed for one in and out transaction? What is a key pair? Which keys are in the key pair? 11. How many keys are stored in a wallet? 12. How does a public key gets created in Bitcoin? What is a “generator point”? 13. Could you please show on a picture how ECC multiplication is done? 14. How does a private key gets created in Bitcoin? What we should be aware of when creating a new private key? What is CSPRNG? What kind of input should this function be getting? 15. What is a WIF? What is WIF-Compressed? 16. What is Base58 encoding and what is Base58Check encoding? How it is different from Base64 encoding? Which characters are used in Base58? Why Base58Check was invented? What kind of problems does it solve? How is Base58Check encoding is created from Base58 encoding? 17. How can Bitcoin addresses be encoded? Which different encodings are used? Which key is used for the address creation? How is the address created? How this key is used and what is the used formula? 18. Can we visually distinguish between different keys in Base58Check format? If yes, how are they different from each other? What kind of prefixes are used? Could you please provide information about used prefixes for each type of the key? 19. What is an index in HD wallets? How many siblings can exist for a parent in an HD wallet? 20. What is the depth limitation for an HD wallet key hierarchy? 21. What are the main two advantages of an HD wallet comparing to the nondeterministic wallets? 22. What are the risks of non-hardened keys creation in an HD wallet? Could you please describe each of them? 23. What is a chain code in HD wallets? How many different chain code types there are? 24. What is the mnemonic code words? 
What are they used for? 25. What is a seed in an HD wallet? Is there any other name for it? 26. What is an extended key? How long is it and which parts does it consist of? 27. What is P2SH address? What function are P2SH addresses normally used for? Is that correct to call P2SH address a multi-sig address? Which BIP suggested using P2SH addresses? 28. What is a WIF-compressed private key? Is there such a thing as a compressed private key? Is there such a thing as a compressed public key? 29. What is a vanity address? 30. What is a vanity pool? 31. What is a P2PKH address? What is the prefix for the P2PKH address? 32. How does the owner prove that he is the real owner of some address? What does he have to represent to the network to prove the ownership? Why a perpetrator cannot copy this information and reuse it in the next transactions? 33. What is the rule for using funds that are secured by a cold storage wallet? How many times you can send to the address that is protected by the private key stored in a cold storage? How many times can you send funds from the address that is protected by the private key stored in a cold storage? Chapter 5: Transactions. 1. What is a transaction in Bitcoin? Why is it the most important operation in the Bitcoin ecosystem? 2. What is UTXO? What is one of the important rules of the UTXO? 3. Which language is used to write scripts in Bitcoin ecosystem? What are the features of this language? Which language does it look like? What are the limitations of this language? 4. What is the structure of a transaction? What does transaction consists of? 5. What are the standard transactions in Bitcoin? How many standard transactions there are (as of 2014)? 6. What is a “locking script” and what is an “unlocking script”? What is inside these scripts for a usual operation of P2PKH? What is a signature? Could you please describe in details how locking and unlocking scripts work and draw the necessary diagrams? 7. What is a transaction fee? 
What does the transaction fee depend on? 8. If you are manually creating transactions, what should you be very careful about? 9. Could you please provide a real life scenario when you might need a P2SH payment and operation? 10. What is the Script operation that is used to store in the blockchain some important data? Is it a good practice? Explain your answer. Chapter 6: The Bitcoin Network. 1. What is the network used in Bitcoin? What is it called? What is the abbreviation? What is the difference between this network architecture and the other network architectures? Could you please describe another network architecture and compare the Bitcoin network and the other network architectures? 2. What is a Bitcoin network? What is an extended Bitcoin network? What is the difference between those two networks? What are the other protocols used in the extended Bitcoin network? Why are these new protocols used? Can you give an example of one such protocol? What is it called? 3. What are the main functions of a bitcoin node? How many of them there are? Could you please name and describe each of them? Which functions are mandatory? 4. What is a full node in the Bitcoin network? What does it do and how does it differ from the other nodes? 5. What is a lightweight node in the Bitcoin network? What is another name of the lightweight node? How lightweight node checks transactions? 6. What are the main problems in the SPV process? What does SPV stand for? How does SPV work and what does it rely on? 7. What is a Sybil attack? 8. What is a transaction pool? Where are transaction pools stored in a Bitcoin network client? What are the two different transaction pools usually available in implementations? 9. What is the main Bitcoin client used in the network? What is the official name of the client and what is an unofficial name of this client? 10. What is UTXO pool? Do all clients keep this pool? Where is it stored? How does it differ from the transaction pools? 11. 
What is a Bloom filter? Why are Bloom filters used in the Bitcoin network? Were they originally used in the initial SW or were they introduced with a specific BIP? Chapter 7: The Blockchain. 1. What is a blockchain? 2. What is a block hash? Is it really a block hash or is it a hash of something else? 3. What is included in the block? What kind of information? 4. How many parents can one block have? 5. How many children can one block have? Is it a temporary or permanent state of the blockchain? What is the name of this state of the blockchain? 6. What is a Merkle tree? Why does Bitcoin network use Merkle trees? What is the advantage of using Merkle trees? What is the other name of the Merkle tree? What kind of form must this tree have? 7. How are blocks identified in the blockchain? What are the two commonly used identities? Are these identities stored in the blockchain? 8. What is the average size of one transaction? How many transactions are normally in one block? What is the size of a block header? 9. What kind of information do SPV nodes download? How much space do they save by that comparing to what they would need if they had to download the whole blockchain? 10. What is a usual representation of a blockchain? 11. What is a genesis block? Do clients download this block and if yes – where from? What is the number of the genesis block? 12. What is a Merkle root? What is a Merkle path? Chapter 8: Mining and Consensus. 1. What is the main purpose of mining? Is it to get the new coins for the miners? Alternatively, it is something else? Is mining the right or good term to describe the process? 2. What is PoW algorithm? 3. What are the two main incentives for miners to participate in the Bitcoin network? What is the current main incentive and will it be changed in the future? 4. Is the money supply in the Bitcoin network diminishing? If so, what is the diminishing rate? What was the original Bitcoin supply rate and how is it changed over time? 
Is the diminishing rate time related or rather block related? 5. What is the maximum number of Bitcoins available in the network after all the Bitcoins have been mined? When will all the Bitcoins be mined? 6. What is a decentralized consensus? What is a usual setup to clear transactions? What does a clearinghouse do? 7. What is deflationary money? Are they good or bad usually? What is the bad example of deflationary spiral? 8. What is an emergent consensus? What is the feature of emergent consensus? How does it differ from a usual consensus? What are the main processes out of which this emergent decentralized consensus becomes true? 9. Could you please describe the process of Independent Transaction Verification? What is the list of criteria that are checked against a newly received transaction? Where can these rules be checked? Can they be changed over time? If yes, why would they be changed? 10. Does mining node have to be a full node? If not, what are the other options for a node that is not full to be a mining node? 11. What is a candidate block? What types of nodes in the Bitcoin network create candidate blocks? What is a memory pool? Is there any other name of the memory pool? What are the transactions kept in this memory pool? 12. How are transactions added to the candidate block? How does a candidate block become a valid block? 13. What is the minimum value in the Bitcoin network? What is it called and what is the value? Are there any alternative names? 14. What is the age of the UTXO? 15. How is the priority of a transaction is calculated? What is the exact formula? What are the units of each contributing member? When is a transaction considered to be old? Can low priority transactions carry a zero fee? Will they be processed in this case? 16. How much size in each block is reserved for high priority transactions? How are transactions prioritized for the remaining space? 17. Do transactions expire in Bitcoin? 
Can transactions disappear in the Bitcoin network? If yes, could you please describe such scenario? 18. What is a generation transaction? Does it have another name? If it does, what is the other name of the transaction? What is the position of the generation transaction in the block? Does it have an input? Is the input usual UTXO? If not – what is the input called? How many outputs there are for the generation transaction? 19. What is the Coinbase data? What is it currently used for? 20. What is little-endian and big-endian formats? Could you please give an example of both? 21. How is the block header constructed? Which fields are calculated and added to the block header? Could you please describe the steps for calculation of the block header fields? 22. What is a mantissa-exponent encoding? How is this encoding used in the Bitcoin network? What is the difficulty target? What is the actual process of mining? What kind of mathematical calculation is executed to conduct mining? 23. Which hash function is used in the Bitcoin mining process? 24. Could you describe the PoW algorithm? What features of the hash function does it depend on? What is the other name of the hash function? What is a nonce? How can we increase the difficulty of the PoW calculation? What do we need to change and how do we need to change this parameter? 25. What is difficulty bits notation? Could you please describe in details how it works? What is the formula for the difficulty notation? 26. Why is difficulty adjustable? Who adjusts it and how exactly? Where is the adjustment made? On which node? How many blocks are taken into consideration to predict the next block issuance rate? What is the change limitation? Does the target difficulty depend on the number of transactions? 27. How is a new block propagated in the network? What kind of verification does each node do? What is the list of criteria for the new block? What kind of process ensures that the miners do not cheat? 28. 
How does a process of block assembly work? What are the sets of blocks each full node have? Could you please describe these sets of blocks? 29. What is a secondary chain? What does each node do to check this chain and perhaps to promote it to the primary chain? Could you please describe an example when a fork occurs and what happens? 30. How quickly forks are resolved most of the time? Within how many new block periods? 31. Why the next block is generated within 10 minutes from the previous? What is this compromise about? What do designers of the Bitcoin network thought about when implementing this rule? 32. What is a hashing race? How did Bitcoin hashing capacity has changed within years from inception? What kind of hardware devices were initially used and how did the HW utilization evolved? What kind of hardware is used now to do mining? How has the network difficulty improved? 33. What is the size of the field that stores nonce in the block header? What is the limitation and problem of the nonce? Why was an extra nonce created? Was there any intermediate solution? If yes, what was the solution? What are the limitations of the solution? 34. What is the exact solution for the extra nonce? Where does the new space come from? How much space is currently used and what is the range of the extra nonce now? 35. What is a mining pool? Why was it created? How are normally such pools operated? Do they pay regularly to the pool participants? Where are newly created Bitcoins distributed? To which address? How do mining pools make money? How do the mining pools calculate the participation? How are shares earned calculated? 36. What is a managed pool? How is the owner of the pool called? Do pool members need to run full nodes? Explain why or why not? 37. What are the most famous protocols used to coordinate pool activities? What is a block template? How is it used? 38. What is the limitation of a centralized pool? Is there any alternative? If yes, what is it? How is it called? 
How does it work? 39. What is a consensus attack? What is the main assumption of the Bitcoin network? What can be the targets of the consensus attacks? What can these attacks do and what they cannot do? How much overall capacity of the network do you have to control to exercise a consensus attack? Chapter 9: Alternative Chains, Currencies and Applications. 1. What is the name of alternative coins? Are they built on top of the Bitcoin network? What are examples of them? Is there any alternative approach? Could you please describe some alternatives? 2. Are there any alternatives to the PoW algorithm? If yes – what are the alternatives? Could you please name two or three? 3. What is the operation of the Script language that is used to store a metadata in Bitcoin blockchain? 4. What is a coloured coin? Could you please explain how it is created and how it works? Do you need any special SW to manage coloured coins? 5. What is the difference between alt coins and alt chains? What is a Litecoin? What are the major differences between the Bitcoin and Litecoin? Why so many alt coins have been created? What are they usually based on? 6. What is Scrypt? Where is it used and how is it different from the original algorithm from which it has been created? 7. What is a demurrage currency? Could you please give an example of one blockchain and crypto currency that is demurrage? 8. What is a good example of an alternative algorithm to PoW? What is it called and how is it different from the PoW? Why the alternatives to Bitcoin PoW have been created? What is the main reason for this? What is dual-purpose PoW algorithms? Why have they been created? 9. Is Bitcoin “anonymous” currency? Is it difficult to trace transactions and understand someone’s spending habits? 10. What is Ethereum? What kind of currency does it use? What is the difference from Bitcoin? Chapter 10: Bitcoin security. 1. What is the main approach of Bitcoin security? 2. 
What are two common mistakes made by newcomers to the world of Bitcoin? 3. What is a root of trust in traditional security settings? What is a root of trust in Bitcoin network? How should you assess security of your system? 4. What is a cold storage and paper wallet? 5. What is a hardware wallet? How is it better than storing private keys on your computer or your smart phone?
PROPOSAL: B0RG (Bitcoin zero replay guarantee) - Ensuring a smooth 2X upgrade without a chain split
DRAFT A lot of anxiety and confusion regarding the future direction of Bitcoin around the 2x fork date has appeared. The potential of a chain split and the resulting chaos in the markets keeps coming up repeatedly. Here, a comparatively simple change to the legacy ("1x") and 2x chains is proposed that will solve this problem. This change is a combination of two soft forks (one on each chain), meaning it will be backwards compatible with all existing clients, no matter which chain they operate on. As a soft fork, it is also an upgrade that solely concerns the Bitcoin miners, needing no further action on the part of the users of any affected block chain. This change will ensure a smooth transition through the 2x phase of SegWit2x and ensure that just a single chain emerges. Principle The change involves a deliberation of abandonment (DOA) chain, defined as follows: the DOA is the chain of just block headers and coin base transactions, compatible with the legacy chain rules, starting at the 2x fork point (FP; block height 494,784). All rules for difficulty, maximum cumulative difficulty, and so forth are left unchanged. From the maximum-difficulty DOA, a state flag "abandoned" (A) will be calculated, like this: If the DOA is empty, A amounts to 'true'. If the DOA is non-empty, the coin base transactions and merkle roots in the DOA will be checked:
if any transaction merkle roots in any of the DOA block headers indicate that more than just the coin base transaction would be in the merkle tree, A amounts to 'false'. (Requirement 1)
likewise, if any coin base transaction in the DOA has more than one output or an output that does not follow a simple TBD "burn" pattern, A amounts to 'false' as well. (Requirement 2)
If the above checks pass, 'A' will be true.
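A minimal sketch of the abandonment check, assuming hypothetical block/field shapes (`merkle_root`, `coinbase_raw`, `coinbase_outputs`) and a placeholder for the TBD burn pattern. It leans on the fact that in a block containing only the coinbase, the merkle root equals the coinbase txid itself:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def doa_abandoned(doa_blocks, burn_pattern):
    """Evaluate the 'abandoned' flag A over a DOA chain, per the two
    requirements above. Each entry is assumed to be a dict holding the
    header's merkle root, the raw coinbase tx, and its outputs;
    `burn_pattern` stands in for the TBD burn script. These shapes are
    hypothetical -- real block serialization is omitted."""
    for blk in doa_blocks:
        # Requirement 1: with only the coinbase in the tree, the
        # header's merkle root must equal the coinbase txid.
        if blk["merkle_root"] != dsha256(blk["coinbase_raw"]):
            return False
        # Requirement 2: exactly one output, matching the burn pattern.
        outs = blk["coinbase_outputs"]
        if len(outs) != 1 or outs[0] != burn_pattern:
            return False
    return True  # empty DOA, or all checks passed -> A is 'true'
```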
It should be noted here that the DOA chain of headers cannot be confused with the 2x chain, as the 2x chain requires the base block at the FP to be truly bigger than 1,000,000 bytes, contradicting Requirement 1. To ensure a smooth transition on the 2x chain, a soft fork rule is then added that requires a hash of the most recent DOA chain in a TBD location. Additionally, it requires that the DOA criterion, as defined above, evaluates to true. If this hash is out of date (does not reflect the DOA chain tip) or the DOA criterion is false, a block is to be considered invalid and orphaned. Implementing this change on 2x will ensure that each miner is urged to create empty blocks on the legacy chain, assuming a simple 50% majority is willing to upgrade to SegWit2X at the FP. From the POV of both chains, this appears as a simple, respective soft fork. It is compatible with all old legacy nodes and will cleanly signal that it is time to upgrade. Protocol issues On the network level, the change is conceptually simple. "block" messages will be accepted both if they fulfill the "must be greater than 1MB" rule of the SegWit2X chain and if they don't; in the latter case, the messages are added to the DOA chain if the checks for proof of work on the block header pass. SegWit2X-compatible clients will forward both DOA blocks and regular blocks in "block" messages. Sunset clause After 3 months, the requirement to keep track of the DOA chain will be dropped. EDIT: Realized that no hash is even necessary in the 2x chain. Added sunset clause.
The following is a list of items that appear to me to be more or less on the Bitcoin Cash roadmap. I say "appear to me" because there isn't any formal roadmap (as of yet), just a bunch of ideas thrown around on the mailing list and in Slack that seem to have a rough high-level consensus (implementation details spur much more debate).
New difficulty adjustment algorithm
The altcoin community is very familiar with what happens when more than one coin competes for the same mining hardware: wild swings in difficulty as miners switch to the most-profitable-to-mine coin, and unreliable block times. Bitcoin hasn't had to deal with this during its lifetime since it was the only cryptocurrency using double SHA256 as its mining algorithm. That has now changed with the Bitcoin Cash fork. The difficulty adjustment algorithm designed by Satoshi is pretty naive and not well suited to multiple coins competing for the same hash power, and both Bitcoin Cash and Bitcoin are suffering because of it; Bitcoin Cash more so. So it makes sense to change the algorithm. Fortunately, a good deal of research has been done on this topic over the years, so it's really just a matter of implementing the best solution. The result is that Bitcoin Cash will have a modern adjustment algorithm that prevents wild swings in block times. Bitcoin, on the other hand, will have to continue to muddle through the relative changes in mining profitability.
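As an illustration only (a sketch of one simple rolling-window scheme, not any specific proposal under discussion): estimate the network hashrate over the last window of blocks and scale the difficulty so that blocks land one target spacing apart. Schemes like this react smoothly to hashrate swings instead of waiting 2016 blocks.

```python
def next_target_difficulty(block_times, difficulties, target_spacing=600):
    """Rolling-window difficulty rule (illustrative sketch).

    block_times: timestamps bracketing the window (seconds)
    difficulties: per-block difficulty over the window, used as a
                  proxy for work done
    Returns a difficulty aimed at one block per target_spacing."""
    elapsed = max(block_times[-1] - block_times[0], 1)
    work = sum(difficulties)
    # hashrate ~ work / elapsed; difficulty ~ hashrate * spacing
    return work * target_spacing / elapsed
```

In steady state (one block every 600 seconds), the rule leaves difficulty unchanged; if blocks start arriving twice as fast, the next difficulty doubles.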
Fix transaction malleability the right way
The primary complaint people had with segregated witness was not related to what it was trying to achieve ― fixing malleability, but rather how it was implemented. We were told the reason the Bitcoin community needed to accept such a cumbersome and ugly hack on a multi-billion dollar protocol was that to do it any other way was to risk a chain split. Ironically, that hack ended up being a primary driver of the Bitcoin Cash chain split. Bitcoin Cash is probably better off for forking off when it did because now it can fix malleability the correct way. Whether that is through a minimal malleability fix, with the transaction format remaining largely unchanged, or by changing the format to be more extensible is TBD. In either case this change isn’t likely a high priority in the short term. It should be obvious to everyone that the primary use case for a malleability fix, the lightning network, is not ready right now. Even when it finally becomes ready it will take time for the technology to mature. And even more time to gain consumer acceptance, if it ever does. This could easily take several years to happen. If Bitcoin Cash takes 12 or 18 months to get a malleability fix out the door I doubt it will miss anything.
Parallel transaction validation / New merkle tree format
Today each transaction in a block must be validated in sequential order, since later transactions might depend on prior ones. This prevents validation from being done in parallel, increasing the time it takes to validate a block and creating a scalability bottleneck. By validating in parallel, the work can be split across multiple cores, or even multiple machines, speeding up validation time. And since the ordering would no longer be significant, the merkle tree could be redefined to allow for things like proofs of absence, which would be needed for fraud proofs or sharding down the road.
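For reference, this is the (order-dependent) merkle tree format being discussed. A minimal sketch of how today's merkle root is built from TXIDs, assuming raw 32-byte hashes; real implementations additionally deal with the little-endian display order of TXIDs, which is omitted here:

```python
import hashlib

def hash256(data: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Compute the current merkle root from raw TXID bytes. Hashes are
    paired left-to-right; an odd count duplicates the last hash. A
    block containing only the coinbase transaction uses that TXID as
    the root directly."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2 == 1:  # odd count: duplicate the last hash
            level.append(level[-1])
        level = [hash256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Because each parent hash commits to the concatenation of its children in a fixed order, any reordering of transactions changes the root; a redefined tree would have to relax exactly this property to permit canonical (e.g. sorted) ordering and absence proofs.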
Committing the root of the UTXO set to each block would improve light client security, support a fast sync mode, and help with sharding down the road. At this point it's not clear whether there is an efficient enough way to do this that won't itself become a scalability bottleneck. Ethereum does this with its Patricia tree, so there is at least some precedent, but more research into how best to do so is probably still needed. Short of a Patricia tree, Bitcoin Cash could benefit from something like ECMH (Elliptic Curve Multiset Hash), proposed by Core developer Pieter Wuille. It doesn't allow for the proofs we would like to create, but it's very efficient and would probably be sufficient for checkpointing the UTXO set for fast sync. This could even be done without committing anything to the blocks, and could be replaced by something better later. It would allow a new node to get fully bootstrapped in something like five minutes rather than a couple of days, and allow all but archival nodes to prune transaction data older than six months or a year.
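The property that makes a multiset hash attractive here is that the set digest is order-independent and can be updated incrementally as UTXOs are created and spent. The toy below illustrates that property with an additive hash; it is emphatically not the real ECMH construction (which maps elements to elliptic curve points and adds them on secp256k1), and additive hashes like this are not collision-resistant enough for production use:

```python
import hashlib

MOD = 2 ** 256

def element_hash(utxo: bytes) -> int:
    return int.from_bytes(hashlib.sha256(utxo).digest(), 'big')

class ToyMultisetHash:
    """Toy stand-in for ECMH: element digests combine by addition
    mod 2^256, so the running hash is independent of insertion order
    and supports O(1) add and remove, with no tree to maintain."""
    def __init__(self):
        self.acc = 0

    def add(self, utxo: bytes):
        self.acc = (self.acc + element_hash(utxo)) % MOD

    def remove(self, utxo: bytes):
        self.acc = (self.acc - element_hash(utxo)) % MOD
```

That incrementality is why this approach avoids the bottleneck concern: each block touches only its own inputs and outputs, rather than rehashing a path through a large authenticated tree for every UTXO change.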
Bitcoin-ng / Weak blocks
Bitcoin-ng and weak blocks are two different approaches to solving a pair of scaling issues. The first issue is that larger blocks artificially benefit larger miners at the expense of smaller miners, potentially creating mining centralization pressure. The second is the need to validate blocks all at once rather than over time. Bitcoin-ng goes quite a bit further toward fixing these issues than weak blocks does, but it's a pretty dramatic change to the consensus rules and one that has never run in a production environment. Weak blocks is just an addition to the wire protocol, doesn't touch the consensus rules, and is therefore more conservative. If I had a vote I would probably do weak blocks now, see how it works, and then consider ng later on down the road. Both bitcoin-ng and weak blocks would have the side benefit of improving zero-confirmation security (though it still wouldn't be perfect); bitcoin-ng more so, because it changes how transactions are mined and confirmed. Weak blocks, not being a consensus rule, would still permit miners to mine double spends, but it would give us a historical record of every double spend on the network. Merchants could use the blockchain data to calculate the percentage of transactions that received a weak confirmation but did not make it into the blockchain due to a double spend (likely to be very low) and use that to manage risk appropriately.
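The merchant-side risk metric at the end of that paragraph is just a ratio over historical records. A hypothetical sketch, where the record format is entirely an assumption for illustration:

```python
def double_spend_rate(records):
    """records: iterable of (txid, weakly_confirmed, made_it_into_chain)
    tuples. Returns the fraction of weakly-confirmed transactions that
    were later excluded from the blockchain, e.g. due to a double
    spend. Hypothetical format; no such feed exists today."""
    weak = [r for r in records if r[1]]
    if not weak:
        return 0.0
    dropped = sum(1 for r in weak if not r[2])
    return dropped / len(weak)
```

A merchant could then accept zero-confirmation payments whenever this observed rate times the payment amount is below their acceptable fraud loss.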
The holy grail of blockchain scalability is not pushing all transactions off onto centralized or quasi-centralized layer 2 platforms, but rather removing the need for all nodes to download and validate all transactions for the system to work. Two of the changes mentioned above, restructuring the merkle tree and UTXO commitments, could together enable a new partially-validating operating mode. Users could still run a fully validating node if they want to (a node that validates all shards), but they would have the option to tell it to download and validate only some lesser number of shards while retaining the same security as a full node. In theory, the network should still be able to function if all nodes, including miners, are running partially validating nodes. If it works, you basically get unlimited on-chain scaling without introducing any centralization. Sharding is still very much a research topic, so it's rightfully last on the roadmap, but it's still an ideal to work towards. Ethereum has sharding on its roadmap, so hopefully we'll get to see how it works for them and learn from what they do. So that's it. Like I said, it's pretty ambitious. It will definitely require several hardforks to realize this roadmap, but one of the benefits of making changes by hardfork is that you can be somewhat bold when not constrained by backwards compatibility requirements. Some of this stuff might not work out, but at least Bitcoin Cash is rejecting the notion that cryptocurrency ought to be nothing more than a corporate settlement system and is pushing forward as spendable electronic cash.
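As a rough sketch of the partial-validation idea, a node could deterministically map each transaction to a shard and only validate the shards it subscribes to. Everything below (the shard function, the node class) is an invented illustration; no actual sharding design for Bitcoin Cash is implied.

```python
import hashlib

def shard_of(txid: bytes, num_shards: int) -> int:
    """Deterministic shard assignment: hash the TXID and reduce mod
    the shard count, so every node agrees on which shard owns which
    transaction without any coordination."""
    return int.from_bytes(hashlib.sha256(txid).digest()[:8], 'big') % num_shards

class PartialNode:
    """A node that downloads and validates only its chosen shards.
    Subscribing to every shard recovers full-node behavior."""
    def __init__(self, my_shards, num_shards):
        self.my_shards = set(my_shards)
        self.num_shards = num_shards

    def should_validate(self, txid: bytes) -> bool:
        return shard_of(txid, self.num_shards) in self.my_shards
```

The hard part, which this sketch ignores entirely, is making a partially-validating node as secure as a full one: that is where the restructured merkle tree (for fraud and absence proofs) and the UTXO commitments come in.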