
Masari: Simple Private Money

Masari (MSR) is a scalability-focused, untraceable, secure, and fungible cryptocurrency using the RingCT protocol. Masari is the first CryptoNote coin to develop uncle mining and a fully client-side web wallet.
[link]

Is there a resource to see the "effective" price of bitcoin to miners? (e.g. today's $8,000 at the current difficulty level is as profitable as a $6,950 price last week, when difficulty was 15% easier)
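A rough sketch of that difficulty-adjusted arithmetic (I don't know of a site that charts it directly; the function below just restates the question's own numbers, so the difficulty figures are placeholders):

```python
def effective_price(price_now, difficulty_now, difficulty_then):
    """Past-difficulty price that pays miners the same revenue per hash.

    Revenue per hash is proportional to price / difficulty, so scaling
    today's price by (difficulty_then / difficulty_now) gives the
    difficulty-adjusted "effective" price.
    """
    return price_now * difficulty_then / difficulty_now

# The question's example: $8,000 today with difficulty 15% higher than last week
print(effective_price(8000, difficulty_now=1.15, difficulty_then=1.00))  # ~6956, i.e. roughly $6,950
```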

submitted by mrcrypto2 to btc [link] [comments]


Wholeheartedly willing to get downvoted, but this RMT obsession has to stop.

This sub hasn't got a clue, I swear.

Huge sweeping changes to the game mechanics are a terrible way to combat RMT.
It's basically an admission that your anti-cheat doesn't work. Most MMOs suffer in some way from an RMT problem; WoW, Runescape, even Destiny 2 has RMT issues if you just look. Thing is, the anti-cheat in those games is actually worth a damn, so the entire playerbase doesn't have to suffer from endless tinkering with in-game systems.
Before you hit me with 'it's a hardcore game, deal with it, it's supposed to be grindy', just stop. Just don't bother. I've heard it time and time again, and it's bullshit. You know it's bullshit just as well as I do. The changes BSG have been making recently to nerf all forms of progression only make the game 'more hardcore' for people who work full time and don't have the same amount of *time* as streamers who dedicate their entire life to this game. That's not 'hardcore'. The game is mechanically 'hardcore' and always has been, and I love it. These changes, though, in my eyes, are just time-wasting for the sake thereof. Since when does the amount of time one has to invest in a game define how fucking hardcore it is? Would you describe WoW as more hardcore than Tarkov because of how long you have to play to progress? Or perhaps beating all three Witcher games back to back is 'hardcore' because it took a long time. Are ARMA or DCS inherently less hardcore than Tarkov because an operation can be completed in an afternoon? No, judging how hardcore a game is by the amount of time one has to invest in it is a joke. *No game* should give enormous *mechanical advantages* to those with more time on their hands. There's already an inherent skill advantage that comes from that amount of practice; designing the mechanics to also reward only those with that much time is a kick in the teeth to all the people who love this game but can't invest that level of time.
And yeah, you can go ahead and say 'ummm actualllly it's a beta, so they can do what they like, stop whining', and yep. Yes, they can. You're correct. However, comma, that doesn't mean I have to pretend to like it. Yes, I did buy EoD and no, I don't regret it because of all the fun I've had til now. But suggesting people who don't like the current direction the game is going in aren't allowed to voice their opinion because the game's in beta is fucking ludicrous. What do you think the purpose of a playable beta is? Nikita is more than welcome to ignore all the people who don't like these new changes, but what gives people on this sub the right to tell me that I'm not entitled to an opinion on the product I've chosen to financially support? It's such a toxic, capital-G Gamer attitude to suggest that 'Tarkov is OUR game because we're willing to dump several full days a week into grinding for our Bitcoin farms. You should just go and play something else, this clearly isn't a game for you. Go play Call of Duty.' I shouldn't even have to express how utterly reductive and childish that is. Grow up.
I'm getting HUGE red flags with the way this game is currently going, because it's all too similar to a game I used to love, The Culling. That game blew up on launch and a bunch of high profile streamers suggested changes to the game, and the devs went ahead and implemented all of them without so much as *thinking* about how they'd affect the average player. Look at where that game is now. Servers shut down, because the average player simply stopped having fun. I'm not saying BSG is even close to that bad, but this endless tinkering with mechanics for the nebulous, vague purpose of 'RMT' has to stop or I don't know if the 'little guys' are gonna stick around much longer.

EDIT: I AM AWARE THAT RMT != CHEATING. But cheating is what makes RMT viable. RMTers need to keep items in supply, and to do that, they cheat. It's much more profitable. Ergo, if you stamp out cheaters, the RMT problem becomes significantly diminished.
EDIT 2: u/ArxMessor makes a great point that Tarkov is an MMO and therefore should have some kind of grind. I agree. However, most MMOs use systems like weekly bounties etc. to ensure even players with only maybe 10 hours a week to invest in the game can still keep up and compete. Tarkov currently rewards time investment *exponentially*, which removes all possibility of catching up.

EDIT 3: Yep, my DMs right now are very much confirming the things I said above about a certain subset of this community. Thanks, Gamers.
EDIT 4: I get it, Destiny anti-cheat is ass. I made a mistake there, since I don't play Trials of Osiris. However, do you see Bungie making the win requirement for Trials 50 wins instead of 9 or whatever just to slow down the hackers? Of course not, because it hurts normal players more.

Edit 5: My first gold! Thanks kind stranger.
submitted by ArmedChalko to EscapefromTarkov [link] [comments]

Gridcoin 5.0.0.0-Mandatory "Fern" Release

https://github.com/gridcoin-community/Gridcoin-Research/releases/tag/5.0.0.0
Finally! After over ten months of development and testing, "Fern" has arrived! This is a whopper. 240 pull requests merged. Essentially a complete rewrite that was started with the scraper (the "neural net" rewrite) in "Denise" has now been completed. Practically the ENTIRE Gridcoin-specific codebase resting on top of the vanilla Bitcoin/Peercoin/Blackcoin PoS code has been rewritten. This removes the team requirement at last (see below), although there are many other important improvements besides that.
Fern was a monumental undertaking. We had to encode all of the old rules active for the v10 block protocol in new code and ensure that the new code was 100% compatible. This had to be done in such a way as to clear out all of the old spaghetti and ring-fence it with tightly controlled class implementations. We then wrote an entirely new, simplified ruleset for research rewards and reengineered contracts (which includes beacon management, polls, and voting) using properly classed code. The fundamentals of Gridcoin with this release are now on a very sound and maintainable footing, and the developers believe the codebase as updated here will serve as the fundamental basis for Gridcoin's future roadmap.
We have been testing this for MONTHS on testnet in various stages. The v10 (legacy) compatibility code has been running on testnet continuously as it was developed to ensure compatibility with existing nodes. During the last few months, we have done two private testnet forks and then the full public testnet testing for v11 code (the new protocol which is what Fern implements). The developers have also been running non-staking "sentinel" nodes on mainnet with this code to verify that the consensus rules are problem-free for the legacy compatibility code on the broader mainnet. We believe this amount of testing is going to result in a smooth rollout.
Given the amount of changes in Fern, I am presenting TWO changelogs below. One is high level, which summarizes the most significant changes in the protocol. The second changelog is the detailed one in the usual format, and gives you an inkling of the size of this release.

Highlights

Protocol

Note that the protocol changes will not become active until we cross the hard-fork transition height to v11, which has been set at 2053000. Given current average block spacing, this should happen around October 4, about one month from now.
Note that to get all of the beacons in the network on the new protocol, we are requiring ALL beacons to be validated. A two week (14 day) grace period is provided by the code, starting at the time of the transition height, for people currently holding a beacon to validate the beacon and prevent it from expiring. That means that EVERY CRUNCHER must advertise and validate their beacon AFTER the v11 transition (around Oct 4th) and BEFORE October 18th (or more precisely, 14 days from the actual date of the v11 transition). If you do not advertise and validate your beacon by this time, your beacon will expire and you will stop earning research rewards until you advertise and validate a new beacon. This process has been made much easier by a brand new beacon "wizard" that helps manage beacon advertisements and renewals. Once a beacon has been validated and is a v11 protocol beacon, the normal 180 day expiration rules apply. Note, however, that the 180 day expiration on research rewards has been removed with the Fern update. This means that while your beacon might expire after 180 days, your earned research rewards will be retained and can be claimed by advertising a beacon with the same CPID and going through the validation process again. In other words, you do not lose any earned research rewards if you do not stake a block within 180 days and keep your beacon up-to-date.
The transition height is also when the team requirement will be relaxed for the network.

GUI

Besides the beacon wizard, there are a number of improvements to the GUI, including new UI transaction types (and icons) for staking the superblock, sidestake sends, beacon advertisement, voting, poll creation, and transactions with a message. The main screen has been revamped with a better summary section, and better status icons. Several changes under the hood have improved GUI performance. And finally, the diagnostics have been revamped.

Blockchain

The wallet sync speed has been DRASTICALLY improved. A decent machine with a good network connection should be able to sync the entire mainnet blockchain in less than 4 hours. A fast machine with a really fast network connection and a good SSD can do it in about 2.5 hours. One of our goals was to reduce or eliminate the reliance on snapshots for mainnet, and I think we have accomplished that goal with the new sync speed. We have also streamlined the in-memory structures for the blockchain which shaves some memory use.
There are so many goodies here it is hard to summarize them all.
I would like to thank all of the contributors to this release, but especially thank @cyrossignol, whose incredible contributions formed the backbone of this release. I would also like to pay special thanks to @barton2526, @caraka, and @Quezacoatl1, who tirelessly helped during the testing and polishing phase on testnet with testing and repeated builds for all architectures.
The developers are proud to present this release to the community and we believe this represents the starting point for a true renaissance for Gridcoin!

Summary Changelog

Accrual

Changed

Most significantly, nodes calculate research rewards directly from the magnitudes in EACH superblock between stakes instead of using a two- or three-point average based on a CPID's current magnitude and the magnitude for the CPID when it last staked. For those long-timers in the community, this has been referred to as "Superblock Windows," and was first done in proof-of-concept form by @denravonska.
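A toy contrast of the two accrual approaches (the names, units, and flat payment rate below are illustrative assumptions, not the actual Gridcoin implementation):

```python
def accrual_point_average(mag_last_stake, mag_now, days_between_stakes, rate):
    # Old style: average a couple of magnitude snapshots over the whole window.
    return (mag_last_stake + mag_now) / 2 * rate * days_between_stakes

def accrual_superblock_windows(superblock_magnitudes, rate):
    # New style: integrate magnitude superblock by superblock.
    # superblock_magnitudes: (magnitude, days_covered) for each superblock
    # issued between the CPID's last stake and now.
    return sum(mag * rate * days for mag, days in superblock_magnitudes)

# A CPID whose magnitude spiked between stakes is credited for the spike
# only in the superblocks where it actually happened:
print(accrual_point_average(100, 100, 3, 0.25))                          # 75.0
print(accrual_superblock_windows([(100, 1), (400, 1), (100, 1)], 0.25))  # 150.0
```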

Removed

Beacons

Added

Changed

Removed

Unaltered

As a reminder:

Superblocks

Added

Changed

Removed

Voting

Added

Changed

Removed

Detailed Changelog

[5.0.0.0] 2020-09-03, mandatory, "Fern"

Added

Changed

Removed

Fixed

submitted by jamescowens to gridcoin [link] [comments]

Taproot, CoinJoins, and Cross-Input Signature Aggregation

It is a very common misconception that the upcoming Taproot upgrade helps CoinJoin.
TLDR: The upcoming Taproot upgrade does not help equal-valued CoinJoin at all, though it potentially increases the privacy of other protocols, such as the Lightning Network, and escrow contract schemes.
If you want to learn more, read on!

Equal-valued CoinJoins

Let's start with equal-valued CoinJoins, the type JoinMarket and Wasabi use. What happens is that some number of participants agree on some common value all of them use. With JoinMarket the taker defines this value and pays the makers to agree to it; with Wasabi the server defines a value of approximately 0.1 BTC.
Then, each participant provides inputs that they unilaterally control, totaling equal to or greater than the common value. Typically, since each input is unilaterally controlled, each input just requires a singlesig. Each participant also provides up to two addresses they control: one of these will be paid with the common value, while the other will be used for any extra value in the inputs they provided (i.e. the change output).
The participants then make a single transaction that spends all the provided inputs and pays out to the appropriate outputs. The inputs and outputs are shuffled in some secure manner. Then the unsigned transaction is distributed back to all participants.
Finally, each participant checks that the transaction spends the inputs they provided (and more importantly does not spend any other coins they might own that they did not provide for this CoinJoin!) and that the transaction pays out to the appropriate address(es) they control. Once they have validated the transaction, they ratify it by signing for each of the inputs they provided.
Once every participant has provided signatures for all the inputs they registered, the transaction is completely signed and the CoinJoin transaction is validly confirmable.
CoinJoin is a very simple and direct privacy boost: it requires no SCRIPTs, needs only singlesig, etc.
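A minimal sketch of the construction just described (fees and coin selection are ignored, and the data structures are invented for illustration; this is not JoinMarket or Wasabi code):

```python
import random

COMMON_VALUE = 0.1  # BTC, the agreed common denomination

def build_coinjoin(participants):
    """participants: dicts with 'inputs' (amounts), 'mix_address', 'change_address'."""
    inputs, outputs = [], []
    for p in participants:
        total = sum(p["inputs"])
        assert total >= COMMON_VALUE, "each participant must cover the common value"
        inputs.extend(p["inputs"])
        outputs.append((p["mix_address"], COMMON_VALUE))
        change = total - COMMON_VALUE
        if change > 0:
            outputs.append((p["change_address"], change))
    # Shuffle so positions leak nothing about who owns what.
    random.shuffle(inputs)
    random.shuffle(outputs)
    return {"inputs": inputs, "outputs": outputs}

tx = build_coinjoin([
    {"inputs": [0.105], "mix_address": "A1", "change_address": "A2"},
    {"inputs": [0.114], "mix_address": "B1", "change_address": "B2"},
])
print(tx)  # two indistinguishable 0.1 outputs plus two change outputs
```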

Privacy

Let's say we have two participants who have agreed on a common amount of 0.1 BTC. One provides a 0.105 coin as input, the other provides a 0.114 coin as input. This results in a CoinJoin with a 0.105 coin and a 0.114 coin as input, and outputs with 0.1, 0.005, 0.014, and 0.1 BTC.
Now obviously the 0.005 output came from the 0.105 input, and the 0.014 output came from the 0.114 input.
But the two 0.1 BTC outputs cannot be correlated with either input! There is no correlating information, since either output could have come from either input. That is how common CoinJoin implementations like Wasabi and JoinMarket gain privacy.

Banning CoinJoins

Unfortunately, large-scale CoinJoins like that made by Wasabi and JoinMarket are very obvious.
All you have to do is look for transactions where, say, more than 3 outputs share the same value, and the number of inputs is equal to or larger than the number of equal-valued outputs. Thus, it is trivial to identify equal-valued CoinJoins made by Wasabi and JoinMarket. You can even trivially differentiate them: Wasabi equal-valued CoinJoins are going to have a hundred or more inputs, with outputs in units of approximately 0.1 BTC, while JoinMarket CoinJoins have fewer than a dozen equal-valued outputs (usually between 4 and 6), with the common value varying wildly, from as low as 0.001 BTC to as high as a dozen BTC or more.
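A rough version of that heuristic in code (the thresholds are illustrative assumptions, not anyone's actual chain-surveillance rules):

```python
from collections import Counter

def looks_like_equal_valued_coinjoin(input_values, output_values, min_equal=3):
    # Most common output value and how often it repeats.
    value, count = Counter(output_values).most_common(1)[0]
    return count > min_equal and len(input_values) >= count

print(looks_like_equal_valued_coinjoin(
    [0.105, 0.114, 0.25, 0.31],                      # 4 inputs
    [0.1, 0.1, 0.1, 0.1, 0.005, 0.014, 0.15, 0.21],  # 4 equal-valued outputs
))  # True
```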
This has led a number of anti-privacy exchanges to refuse to credit custodially-held accounts if the incoming deposit is within a few hops of an equal-valued CoinJoin, usually citing concerns about regulations. Crucially, the exchange continues to hold the private keys for those "banned" deposits, and can still spend them; this is effectively theft. If your exchange does this to you, you should report that exchange as stealing money from its customers. Not your keys, not your coins.
Thus, CoinJoins represent a privacy tradeoff:

Taproot

Let's now briefly discuss that nice new shiny thing called Taproot.
Taproot includes two components:
* The Schnorr signature scheme, under which multiple public keys can be aggregated into a single public key, and the cooperating holders of those keys can produce a single signature for the aggregate key.
* MAST (merkelized alternative script trees), which lets an output commit to a tree of SCRIPT branches while revealing only the branch actually used at spend time.
This has some nice properties:
* An n-of-n multisig spend looks exactly like an ordinary singlesig spend onchain, which is both cheaper and more private.
* Unused SCRIPT branches are never revealed, hiding the full contract logic.

Taproot DOES NOT HELP CoinJoin

So let's review!
CoinJoin:
* needs only singlesig spends;
* uses no SCRIPTs at all.
Taproot:
* makes multisig spends cheaper and indistinguishable from singlesig;
* hides unused SCRIPT branches.
There is absolutely no overlap. Taproot helps things that CoinJoin does not use. CoinJoin uses things that Taproot does not improve.

B-but They Said!!

A lot of early reporting on Taproot claimed that Taproot benefits CoinJoin.
What they are confusing is that earlier drafts of Taproot included a feature called cross-input signature aggregation.
In current Bitcoin, every input, to be spent, has to be signed individually. With cross-input signature aggregation, all inputs that support this feature are signed with a single signature that covers all those inputs. So for example if you would spend two inputs, current Bitcoin requires a signature for each input, but with cross-input signature aggregation you can sign both of them with a single signature. This works even if the inputs have different public keys: two inputs with cross-input signature aggregation effectively define a 2-of-2 public key, and you can only sign for those inputs if you know the private keys for both, or if you are cooperatively signing with somebody who knows the private key of the other input.
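A toy demonstration of why this works: Schnorr signatures are linear, so partial signatures over a shared challenge simply add up and verify against the "sum" public key. This uses a tiny multiplicative group for readability; real Bitcoin would use secp256k1 plus a rogue-key-safe scheme like MuSig, so treat this as a sketch of the algebra, not the protocol:

```python
import hashlib

p, q, g = 23, 11, 2  # g generates a subgroup of prime order 11 mod 23

def challenge(R, msg):
    digest = hashlib.sha256(f"{R}|{msg}".encode()).hexdigest()
    return int(digest, 16) % q

x1, x2 = 3, 7                          # private keys, one per input
y1, y2 = pow(g, x1, p), pow(g, x2, p)  # corresponding public keys

k1, k2 = 5, 9                            # per-signer nonces
R = (pow(g, k1, p) * pow(g, k2, p)) % p  # combined nonce
e = challenge(R, "spend both inputs")    # shared challenge

# Each signer contributes a partial signature; aggregation is just addition.
s = ((k1 + e * x1) + (k2 + e * x2)) % q

# One verification against the aggregate (2-of-2) key covers both inputs.
y_agg = (y1 * y2) % p
assert pow(g, s, p) == (R * pow(y_agg, e, p)) % p
print("single aggregate signature verifies for both inputs")
```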
This helps CoinJoin costs. Since CoinJoins will have lots of inputs (each participant will provide at least one, and probably will provide more, and larger participant sets are better for more privacy in CoinJoin), if all of them enabled cross-input signature aggregation, such large CoinJoins can have only a single signature.
This complicates the signing process for CoinJoins (the signers now have to sign cooperatively) but it can be well worth it for the reduced signature size and onchain cost.
But note that while cross-input signature aggregation improves the cost of CoinJoins, it does not improve their privacy! Equal-valued CoinJoins are still obvious and still readily bannable by privacy-hating exchanges. For improved privacy, see instead https://old.reddit.com/Bitcoin/comments/gqb3udesign_for_a_coinswap_implementation_fo

Why isn't cross-input signature aggregation in?

There are some fairly complex technical reasons why cross-input signature aggregation isn't in the current Taproot proposal.
The primary reason was to reduce the technical complexity of Taproot, in the hope that it would be easier to convince users to activate (while support for Taproot is quite high, developers have become wary of being hopeful that new proposals will ever activate, given the previous difficulties with SegWit).
The main technical complexity here is that it interacts with future ways to extend Bitcoin.
The rest of this writeup assumes you already know how Bitcoin SCRIPT works. If you don't understand Bitcoin SCRIPT at the low level, then the TLDR is that cross-input signature aggregation complicates how to extend Bitcoin in the future, so it was deferred to let the developers think more about it.
(this is how I understand it; perhaps pwuille or ajtowns can give a better summary.)
In detail, Taproot also introduces OP_SUCCESS opcodes. If you know about the OP_NOP opcodes already defined in current Bitcoin, well, OP_SUCCESS is basically "OP_NOP done right".
Now, OP_NOP is a do-nothing operation. It can be replaced in future versions of Bitcoin by having that operation check some condition, and then fail if the condition is not satisfied. For example, both OP_CHECKLOCKTIMEVERIFY and OP_CHECKSEQUENCEVERIFY were previously OP_NOP opcodes. Older nodes will see an OP_CHECKLOCKTIMEVERIFY and think it does nothing, but newer nodes will check if the nLockTime field has a correct specified value, and fail if the condition is not satisfied. Since most of the nodes on the network are using much newer versions of the node software, older nodes are protected from miners who try to misspend any OP_CHECKLOCKTIMEVERIFY/OP_CHECKSEQUENCEVERIFY, and those older nodes will still remain capable of synching with the rest of the network: a dedication to strict backward-compatibility necessary for a consensus system.
Softforks basically mean that a script that passes in the latest version must also be passing in all older versions. A script cannot be passing in newer versions but failing in older versions, because that would kick older nodes off the network (i.e. it would be a hardfork).
But OP_NOP is a very restricted way of adding opcodes. Opcodes that replace OP_NOP can only do one thing: check if some condition is true. They can't push new data on the stack, they can't pop items off the stack. For example, suppose instead of OP_CHECKLOCKTIMEVERIFY, we had added a OP_GETBLOCKHEIGHT opcode. This opcode would push the height of the blockchain on the stack. If this command replaced an older OP_NOP opcode, then a script like OP_GETBLOCKHEIGHT 650000 OP_EQUAL might pass in some future Bitcoin version, but older versions would see OP_NOP 650000 OP_EQUAL, which would fail because OP_EQUAL expects two items on the stack. So older versions will fail a SCRIPT that newer versions will pass, which is a hardfork and thus a backwards incompatibility.
OP_SUCCESS is different. Instead, old nodes, when parsing the SCRIPT, will see OP_SUCCESS, and, without executing the body, will consider the SCRIPT as passing. So the OP_GETBLOCKHEIGHT 650000 OP_EQUAL example will now work: a future version of Bitcoin might pass it, and existing nodes that don't understand OP_GETBLOCKHEIGHT will see OP_SUCCESS 650000 OP_EQUAL and will not execute the SCRIPT at all, instead passing it immediately. So a SCRIPT that might pass in newer versions will pass for older versions, which keeps the backward-compatibility consensus that a softfork needs.
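A toy evaluator showing the difference between the two upgrade paths (opcode semantics are heavily simplified for illustration, and real OP_SUCCESS short-circuits at parse time rather than at execution):

```python
def eval_script(script, node):
    stack = []
    for op in script:
        if isinstance(op, int):
            stack.append(op)
        elif op == "OP_EQUAL":
            if len(stack) < 2:
                return False                # nothing to compare -> script fails
            stack.append(stack.pop() == stack.pop())
        elif op == "OP_GETBLOCKHEIGHT":     # hypothetical future opcode
            if node == "old_nop":
                continue                    # treated as OP_NOP: push nothing
            if node == "old_success":
                return True                 # treated as OP_SUCCESS: pass, unexecuted
            stack.append(650000)            # new node: actually push the height
    return bool(stack and stack[-1])

script = ["OP_GETBLOCKHEIGHT", 650000, "OP_EQUAL"]
print(eval_script(script, "new"))          # True  on new nodes
print(eval_script(script, "old_nop"))      # False on old nodes -> hardfork!
print(eval_script(script, "old_success"))  # True  on old nodes -> still a softfork
```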
So how does OP_SUCCESS make things difficult for cross-input signature aggregation? Well, one of the ways to ask for a signature to be verified is via the opcode OP_CHECKSIGVERIFY. With cross-input signature aggregation, if a public key indicates it can be used for cross-input signature aggregation, then instead of OP_CHECKSIGVERIFY actually requiring the signature on the stack, the stack will contain a dummy 0 value for the signature, and the public key is instead added to a "sum" public key (i.e. an n-of-n that is dynamically extended by one more pubkey for each OP_CHECKSIGVERIFY operation that executes) for the single signature that is verified later by the cross-input signature aggregation validation algorithm.
The important part here is that the OP_CHECKSIGVERIFY has to execute, in order to add its public key to the set of public keys to be checked in the single signature.
But remember that an OP_SUCCESS prevents execution! As soon as the SCRIPT is parsed, if any opcode is OP_SUCCESS, that is considered as passing, without actually executing the SCRIPT, because the OP_SUCCESS could mean something completely different in newer versions and current versions should assume nothing about what it means. If the SCRIPT contains some OP_CHECKSIGVERIFY command in addition to an OP_SUCCESS, that command is not executed by current versions, and thus they cannot add any public keys given by OP_CHECKSIGVERIFY. Future versions also have to accept that: if they parsed an OP_SUCCESS command that has a new meaning in the future, and then execute an OP_CHECKSIGVERIFY in that SCRIPT, they cannot add the public key into the same "sum" public key that older nodes use, because older nodes cannot see them. This means that you might need more than one signature in the future, in the presence of an opcode that replaces some OP_SUCCESS.
Thus, because of the complexity of making cross-input signature aggregation work compatibly with future extensions to the protocol, cross-input signature aggregation was deferred.
submitted by almkglor to Bitcoin [link] [comments]

Amaury History Summary

https://www.change.org/p/bitcoin-cash-community-bitcoin-abc-must-step-down-from-control-of-bitcoin-cash
Introduction
Bitcoin Cash lived because Amaury took control against the community and was able to push through the brainwashed Bitcoin community, especially BitcoinTalk and Bitcoin, back in 2017. But today, after BCH has split into BCH and BSV, his actions threaten to destroy BCH itself, by doing things the remaining community that supports the BCH goal does not want.
Last March, he pushed forward the Infrastructure Funding Plan (IFP), a way to pay the developers from a small portion of the blocks mined by the miners. However, its implementation was not a very good one, due to its inherent limitations: only Bitcoin ABC, which was the main mining node, could dictate which wallet addresses the "IFP" coins go to. Bitcoin Cash Node (BCHN) was therefore created without the IFP, while Bitcoin ABC still has it in its code.
A few weeks ago, Johnathan Toomin proposed the ASERT DAA as a replacement DAA, or difficulty adjustment algorithm. However, because Toomin created it on BCHN and not on ABC, Bitcoin ABC did not take it, and Amaury created the Grasberg DAA in response. Grasberg was hastily made and was immediately put into Bitcoin ABC. The code is currently under review, and a lot of it is being heavily rewritten to conform with Toomin's ASERT, which had been the better proposal in itself. Take note that Grasberg did not start as a proposal.
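For context, the heart of ASERT is one exponential formula; here is a minimal floating-point sketch (the deployed aserti3-2d variant uses fixed-point integer math; the constants follow the BCH proposal, 600-second target spacing and a two-day halflife):

```python
IDEAL_SPACING = 600          # seconds per block
HALFLIFE = 2 * 24 * 3600     # target doubles/halves per two days of schedule drift

def asert_target(anchor_target, time_delta, height_delta):
    """Difficulty target some blocks after an anchor block.

    time_delta:   seconds elapsed since the anchor block
    height_delta: blocks mined since the anchor block
    """
    drift = time_delta - IDEAL_SPACING * height_delta
    return anchor_target * 2 ** (drift / HALFLIFE)

print(asert_target(1000, 600 * 100, 100))          # on schedule: 1000.0 (unchanged)
print(asert_target(1000, 600 * 100 + 86400, 100))  # a day behind: ~1414 (easier)
```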
In smaller related news, Shammah Chancellor created a meme competition about Bitcoin Cash's current community. People liked it and most of Read.Cash joined in; however, it broke an essential rule of that media platform, since the main goal was memes containing controversial name-calling insults aimed at Amaury and his status in the Bitcoin Cash community. Even if the competition was made in good faith, what we all fail to see is that such memes only encourage those of ill intent to continue them, knowing they will merely rile up those who do not want the struggle and the FUD, and the attacks will go deeper. It had the side effect of the Read.Cash developer banning himself from it.
Some people take memes seriously, and they do not care if you say they are satire or not.
This brings us to my point. We need a change of direction, and it starts with the hero who had turned into a villain and the community he has with it.
Amaury Séchet should stop controlling the Bitcoin Cash protocol and allow others to implement their proposals. If you can't do it, get someone else.
Shammah Chancellor should stop being a hypocrite and saying this (and this!) while going out to proclaim that Read.Cash is censored. Go do your own thing if you don't want it.
C. Edward Kelso should stop making anyone who is against Shammah or Amaury or any of the Bitcoin ABC members look bad. Write the truth if needed. Heavily imply it if you can't, but stop doing it in your badly-written articles.
Bitcoin Cash should not be led by these people if all they do is refuse to listen to the community at their backs. We already have great progress, and these setbacks will cost us the whole Bitcoin promise. We are already under attack. Do not let them destroy it.
submitted by steve_m0 to btc [link] [comments]

How EpiK Protocol “Saved the Miners” from Filecoin with the E2P Storage Model?

On October 20, Eric Yao, Head of EpiK China, and Leo, Co-Founder & CTO of EpiK, visited the Deep Chain Online Salon and discussed “How EpiK Protocol ‘saved the miners’ from Filecoin with the E2P storage model”. The following is a transcript of the sharing.
Sharing Session
Eric: Hello, everyone, I’m Eric. I graduated from the School of Information Science at Tsinghua University. My Master’s research was on data storage and big data computing, and I published a number of papers at top industry conferences.
Since 2013, I have invested in Bitcoin, Ethereum, Ripple, Dogecoin, EOS and other well-known blockchain projects, and have settled into the industry as an early technology-focused investor and observer with years of blockchain experience. I am also a blockchain community initiator and technology evangelist.
Leo: Hi, I’m Leo, the CTO of EpiK. Before co-founding EpiK, I spent 3 to 4 years working on blockchain: public chains, wallets, browsers, decentralized exchanges, task distribution platforms, smart contracts, etc., and I’ve shipped some great products. EpiK is an answer to the question we’ve been asking for years about how blockchain should land in practice, and we hope that EpiK is fortunate enough to be an answer for you as well.
Q & A
Deep Chain Finance:
First of all, let me ask Eric: on October 15, Filecoin’s mainnet launched, which aroused everyone’s attention, but at the same time the calls for forks within Filecoin never stopped, and the EpiK Protocol is one of them. What I want to know is: what kind of project is EpiK Protocol? For what reason did you choose to fork in the first place? And what are the differences between the forked project and Filecoin itself?
Eric:
First of all, let me answer the first question, what kind of project is EpiK Protocol.
With the Fourth Industrial Revolution already upon us, comprehensive intelligence is one of the core goals of this stage, and the key to comprehensive intelligence is how to make machines understand what humans know and learn new knowledge based on what they already know. Building knowledge graphs at scale is a key step towards full intelligence.
In order to solve the many challenges of building large-scale knowledge graphs, the EpiK Protocol was born. EpiK Protocol is a decentralized, hyper-scale knowledge graph that organizes and incentivizes knowledge through decentralized storage technology, decentralized autonomous organizations, and generalized economic models. Members of the global community will expand the horizons of artificial intelligence into a smarter future by organizing all areas of human knowledge into a knowledge graph that will be shared and continuously updated as the eternal knowledge vault of humanity.
And then, for what reason was the fork chosen in the first place?
EpiK’s project founders are all senior blockchain industry practitioners who have been closely following industry developments and application scenarios, among which decentralized storage is a very fresh one.
However, during Filecoin’s development, the team found that, due to some design mechanisms and historical reasons, Filecoin had deviated from the original intention of the project: for example, the overly harsh penalty mechanism, meant to counter threats, ends up weakening security, and the computing power competition has led to a computing power monopoly by large miners, who thereby monopolize the packaging rights and can inflate their computing power by uploading useless data themselves.
These problems will make the data environment on Filecoin worse and worse, leading to a lack of real value in the on-chain data, high data redundancy, and difficulty commercializing the project.
Having noted the above problems, the project proposes to introduce multiple roles and a decentralized collaboration platform (a DAO) to ensure the high value of on-chain data through a reasonable economic model and incentive mechanism, and to store the high-value data, the knowledge graph, on the blockchain through decentralized storage, so that the lack of on-chain data value and the monopoly of large miners’ computing power can be solved to a large extent.
Finally, what differences exist between the forked project and Filecoin itself?
On the basis of the above-mentioned issues, EpiK’s design is very different from Filecoin’s. First of all, EpiK is more focused in terms of business model, and it faces a different market and track from the cloud storage market where Filecoin sits, because decentralized storage has no advantage over professional centralized cloud storage in terms of storage cost and user experience.
EpiK focuses on building a decentralized knowledge graph, which reduces data redundancy and safeguards the value of data in the distributed storage chain while preventing the knowledge graph from being tampered with by a few people, thus making the commercialization of the entire project reasonable and feasible.
From the perspective of ecosystem construction, EpiK treats miners in a more friendly way and solves Filecoin’s pain points to a large extent. First, it changes Filecoin’s storage collateral and commitment collateral into a one-time collateral.
Miners participating in EpiK Protocol are only required to pledge 1,000 EPK per miner, and only once before mining, not for each sector.
What is the concept of 1,000 EPK? You only need to participate in pre-mining for about 50 days to earn this amount for pledging. The EPK pre-mining campaign is currently underway; it runs from early September to December, with a daily release of 50,000 ERC-20 standard EPK, and the pre-mining nodes whose applications are approved divide these tokens according to the day’s mining ratio. These tokens can be exchanged 1:1 directly once the main network launches. This move will continue to expand the number of miners eligible to participate in EPK mining.
Secondly, EpiK has a more lenient penalty mechanism, different from Filecoin’s official consensus, storage and contract penalties. Because data can only be uploaded by field experts (the “Expert to Person” mode) and every miner’s data is backed up, one or more miners going offline will not have much impact on the network, and a miner who fails to submit the proof of spacetime in time because of being offline only forfeits the effective computing power of that sector, not the pledged coins.
If the miner can re-submit the proof of spacetime within 28 days, he regains the power.
Unlike Filecoin’s 32 GB sectors, EpiK’s encapsulated sectors are smaller, only 8 MB each, which to a great extent solves Filecoin’s sector space wastage problem, and every miner has the opportunity to complete encapsulation quickly, which is very friendly to miners with small computing power.
The data and quality constraints will also ensure that the effective computing power gap between large and small miners does not widen.
Finally, unlike Filecoin’s P2P data uploading model, EpiK changes data uploading and maintenance to E2P uploading: field experts upload the data and ensure the quality and value of what goes on the chain, while a rational economic model introduces a game relationship between the data storage roles and the data generation roles to ensure the stability of the whole system and a continuous, high-quality output of on-chain data.
Deep Chain Finance:
Eric, on the eve of Filecoin’s mainnet launch, issues such as Filecoin’s pre-collateral aroused a lot of controversy among the miners. In your opinion, what kind of impact will Filecoin bring to itself and to the whole distributed storage ecosystem after it launches? Do you think the current confusing FIL prices are reasonable, and what should the normal price of FIL be?
Eric:
The Filecoin mainnet has launched and many potential problems have been exposed, such as the aforementioned high pre-collateral problem, the storage resource waste and computing power monopoly caused by unreasonable sector encapsulation, and the harsh penalty mechanism. These problems are quite serious and will greatly affect the development of the Filecoin ecosystem.
Here are two examples to illustrate. Take the problem of big miners’ computing power monopoly: once big miners have monopolized computing power, a very delicate state emerges. When a miner stores a file for an ordinary user, there is no way to verify on-chain whether what he stored was uploaded by someone else or by himself, because a miner can fake another identity and upload data for himself. So when any miner chooses which data to store, he has only one goal: inflating his computing power, as fast as possible.
In the matter of computing power there is no difference between storing other people’s data and storing my own. When I store someone else’s data, I don’t know that data; it comes from somewhere in the world, and the bandwidth quality between me and its owner may not be good enough.
The best option is to store my own local data, which makes economic sense, and that results in no one storing data for anyone else on the chain at all. Miners only store their own data, because it’s the most economical for them, and the network then has essentially no storage utility; no one is providing storage for the mass of retail users.
The harsh penalty mechanism will also severely deplete miners’ profits, because DDoS attacks are a very common technique for attackers, and a big miner can earn a very high profit in a short period by attacking other miners; this is a profitable move for all big miners.
As far as the status quo is concerned, the vast majority of miners are actually not very well maintained, so they are not well protected even against low-grade DDoS attacks. So the penalty regime is grim for them.
The contradiction between an unreasonable system and real demand will inevitably make the system evolve in a more reasonable direction, so there will be many forked projects that are more reasonable in terms of mechanism, attracting Filecoin miners and diverting storage power.
Since every such project is in the decentralized storage track, their demands on miners are similar or even mutually compatible, so miners will tend toward the forked projects with better economic benefits and business scenarios, filtering out the projects with real value on the ground.
As for the chaotic FIL price: FIL is a project that has been going for several years and carries too many expectations, so one can only say the current situation has its own reasons for existing. There is no way to predict a reasonable price for FIL, because in the long run one has to consider whether the project’s commercialization lands and the actual value of the data on the chain. In other words, we need to keep observing whether Filecoin becomes a game of computing power or a real value carrier.
Deep Chain Finance:
Leo, we just mentioned that the pre-collateral issue of Filecoin caused dissatisfaction among the miners, and after the mainnet launch the second-round space race test coins were directly turned into real coins, and the official selling of FIL hit the market, so many miners said they were betrayed. What I want to know is: EpiK’s main motto is “save the miners eliminated by Filecoin”. How does EpiK deal with the various problems of Filecoin, and how will EpiK achieve this “saving”?
Leo:
Filecoin’s tacit approval of computing power inflation was in effect a declaration that the official team had chosen to abandon the small miners. And turning the test coins into real coins also hurt the interests of the loyal big miners in one cut. We do not know why such low-level problems happened; we can only regret them.
EpiK wasn’t created just to fork Filecoin. Because EpiK wants to build a shared knowledge graph ecosystem, it had to integrate decentralized storage, so the most hardcore part of Filecoin, the PoRep and PoSt decentralized verification technology, was chosen. In order to ensure the quality of the knowledge graph data, EpiK only allows community-voted field experts to upload data, so EpiK naturally prevents miners from inflating computing power, and there is no reason for valueless data to take up such expensive decentralized storage resources.
With computing power inflation impossible, the difference between big miners and small miners is minimal while the amount of knowledge graph data is still small.
We can’t say that we can save the big miners, but we are definitely the optimal choice for the small miners currently being eliminated by Filecoin.
Deep Chain Finance:
Let me ask Eric: according to the EpiK protocol, EpiK adopts the E2P model, which allows only voted-in field experts to upload data. This is very different from Filecoin’s P2P model, which allows individuals to upload data as they wish. In your opinion, what are the advantages of the E2P model? And if only voted experts can upload data, does that mean that the EpiK protocol is not available to everyone?
Eric:
First, let me explain the advantages of the E2P model over the P2P model.
There are five roles in the DAO ecosystem: miners, coin holders, field experts, bounty hunters and gateways. These five roles share the EPK generated every day once the main network launches.
Miners receive 75% of the EPK, field experts 9%, and voting users share 1%.
The other 15% fluctuates based on the network’s daily traffic, and this 15% is partly a game between the miners and the field experts.
Let me first describe the relationship between these two roles.
The first group of field experts is selected by the Foundation; they cover different areas of knowledge (a wide range, including not only serious subjects but also home, food, travel, etc.). This group of field experts can recommend the next group, and a recommended expert only needs to get 100,000 EPK votes to become a field expert.
The field expert’s role is to submit high-quality data to the miners, who are responsible for encapsulating this data into blocks.
Network activity is judged by the amount of EPK pledged by the entire network for daily traffic (1 EPK = 10 MB/day); a higher percentage indicates higher data demand, which requires the miners to increase bandwidth quality.
If data demand decreases, field experts have to provide higher-quality data instead. It is like a library: more visitors require more seats, i.e. the miners get paid to upgrade bandwidth.
When there are fewer visitors, more money is needed to buy better-quality books to attract visitors, i.e. money goes to the bounty hunters and field experts to generate more quality knowledge graph data. The game between miners and field experts is the most important game in the ecosystem, unlike the game between the officials and the big miners in the Filecoin ecosystem.
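Some back-of-the-envelope arithmetic using only the stated 1 EPK = 10 MB/day rate (the traffic figure is made up for illustration):

```python
MB_PER_EPK_PER_DAY = 10

def epk_pledge_needed(daily_traffic_gb):
    # EPK the network must have pledged to cover a given daily traffic.
    return daily_traffic_gb * 1024 / MB_PER_EPK_PER_DAY

print(epk_pledge_needed(50))  # serving 50 GB/day needs 5120.0 EPK pledged
```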
The game relationship between data producers and data storers, together with a more rational economic model, will inevitably lead the E2P model to generate stored on-chain data of much higher quality than the P2P model, and the quality of bandwidth for data access will also be better than under the P2P model, resulting in greater business value and better landing scenarios.
I will then answer the question of whether this means the EpiK protocol is not universally accessible to all.
The E2P model only constrains the quality of the data generated and stored, not the roles in the ecosystem. On the contrary, with the introduction of the DAO model, the variety of roles in the EpiK ecosystem is not limited, and it includes roles for ordinary people (for example, bounty hunters who are competent at their tasks), giving everyone a logical way to participate in the system.
For example, a miner with computing power can provide storage; a person with certain domain knowledge can apply to become an expert (this includes history, technology, travel, comics, food, etc.); and a person willing to label and correct data can become a bounty hunter.
Various efficient support tools from the project will lower the barriers to entry for these roles, allowing different people to do their part in the system and together contribute to the ongoing generation of a high-quality decentralized knowledge graph.
Deep Chain Finance:
Leo, some time ago EpiK released a whitepaper and an economic whitepaper, explaining the EpiK concept from the perspectives of technology and the economic model respectively. What I would like to ask is: what are the shortcomings of current distributed storage projects, and how will the EpiK protocol improve on them?
Leo:
Distributed storage can easily be misunderstood as something like Ali’s OceanDB, but in the blockchain field we should focus on decentralized storage first.
There is a big problem with the decentralized storage currently on the market, which is “why not eat meat porridge” (a Chinese idiom along the lines of “let them eat cake”).
How to understand this? Is decentralized storage cheaper than centralized storage by its technical principle? If it were, centralized storage would be too rubbish for comparison.
So what incentive does the average user have to spend more money to store data on decentralized storage?
Is it safer?
Miners on decentralized storage can shut down at any time, so it is by no means safer than keeping a copy each with Ali and Amazon.
More private?
There’s no difference between encrypted data stored on decentralized storage and encrypted data stored on Amazon.
Faster?
The 10,000 gigabytes of bandwidth in decentralized storage simply doesn’t compare to the fiber in a centralized server room. This is the root problem of the business model: no one is using it and no one is buying it, so what is the big vision for?
The goal of EpiK is to guide all community participants in jointly building and sharing field knowledge graph data, which is the best way for robots to understand human knowledge; the more knowledge graph data there is, the more knowledge a robot has, and the more intelligent it becomes, exponentially so. That is, EpiK uses decentralized storage technology to capture the value of exponentially growing data with linearly growing hardware costs, and that’s where the buy-in for EPK comes from.
Organized data is worth a lot more than organized hard drives, and there will be demand for EPK once robots need intelligence.
Deep Chain Finance:
Let me ask Leo: how many forked projects does Filecoin have so far, roughly? Do you think there will be more or fewer waves of forks after the mainnet launch? Have the requirements of miners at large changed when it comes to participation?
Leo:
We don’t have specific statistics. Now that the mainnet has launched, we feel forked projects will increase; there are so many sidelined miners in the market that they need to be organized efficiently.
However, we currently see that most forked projects simply modify the parameters of Filecoin’s economic model, which is undesirable. That level of modification can’t change the status quo of miners inflating computing power; the only change for the market is to make some of the big miners feel more comfortable mining, which won’t help promote the landing of the decentralized storage ecosystem.
We need more reasonable landing scenarios so that idle mining resources can be turned into effective productivity, pitching a 100x coin instead of committing to one FOMO sentiment after another.
Deep Chain Finance:
How far along is the EpiK Protocol project, Eric? What other big moves are coming in the near future?
Eric:
The development of the EpiK Protocol is divided into 5 major phases:
Phase I: the test network, “Obelisk”.
Phase II: Mainnet 1.0, “Rosetta”.
Phase III: Mainnet 2.0, “Hammurabi”.
Phase IV: enriching the knowledge graph toolkit.
Phase V: enriching the knowledge graph application ecosystem.
We are currently in the first phase, the “Obelisk” test network: anyone can sign up to participate in the testnet pre-mining to obtain ERC-20 EPK tokens, which are exchangeable one-to-one after the mainnet launches.
We have recently launched ERC-20 EPK on Uniswap; you can buy and sell it freely there, or download our EpiK mobile wallet.
In addition, we will soon launch the EpiK Bounty platform, and we welcome all community members to do tasks together to build the EpiK community. At the same time, we are also pushing forward centralized exchange listings for the token.
Users’ Questions
User 1:
Some KOLs said Filecoin has consumed its value for the next few years, so it will plunge. What do you think?
Eric:
First of all, a judgment about the market has to correspond to the cycle. Before being bearish on FIL, the first judgment to make is whether you are bearish on the project’s economic model or bearish on the distributed storage track itself.
We are very confident in the distributed storage track. It will certainly face cycles of growth and decline, and those cycles are how better projects get selected.
Since the existing group of miners and the computing power already produced are fixed, and since EpiK miners and FIL miners are compatible, miners will at any time choose the more promising and economically viable projects.
As for the claim that Filecoin has consumed the value of the next few years and will therefore plunge: the plunge is not a prediction; in this industry one has to keep learning, iterating and judging value. Market sentiment going up or down is one aspect, but there are other very important factors, for example the big washout in March this year. So one can only say it will slow down the development of the FIL community; prices are indeed unpredictable.
User2:
Actually, in the end, if there are no applications and no one really uploads data, the market value will drop. So what are the landing applications of EpiK?
Leo: The best and most direct application of EpiK’s knowledge graph is question-and-answer systems, which can be an intelligent legal advisor, an intelligent medical advisor, an intelligent chef, an intelligent tour guide, an intelligent game-strategy assistant, and so on.
submitted by EpiK-Protocol to u/EpiK-Protocol [link] [comments]

A criticism of the article "Six monetarist errors: why emission won't feed inflation"

(be gentle, it's my first RI attempt :P; I hope I can do justice to the subject. This is my layman understanding of many macro subjects, which may be flawed... I hope you can enlighten me if I have fallen short of a good RI)
Introduction
So, today a heterodox-leaning Argentinian newspaper, Ambito Financiero, published an article criticizing monetarism called "Six monetarist errors: why emission won't feed inflation". I find that it doesn't properly address monetarism, confuses it with other "economic schools" (for whatever that term is worth today), and may be misleading, so I was inspired to write a refutation and share it with all of you.
In some ways criticizing monetarism is more of a historical discussion, given that the mainstream has changed since then. Stuff like New Keynesian models is the bleeding edge, not Milton Friedman style monetarism. That these things keep being discussed is more a symptom that Argentinian political culture is kind of stuck in the 70s on economics.
Before getting to the meat of the argument, it's good to have in mind some common definitions about money supply measures (specifically, MB, M1 and M2). These definitions apply to US but one can find analogous stuff for other countries.
Argentina, for lack of access to credit given its economic mismanagement, and with government income falling because of the recession, is monetizing deficits way more than before (about half of the budget, apparently, is money-financed), yet we have seen some disinflation (worth mentioning that there have been widespread price freezes since a few months ago). The author reasons that monetary phenomena cannot properly explain inflation, that other explanations are needed, and condemns monetarism. Here are the six points he makes:
1. Is it a mechanical rule?
This way, we can ask by symmetry: if it is certain that when emission increases, inflation increases, then the reverse should happen when emission turns negative, yielding negative inflation. Nonetheless, we know this does not happen: prices increase easily and are quite rigid downwards. So the identity between emission and inflation does not work like that: deflation almost never exists, and the rhythm of price movements cannot be remotely controlled with money quantity alone. There is no mechanical relationship between one thing and the other.
First, the low-hanging fruit: deflation is not that uncommon. For those of you who live in the US and Europe this should be obvious, given the difficulties central banks have had achieving their targets, but even Argentina saw deflation during its depression 20 years ago.
Second, we have to be careful with what we mean by emission. A statement of quantity theory of money (extracted from "Money Growth and Inflation: How Long is the Long-Run?") would say:
Inflation occurs when the average level of prices increases. Individual price increases in and of themselves do not equal inflation, but an overall pattern of price increases does. The price level observed in the economy is that which leads the quantity of money supplied to equal the quantity of money demanded. The quantity of money supplied is largely controlled by the [central bank]. When the supply of money increases or decreases, the price level must adjust to equate the quantity of money demanded throughout the economy with the quantity of money supplied. The quantity of money demanded depends not only on the price level but also on the level of real income, as measured by real gross domestic product (GDP), and a variety of other factors including the level of interest rates and technological advances such as the invention of automated teller machines. Money demand is widely thought to increase roughly proportionally with the price level and with real income. That is, if prices go up by 10 percent, or if real income increases by 10 percent, empirical evidence suggests people want to hold 10 percent more money. When the money supply grows faster than the money demand associated with rising real incomes and other factors, the price level must rise to equate supply and demand. That is, inflation occurs. This situation is often referred to as too many dollars chasing too few goods. Note that this theory does not predict that any money-supply growth will lead to inflation—only that part of money supply growth that exceeds the increase in money demand associated with rising real GDP (holding the other factors constant).
So it's not mere emission, but the money supply growing faster than money demand, that we should consider. Negative emission is therefore not a necessary condition for deflation in this theory.
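The bookkeeping behind that passage comes from the equation of exchange, MV = PY; in growth rates, inflation is approximately money growth plus velocity growth minus real GDP growth. A trivial sketch (the numbers are made up):

```python
def implied_inflation(money_growth, velocity_growth, real_gdp_growth):
    # Growth-rate approximation of the equation of exchange M*V = P*Y.
    return money_growth + velocity_growth - real_gdp_growth

# The quote's example: 10% money growth absorbed by 10% real income growth.
print(implied_inflation(0.10, 0.0, 0.10))  # 0.0 -> no inflation
print(implied_inflation(0.40, 0.0, 0.02))  # 0.38 -> ~38% inflation
```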
It's worth mentioning that the relationship with prices is observed for a broad measure of money (M2) and after a lag. From the same source as this excerpt, one can observe in Fig. 3a that the correlation between inflation and money growth for the US becomes stronger the longer the data is averaged. Price rigidities don't have to change this long-term relationship per se.
But what about causality, and Argentina? This neat paper shows regressions over two historical periods: 1976-1989 and 1991-2001. The same relationship between M2 and inflation is observed in both, stronger in the first, highly inflationary period and weaker in the second, more stable one. The regressions show a 1-to-1 relationship in the high inflation period but deviate a bit in the low inflation period (yet the relationship is still there). Granger causality, as interpreted in the paper, shows prices caused money growth in the high inflation period (arguably because spending was monetized), while the reverse was true for the more stable period.
So one can argue that there is a mechanical relationship, albeit one more complicated than simple QTOM theory. The relationship is complicated for low inflation economies too; it gets more relevant the higher inflation is.
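For the curious, this is the kind of test involved; a sketch on synthetic data where money growth leads inflation by one period (with real data you would feed in the Argentine M2 growth and CPI inflation series for each subperiod):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200
money_growth = rng.normal(0.02, 0.01, n)
inflation = 0.9 * np.roll(money_growth, 1) + rng.normal(0, 0.002, n)
inflation[0] = money_growth[0]  # drop the wrap-around sample from np.roll

# Column order matters: this tests whether column 2 Granger-causes column 1.
data = np.column_stack([inflation, money_growth])
results = grangercausalitytests(data, maxlag=2)  # low p-values -> money growth Granger-causes inflation
```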
Another point the author makes is that liquidity trap is often ignored. I'll ignore the fact that you need specific conditions for the liquidity trap to be relevant to Argentina and address the point. Worth noting that while market monetarists (not exactly old fashioned monetarists) prefer alternative explanations for monetary policy with very low interest rates, this phenomena has a good monetary basis, as explained by Krugman in his famous japanese liquidity trap paper and his NYT blog (See this and this for some relevant articles). The simplified version is that while inflation may follow M2 growth with all the qualifiers needed, central banks may find difficulties targeting inflation when interest rates are low and agents are used to credible inflation targets. Central banks can change MB, not M2 and in normal times is good enough, but at those times M2 is out of control and "credibly irresponsible" policies are needed to return to normal (a more detailed explanation can be found in that paper I just linked, go for it if you are still curious).
It's not that monetary policy doesn't work; it's that central banks have to do very unconventional stuff to achieve their targets in a low interest rate environment. It's still an open problem, but given that symmetric inflation targeting policies are becoming more popular, I'm optimistic.
2 - Does inflation have one or many causes?
In Argentina we know that the main determinant of inflation is dollar price increases. On top of that act economic concentration in key markets, utility price adjustments, fuel prices, distributive struggles, external commodity values, expectations, productive disequilibria, world interest rates, the economic cycle, seasonality and external sector restrictions.
Let's see a simple example: during Macri's government, from mid-2017 to 2019, emission was practically null, but when the dollar value doubled in 2018, inflation doubled too (it went from 24% to 48% in 2018), and it went up again a year later. We see here that the empirical validity of monetarist theory was absent.
For the first paragraph, one could try to run econometric tests for all those variables, at least from my layman's perspective. But given that it doesn't pass the smell test (has any country exploited this in its favor while ignoring monetary policy? Also, I have shown there is at least some evidence for the money-price relationship above), I'll instead address what happened during Macri's government and whether monetarism (or at least some reasonable extension of it) can account for it.
For a complete description of macroeconomic policy in that period, Sturzenegger's account is a good one (even if a bit unreliable, given he was the central banker for that government and is considered to have been a failure). The short version is that central banks use bonds to manage monetary policy and absorb money; given the country's history of defaults, the Argentinian Central Bank (BCRA) uses its own peso-denominated bonds instead of treasury bonds. In that period the BCRA still financed the treasury, but the amount was reduced. It also emitted pesos to buy dollar reserves and then sterilized them, perhaps risking credibility further.
Near the end of 2017 it was evident the government had limited appetite for budget cuts; it had more or less abandoned its inflation targeting regime, and the classic problem of fiscal dominance emerged, as shown in the classic "Some Unpleasant Monetarist Arithmetic" paper by Sargent and Wallace. Monetary policy gets less effective when the real value of bonds falls, and raising interest rates may be counterproductive in that environment. Rational expectations are needed to complement QTOM.
So, given that Argentina's reforms were visibly going nowhere, it was expected that money financing would increase at some point in the future; BCRA bonds were dumped in 2018 and 2019 as their value was perceived to have decreased, and so peso demand fell. It's not that the dollar value increased and inflation followed, but rather that peso demand fell suddenly!
The IMF deal asked for MB growth to be null or almost null, but that doesn't say a lot about M2 (which is the relevant variable here). Without credible policies, peso demand keeps falling because bonds are dumped even more (see 2019 for a hilariously brutal example of that).
It's not emission per se, but rather that it doesn't adjust properly to peso demand (which is falling). That doesn't mean increasing interest rates is enough to fix it, following the Sargent and Wallace model.
This is less a strict proof that a monetary phenomenon is involved and more a statement that the author hasn't shown any problem with that view; there are reasonable models for this situation. It doesn't look like a clear empirical failure to me yet.
3 - What are we talking about when we talk about emission?
The author mentions many money measures (M0, M1, M2) but doesn't address them meaningfully, as I tried to do above. It feels more like a rhetorical device, because there is no point here beyond "this stuff exists".
Also, it's worth pointing out that there are actual criticisms to be made of Friedman on those grounds. He failed to forecast US inflation at some points when he switched to M1 instead of using M2, although he later reverted to M2. Monetarism kind of "failed" there (it also "failed" in the sense that modern central banks don't use money, but instead interest rates, as their main tool; "failed" because, despite being outdated, it was influential on modern central banking). This is often brought into these discussions as if economics hadn't moved beyond that. For an account of Friedman's thoughts on monetary policy and his failures, see this.
4 - Why do many countries print money without inflation increasing?
There is a mention of the Japanese situation in the 90s (the liquidity trap), which I have addressed above.
The author mentions that many countries "printed" like crazy during the pandemic, and he says:
Monetarism apologists answer, when confronted with those grave empirical problems, that in "serious countries" the population "trusts" its monetary authorities, money demand even increasing in those places despite the emission. Curious, though: it's an appeal to "trust", implying that the relationship between emission and inflation is not objective, but subjective and cultural - an appreciation that abandons the mechanicism and basic certainty of monetarism, because evaluations and diagnostics, many times ideological, contextual or historical, intervene.
That's just a restatement of applying rational expectations to central bank operations. I don't see a problem with that. Rational expectations are not magic; they are an assessment of future outcomes by economic actors. Humans may not be 100% rational, but central banking somehow works in many countries. You cannot just say that people are ideologues and leave it at that. What's your model?
Worth noting the author shills for bitcoin a bit in this section, for more cringe.
5 - Are we talking about a physical science or a social science?
Again, a vague mention of rational expectations ("populist and pro-market politicians could apply the same policies with different results because of how agents respond ideologically, and expectations") without handling the subject meaningfully. It criticizes universal macroeconomic rules that apply everywhere (this is often used to dismiss evidence from other countries uncritically, more than as a meaningful point).
6 - How do limits work?
The last question allows us to concede something to monetarism: we can indeed think of a link between emission and inflation in extreme conditions. Otherwise, with no monetary rule, no government would need taxes; it could simply emit and spend all it needs without consequence. We know it's not like that: no government can print infinitely without undesirable effects.
Ok, good disclaimer, but given what he wrote before, what's the mechanism which causes money printing to be inflationary at some point? It was rejected before but now it seems that it exists. What was even the point of the article?
Now, the problem is thinking of monetarism at its extremes: without emission we sometimes have inflation, at other times we have emission without inflation; we know that negative emission doesn't guarantee negative inflation, but that if emission is radically uncontrolled there will be economic effects.
As I wrote above, that's not what monetarism (even in its simplest form) says, nor a consequence of it. You can see some deviations in low-inflation environments, but that's not really Argentina's current situation.
Let's add other problems: the elasticity between money and prices is not evident. Neither are the time lags over which money can act or stay neutral. So the question is about the limit cases, which monetarism has some basis for but some difficulty explaining: under which rules and at which moments it works, and when it doesn't.
I find the time-lag point to be a red herring. You can observe the relationship empirically, and not having a proper short/medium-run model doesn't invalidate QTOM in the long run. While it may be that increasing interest rates or freezing MB is not effective, that's less a problem of the theory and more a problem of policy implementation.
Conclusion:
I find that the article doesn't truly understand monetarism to begin with (see the points it makes about emission and money demand), nor how it's implemented in practice, nor does it seem aware of more modern theories that, while putting money in the background, don't necessarily invalidate it (rational expectations ideas, and eventually New Keynesian work, which addresses things like liquidity traps properly).
There are proper criticisms to be made of Friedman's old ideas, but he was still a relevant man in his time, and the economic community has moved on to new, better theories that owe some debt to him. I feel most economic discussion about monetarism in Argentina is a strawman of mainstream economics, or an attack on Austrians more than genuine points ("monetarism" is used as a shorthand for those who think inflation is a monetary phenomenon, more than referring to Friedman and his disciples per se).
submitted by Neronoah to badeconomics [link] [comments]

Ceterum censeo: in some yet undefined future, the halving must be removed. The question is not if, but when (and how).

Bitcoin's mining ecosystem is saturated. Period.
The ASIC race has weakened as it has moved closer to technological limits, achieving some kind of fragile balance. The best proof of this is Bitmain's search for new areas (vide: AI research).
After more than a decade, we are smarter than Satoshi in at least one area: we have the knowledge acquired over these more than ten years...
"Bitcoin should have had a 0.1% or 1% monetary inflation tax to pay for security." (Peter Todd): https://www.google.com/search?q=peter+todd+inflation
If someone cannot accept the inevitability of this right now, let him consider whether he would change his mind while watching consecutive halvings after which the network hashrate drops by half and never returns to its previous level... (I suppose we may see that process in 4 years already...)
And the trigger could look like this (of course after general consensus; see the pseudocode in the EDIT below):
That would be an "organoleptic" determination of the optimal inflation rate for the Bitcoin network - and there is simply no better way to determine it. Just don't believe the simplification that when it's hard to find an optimum for something, the ultimate solution is zero. It's not.
Remember that Bitcoin is not an entity detached from reality. There are various limitations (e.g. the limits of nanometer-scale process technology, the finite amount of cheap energy that can be obtained on a global scale, etc.). Bitcoin functions within certain realities - whether we like it or not.
Sooner or later the situation described above will catch up with us. It is worth being mentally prepared for it - and not starting another war, but rather discussing it calmly. If, for example, 90% of the community considers that something is necessary for the development of Bitcoin - such a change will take place.
Take, for example, the theoretical replacement of ECDSA due to the threat of quantum computers - acceptance would take place at an express rate. It will be similar in this matter. It just shouldn't come too late for corrective action.
A small inflation rate - decreasing continuously and slowly, but never to zero, and, last but not least, determined by reality - seems to be the most proper measure in this case.
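As a purely illustrative sketch of what such a schedule could look like (my own numbers, not the author's proposal):

# Illustrative tail-emission schedule: inflation decays slowly toward a small floor, never to zero.
def annual_inflation(year, start=0.017, floor=0.001, decay=0.97):
    # geometric decay from `start` toward a permanent `floor` (all parameters hypothetical)
    return floor + (start - floor) * decay ** year

for year in (0, 10, 50, 100):
    print(year, f"{annual_inflation(year):.4%}")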
Ceterum censeo...
EDIT: If:
a) tx fees are able to keep miners mining - perfect
b) miners are pushed out by consecutive halvings - not perfect
What I proposed is an unbiased way of checking that (the overall health of the Bitcoin ecosystem):
if (current_network_hashrate < network_hashrate_4_years_ago) {
    // hashrate failed to recover over a full halving cycle:
    // fees alone are not keeping miners on board, so act (e.g. enable minimal tail emission)
    do_something();
} else {
    // ecosystem is healthy: tx fees keep miners mining, change nothing
    do_nothing();
}
submitted by jk_14r to Bitcoin [link] [comments]

Why I'm bullish on Zilliqa (long read)

Edit: TL;DR added in the comments
 
Hey all, I've been researching coins since 2017 and have gone through 100s of them in the last 3 years. I got introduced to blockchain via Bitcoin of course, analyzed Ethereum thereafter, and from that moment I've had a keen interest in smart contract platforms. I'm passionate about Ethereum, but I find Zilliqa to have a better risk-reward ratio, especially because Zilliqa has found an elegant balance between being secure, decentralized and scalable in my opinion.
 
Below I post my analysis of why from all the coins I went through I’m most bullish on Zilliqa (yes I went through Tezos, EOS, NEO, VeChain, Harmony, Algorand, Cardano etc.). Note that this is not investment advice and although it's a thorough analysis there is obviously some bias involved. Looking forward to what you all think!
 
Fun fact: the name Zilliqa is a play on 'silica' (silicon dioxide), as in "silicon for the high-throughput consensus computer."
 
This post is divided into (i) Technology, (ii) Business & Partnerships, and (iii) Marketing & Community. I’ve tried to make the technology part readable for a broad audience. If you’ve ever tried understanding the inner workings of Bitcoin and Ethereum you should be able to grasp most parts. Otherwise, just skim through and once you are zoning out head to the next part.
 
Technology and some more:
 
Introduction
 
The technology is one of the main reasons why I’m so bullish on Zilliqa. First thing you see on their website is: “Zilliqa is a high-performance, high-security blockchain platform for enterprises and next-generation applications.” These are some bold statements.
 
Before we deep dive into the technology let’s take a step back in time first as they have quite the history. The initial research paper from which Zilliqa originated dates back to August 2016: Elastico: A Secure Sharding Protocol For Open Blockchains where Loi Luu (Kyber Network) is one of the co-authors. Other ideas that led to the development of what Zilliqa has become today are: Bitcoin-NG, collective signing CoSi, ByzCoin and Omniledger.
 
The technical white paper was made public in August 2017, and since then they have achieved everything stated in it, and also created their own open-source, intermediate-level smart contract language called Scilla (a functional programming language similar to OCaml).
 
Mainnet has been live since the end of January 2019, with daily transaction rates growing continuously. About a week ago mainnet reached 5 million transactions and 500,000+ addresses in total, with 2400 nodes keeping the network decentralized and secure. Circulating supply is nearing 11 billion, and currently only mining rewards are left. The maximum supply is 21 billion, with annual inflation currently at 7.13% and set only to decrease over time.
 
Zilliqa realized early on that the usage of public cryptocurrencies and smart contracts was increasing, while decentralized, secure, and scalable alternatives were lacking in the crypto space. They proposed to apply sharding to a public smart contract blockchain where the transaction rate increases almost linearly with the number of nodes. More nodes = higher transaction throughput and increased decentralization. Sharding comes in many forms, and Zilliqa uses network, transaction and computational sharding. Network sharding opens up the possibility of using transaction and computational sharding on top. Zilliqa does not use state sharding for now. We'll come back to this later.
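A toy model of why throughput scales with network sharding - shards of fixed size process transactions in parallel (the per-shard throughput figure below is an assumption for illustration, not Zilliqa's measured number):

# Toy throughput model for network sharding (illustrative numbers only).
SHARD_SIZE = 600        # nodes per shard
TPS_PER_SHARD = 700     # assumed per-shard throughput

def network_tps(total_nodes):
    shards = total_nodes // SHARD_SIZE  # the DS committee is ignored for simplicity
    return shards * TPS_PER_SHARD

for nodes in (1800, 2400, 3600):
    print(nodes, "nodes ->", network_tps(nodes), "tx/s")  # more nodes -> more shards -> more tx/s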
 
Before we continue dissecting how Zilliqa achieves this from a technological standpoint, it's good to keep in mind that making a blockchain decentralised, secure and scalable all at once is still one of the main hurdles preventing widespread usage of decentralised networks. In my opinion this needs to be solved before blockchains can get to the point where they create and add large-scale value. So I invite you to read the next section to grasp the underlying fundamentals. After all, these premises need to be true, otherwise there isn't a fundamental case to be bullish on Zilliqa, right?
 
Down the rabbit hole
 
How have they achieved this? Let’s define the basics first: key players on Zilliqa are the users and the miners. A user is anybody who uses the blockchain to transfer funds or run smart contracts. Miners are the (shard) nodes in the network who run the consensus protocol and get rewarded for their service in Zillings (ZIL). The mining network is divided into several smaller networks called shards, which is also referred to as ‘network sharding’. Miners subsequently are randomly assigned to a shard by another set of miners called DS (Directory Service) nodes. The regular shards process transactions and the outputs of these shards are eventually combined by the DS shard as they reach consensus on the final state. More on how these DS shards reach consensus (via pBFT) will be explained later on.
 
The Zilliqa network produces two types of blocks: DS blocks and Tx blocks. One DS block consists of 100 Tx blocks. As previously mentioned, there are two types of nodes concerned with reaching consensus: shard nodes and DS nodes. Becoming a shard node or DS node is determined by the result of a PoW cycle (Ethash) at the beginning of each DS block. All candidate mining nodes compete with each other, running the PoW (Proof-of-Work) cycle for 60 seconds, and the submissions achieving the highest difficulty are allowed onto the network. To put it in perspective: the average difficulty for one DS node is ~2 Th/s, equaling 2,000,000 Mh/s or 55 thousand+ GeForce GTX 1070 / 8 GB GPUs at 35.4 Mh/s. Each DS block, 10 new DS nodes are allowed in, and a shard node needs to provide around 8.53 GH/s currently (around 240 GTX 1070s). Dual mining ETH/ETC and ZIL is possible and can be done via mining software such as Phoenix and Claymore. There are pools, and if you have large amounts of (Ethash) hashing power available you could mine solo.
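A quick back-of-the-envelope check of those hardware figures (just restating the numbers quoted above):

# Sanity check of the DS-node and shard-node difficulty figures quoted above.
ds_hashrate_mhs = 2.0 * 1e6   # ~2 Th/s per DS node, expressed in Mh/s
shard_hashrate_mhs = 8.53e3   # ~8.53 GH/s per shard node, in Mh/s
gtx1070_mhs = 35.4            # Ethash rate of one GTX 1070

print(f"GPUs per DS node:    {ds_hashrate_mhs / gtx1070_mhs:,.0f}")     # ~56,497 (i.e. 55 thousand+)
print(f"GPUs per shard node: {shard_hashrate_mhs / gtx1070_mhs:,.0f}")  # ~241 (i.e. around 240)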
 
The 60-second PoW cycle is a peak performance and acts as an entry ticket to the network. The entry ticket is a sybil resistance mechanism and makes it incredibly hard for adversaries to spawn lots of identities and manipulate the network with them. This PoW process repeats after every 100 Tx blocks, which corresponds to roughly 1.5 hours. In between, no PoW needs to be done, meaning Zilliqa's energy consumption to keep the network secure is low. For more detailed information on how mining works click here.
Okay, hats off to you. You have made it this far. Before we go any deeper down the rabbit hole, we first must understand why Zilliqa goes through all of the above technicalities, and understand a bit better what a blockchain is on a more fundamental level. Because the core of Zilliqa's consensus protocol relies on pBFT (practical Byzantine Fault Tolerance), we need to know more about state machines and their function. Navigate to Viewblock, a Zilliqa block explorer, and then come back to this article. We will use this site to walk through a few concepts.
 
We have established that Zilliqa is a public and distributed blockchain. Meaning that everyone with an internet connection can send ZILs, trigger smart contracts, etc. and there is no central authority who fully controls the network. Zilliqa and other public and distributed blockchains (like Bitcoin and Ethereum) can also be defined as state machines.
 
Taking the liberty of paraphrasing examples and definitions given by Samuel Brooks’ medium article, he describes the definition of a blockchain (like Zilliqa) as: “A peer-to-peer, append-only datastore that uses consensus to synchronize cryptographically-secure data”.
 
Next, he states that "blockchains are fundamentally systems for managing valid state transitions". For some more context, I recommend reading the whole medium article to get a better grasp of the definitions and understanding of state machines. Nevertheless, let's try to simplify and compile it into a single paragraph. Take traffic lights as an example: all their states (red, amber, and green) are predefined, all possible outcomes are known, and it doesn't matter if you encounter the traffic light today or tomorrow - it will still behave the same. Managing the states of a traffic light can be done by triggering a sensor on the road or pushing a button, resulting in one traffic light's state going from green to red (via amber) and another light's from red to green.
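The traffic-light analogy maps directly onto a tiny state machine; a minimal sketch:

# Minimal state machine mirroring the traffic-light analogy: states and transitions are all predefined.
TRANSITIONS = {"green": "amber", "amber": "red", "red": "green"}

state = "green"
for _ in range(4):
    print(state)
    state = TRANSITIONS[state]  # each trigger causes exactly one valid, predictable transition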
 
With public blockchains like Zilliqa, this isn't so straightforward and simple. It started with block #1 almost 1.5 years ago, and every 45 seconds or so a new block linked to the previous block is added, resulting in a chain of blocks with transactions in them that everyone can verify, from block #1 to the current #647,000+ block. The state is ever-changing and the states it can find itself in are infinite. And while a traffic light might work in tandem with various other traffic lights, it's rather insignificant compared to a public blockchain, because Zilliqa consists of 2400 nodes who need to work together to reach consensus on what the latest valid state is, while some of these nodes may have latency or broadcast issues, drop offline, or deliberately try to attack the network, etc.
 
Now go back to the Viewblock page, take a look at the number of transactions, addresses, block and DS height, and then hit refresh. As expected, you see newly incremented values for one or all parameters. And how did the Zilliqa blockchain manage to transition from the previous valid state to the latest valid state? By using pBFT to reach consensus on the latest valid state.
 
After having obtained the entry ticket, miners execute pBFT to reach consensus on the ever-changing state of the blockchain. pBFT requires a series of network communications between nodes, and as such no GPU is involved (but the CPU is), resulting in low total energy consumed to keep the blockchain secure, decentralized and scalable.
 
pBFT stands for practical Byzantine Fault Tolerance and is an optimization on the Byzantine Fault Tolerant algorithm. To quote Blockonomi: “In the context of distributed systems, Byzantine Fault Tolerance is the ability of a distributed computer network to function as desired and correctly reach a sufficient consensus despite malicious components (nodes) of the system failing or propagating incorrect information to other peers.” Zilliqa is such a distributed computer network and depends on the honesty of the nodes (shard and DS) to reach consensus and to continuously update the state with the latest block. If pBFT is a new term for you I can highly recommend the Blockonomi article.
 
The idea of pBFT was introduced in 1999 - one of the authors even won a Turing award for it - and it is well researched and applied in various blockchains and distributed systems nowadays. If you want more advanced information than the Blockonomi link provides click here. And if you’re in between Blockonomi and the University of Singapore read the Zilliqa Design Story Part 2 dating from October 2017.
Quoting from the Zilliqa tech whitepaper: “pBFT relies upon a correct leader (which is randomly selected) to begin each phase and proceed when the sufficient majority exists. In case the leader is byzantine it can stall the entire consensus protocol. To address this challenge, pBFT offers a view change protocol to replace the byzantine leader with another one.”
 
pBFT can tolerate up to ⅓ of the nodes being dishonest (offline counts as Byzantine = dishonest), and the consensus protocol will function without stalling or hiccups. Once more than ⅓ but no more than ⅔ of the nodes are dishonest, the network stalls and a view change is triggered to elect a new DS leader. Only when more than ⅔ of the nodes (66%+) are dishonest do double-spend attacks become possible.
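Applied to a 600-node shard, the textbook pBFT bound n >= 3f + 1 works out as follows (a sketch of the arithmetic, not Zilliqa's exact quorum rule):

# pBFT safety bound: with n nodes, at most f = (n - 1) // 3 can be Byzantine.
n = 600             # nodes in one Zilliqa shard
f = (n - 1) // 3    # maximum tolerated dishonest/offline nodes
print(f)            # 199 -> consensus keeps running with up to 199 dishonest nodes
print(2 * f + 1)    # 399 -> signatures needed for a quorum in the textbook setting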
 
If the network stalls no transactions can be processed and one has to wait until a new honest leader has been elected. When the mainnet was just launched and in its early phases, view changes happened regularly. As of today the last stalling of the network - and view change being triggered - was at the end of October 2019.
 
Another benefit of using pBFT for consensus, besides low energy use, is the immediate finality it provides: once your transaction is included in a block and the block is added to the chain, it's done. Lastly, take a look at this article where three types of finality are defined: probabilistic, absolute and economic finality. Zilliqa falls under absolute finality (just like Tendermint, for example). Although lengthy already, we only skimmed some of the inner workings of Zilliqa's consensus: read the Zilliqa Design Story Part 3 and you will be close to having a complete picture of it. Enough about PoW, the sybil resistance mechanism, pBFT, etc. Another thing we haven't looked at yet is the degree of decentralization.
 
Decentralisation
 
Currently there are four shards, each consisting of 600 nodes: 1 shard with 600 so-called DS nodes (Directory Service - they need to achieve a higher difficulty than shard nodes) and 1800 shard nodes, of which 250 are shard guards (centralized nodes controlled by the team). The number of shard guards has steadily declined from 1200 in January 2019 to 250 as of May 2020. On the Viewblock statistics you can see that many of the nodes are located in the US, but those are only the (CPU parts of the) shard nodes that perform pBFT; there is no data on where the PoW sources are coming from. And when the Zilliqa blockchain starts reaching its transaction capacity limit, a network upgrade needs to be executed to lift the current cap of 2400 nodes, allowing more nodes and the formation of more shards, which will let the network keep scaling according to demand.
Besides shard nodes there are also seed nodes. The main role of seed nodes is to serve as direct access points (for end-users and clients) to the core Zilliqa network that validates transactions. Seed nodes consolidate transaction requests and forward these to the lookup nodes (another type of nodes) for distribution to the shards in the network. Seed nodes also maintain the entire transaction history and the global state of the blockchain which is needed to provide services such as block explorers. Seed nodes in the Zilliqa network are comparable to Infura on Ethereum.
 
The seed nodes were at first operated only by Zilliqa themselves, exchanges and Viewblock. Operators of seed nodes like exchanges had no incentive to open them to the greater public; they were centralised at first. Decentralisation at the seed node level has been steadily rolled out since March 2020 (Zilliqa Improvement Proposal 3). Currently the number of seed nodes is being increased, they are public-facing, and at the same time PoS is applied to incentivize seed node operators and make it possible for ZIL holders to stake and earn passive yields. Important distinction: seed nodes are not involved in consensus! That is still PoW as the entry ticket and pBFT for the actual consensus.
 
5% of the block rewards have been assigned to seed nodes (from the beginning in 2019) and those are used to pay out ZIL stakers. The 5% block rewards, at an annual yield of 10.03%, translate to roughly 610 MM ZIL in total that can be staked. Exchanges use the custodial variant of staking, and wallets like Moonlet will use the non-custodial version (starting in Q3 2020). Staking is done by sending ZIL to a smart contract created by Zilliqa and audited by Quantstamp.
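Inverting those figures as a rough consistency check (approximate, using only the numbers quoted above):

# Rough consistency check of the staking figures quoted above.
stakeable_zil = 610e6   # ~610 MM ZIL that can be staked
annual_yield = 0.1003   # 10.03% per year

seed_node_rewards = stakeable_zil * annual_yield  # ZIL paid out to stakers per year
total_block_rewards = seed_node_rewards / 0.05    # since seed nodes receive 5% of block rewards
print(f"{seed_node_rewards:,.0f} ZIL/year to stakers")       # ~61,183,000
print(f"{total_block_rewards:,.0f} ZIL/year block rewards")  # ~1.22 billion implied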
 
With a high amount of DS; shard nodes and seed nodes becoming more decentralized too, Zilliqa qualifies for the label of decentralized in my opinion.
 
Smart contracts
 
Let me start by saying I'm not a developer and my programming skills are quite limited, so I'm taking the ELI5 route (maybe ELI12). But if you are familiar with Javascript, Solidity or specifically OCaml, please head straight to Scilla - read the docs to get a good initial grasp of how Zilliqa's smart contract language Scilla works, and if you ask yourself "why another programming language?" check this article. If you want to play around with some sample contracts in an IDE click here. The faucet can be found here. More information on architecture, dapp development and the API can be found on the Developer Portal.
If you are more into listening and watching: check this recent webinar explaining Zilliqa and Scilla. Link is time-stamped so you’ll start right away with a platform introduction, roadmap 2020 and afterwards a proper Scilla introduction.
 
Generalized: programming languages can be divided into 'object-oriented' or 'functional'. Here is an ELI5 given by a software development academy: "all programs have two basic components: data - what the program knows - and behavior - what the program can do with that data. Object-oriented programming states that combining data and related behaviors in one place, called an 'object', makes it easier to understand how a particular program works. On the other hand, functional programming argues that data and behavior are different things and should be separated to ensure their clarity."
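A minimal illustration of that distinction (in Python rather than Scilla or OCaml, purely for familiarity):

# Object-oriented style: data and behavior live together in one object.
class Counter:
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1  # behavior mutates the object's own data

# Functional style: data and behavior are separate; functions return new values.
def increment(value: int) -> int:
    return value + 1     # no mutation; the output depends only on the input

c = Counter(); c.increment()
print(c.value, increment(0))  # 1 1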
 
Scilla is on the functional side and shares similarities with OCaml: OCaml is a general-purpose programming language with an emphasis on expressiveness and safety. It has an advanced type system that helps catch your mistakes without getting in your way. It's used in environments where a single mistake can cost millions and speed matters, is supported by an active community, and has a rich set of libraries and development tools. For all its power, OCaml is also pretty simple, which is one reason it's often used as a teaching language.
 
Scilla is blockchain agnostic and can be implemented on other blockchains as well; it is recognized by academics and won a Distinguished Artifact Award at the end of last year.
 
One of the reasons the Zilliqa team decided to create their own programming language, focused on preventing smart contract vulnerabilities, is that adding logic on a blockchain - programming - means that you cannot afford to make mistakes. Otherwise, it could cost you. It's all great and fun that blockchains are immutable, but updating your code because you found a bug isn't the same as with a regular web application, for example. And smart contracts inherently involve cryptocurrencies in some form, and thus value.
 
Another difference with programming languages on a blockchain is gas. Every transaction you do on a smart contract platform like Zilliqa or Ethereum costs gas. With gas you basically pay for computational costs. Sending a ZIL from address A to address B costs 0.001 ZIL currently. Smart contracts are more complex, often involve various functions, and require more gas (if gas is a new concept click here).
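Generic gas arithmetic looks like this (the numbers below are hypothetical placeholders chosen to reproduce the 0.001 ZIL transfer cost, not Zilliqa's actual gas parameters):

# total fee = gas consumed * gas price (illustrative values only)
gas_consumed = 50        # hypothetical gas units for a simple transfer
gas_price_zil = 0.00002  # hypothetical price per gas unit, in ZIL

fee = gas_consumed * gas_price_zil
print(f"{fee:.3f} ZIL")  # 0.001 ZIL, matching the quoted transfer cost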
 
So with Scilla, similar to Solidity, you need to make sure that “every function in your smart contract will run as expected without hitting gas limits. An improper resource analysis may lead to situations where funds may get stuck simply because a part of the smart contract code cannot be executed due to gas limits. Such constraints are not present in traditional software systems”. Scilla design story part 1
 
Some examples of smart contract issues you’d want to avoid are: leaking funds, ‘unexpected changes to critical state variables’ (example: someone other than you setting his or her address as the owner of the smart contract after creation) or simply killing a contract.
 
Scilla also allows for formal verification. Wikipedia to the rescue: In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
 
Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code.
 
Scilla is being developed hand-in-hand with formalization of its semantics and its embedding into the Coq proof assistant — a state-of-the art tool for mechanized proofs about properties of programs.”
 
Simply put, with Scilla and its accompanying tooling, developers can mathematically prove that the smart contract they've written does what they intend it to do.
 
Smart contract on a sharded environment and state sharding
 
There is one more topic I'd like to touch on: smart contract execution in a sharded environment (and the effect of state sharding). This is a complex topic, and I'm not able to explain it any more simply than what is posted here, but I will try to compress the post into something easy to digest.
 
Earlier on we established that Zilliqa can process transactions in parallel due to network sharding. This is where the linear scalability comes from. We can define simple transactions: a transaction from address A to B (Category 1), a transaction where a user interacts with one smart contract (Category 2), and the most complex ones, where triggering a transaction results in multiple smart contracts being involved (Category 3). The shards are able to process transactions on their own without interference from the other shards. With Category 1 transactions that is doable, and with Category 2 transactions it works sometimes, when the address is in the same shard as the smart contract, but with Category 3 you definitely need communication between the shards. Solving that requires a set of communication rules the protocol needs to follow in order to process all transactions in a generalised fashion. A simplified sketch follows below.
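Here is that simplified sketch (my own toy classification, not Zilliqa's actual routing rules - real cross-shard handling also depends on the receiver's shard):

# Toy classification of transactions in a sharded network (illustrative only).
def shard_of(address: str, num_shards: int = 4) -> int:
    return int(address, 16) % num_shards  # toy shard assignment by address

def category(sender: str, contracts_called: list) -> int:
    if not contracts_called:
        return 1  # plain A -> B transfer
    touched = {shard_of(sender)} | {shard_of(c) for c in contracts_called}
    if len(contracts_called) == 1 and len(touched) == 1:
        return 2  # one contract, in the same shard as the sender
    return 3      # multiple contracts and/or cross-shard communication needed

print(category("0xaa", []))                # 1
print(category("0xaa", ["0xaa"]))          # 2 (same toy shard)
print(category("0xaa", ["0x01", "0x02"]))  # 3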
 
And this is where the downsides of state sharding come in. All shards in Zilliqa have access to the complete state. Yes, the state size (0.1 GB at the moment) grows and all of the nodes need to store it, but it also means they don't need to shop around for information available on other shards - which would require more communication and add more complexity. Links if you want to dig further (computer science and/or developer knowledge required):
Scilla - language grammar
Scilla - Foundations for Verifiable Decentralised Computations on a Blockchain
Gas Accounting
NUS x Zilliqa: Smart contract language workshop
 
Easier-to-follow links on programming Scilla:
https://learnscilla.com/home
Ivan on Tech
 
Roadmap / Zilliqa 2.0
 
There is no strictly defined roadmap, but here are the topics being worked on. And via the Zilliqa website there is also more information on the projects they are working on.
 
Business & Partnerships
 
It's not only technology at which Zilliqa seems to be excelling; their ecosystem has been expanding and is starting to grow rapidly. The project is on a mission to provide OpenFinance (OpFi) to the world, and Singapore is the right place to be due to its progressive regulations and futuristic thinking. Singapore has taken a proactive approach towards cryptocurrencies by introducing the Payment Services Act 2019 (PS Act). Among other things, the PS Act will regulate intermediaries dealing with certain cryptocurrencies, with a particular focus on consumer protection and anti-money laundering. It will also provide a stable regulatory licensing and operating framework for cryptocurrency entities, effectively covering all crypto businesses and exchanges based in Singapore. According to PwC, 82% of the surveyed executives in Singapore reported blockchain initiatives underway, and 13% of them have already brought those initiatives live to the market. There is also a growing list of organizations starting to provide digital payment services. Moreover, Singaporean blockchain developer Building Cities Beyond has recently created a $15 million innovation grant to encourage development on its ecosystem. This all suggests that Singapore is trying to position itself as (one of) the leading blockchain hubs in the world.
 
Zilliqa seems to already take advantage of this and recently helped launch Hg Exchange on their platform, together with financial institutions PhillipCapital, PrimePartners and Fundnel. Hg Exchange, which is now approved by the Monetary Authority of Singapore (MAS), uses smart contracts to represent digital assets. Through Hg Exchange financial institutions worldwide can use Zilliqa's safe-by-design smart contracts to enable the trading of private equities. For example, think of companies such as Grab, Airbnb, SpaceX that are not available for public trading right now. Hg Exchange will allow investors to buy shares of private companies & unicorns and capture their value before an IPO. Anquan, the main company behind Zilliqa, has also recently announced that they became a partner and shareholder in TEN31 Bank, which is a fully regulated bank allowing for tokenization of assets and is aiming to bridge the gap between conventional banking and the blockchain world. If STOs, the tokenization of assets, and equity trading will continue to increase, then Zilliqa’s public blockchain would be the ideal candidate due to its strategic positioning, partnerships, regulatory compliance and the technology that is being built on top of it.
 
What is also very encouraging is their focus on banking the un(der)banked. They are launching a stablecoin basket starting with XSGD. As many of you know, stablecoins are currently mostly used for trading. However, Zilliqa is actively trying to broaden the use case of stablecoins. I recommend everybody to read this text that Amrit Kumar (one of the co-founders) wrote. These stablecoins will be integrated into the traditional markets and bridge the gap between the crypto world and the traditional world. This could potentially revolutionize and legitimise the crypto space if retailers and companies, for example, start to use stablecoins for payments or remittances instead of them solely being used for trading.
 
Zilliqa also released their DeFi strategic roadmap (dating November 2019) which seems to be aligning well with their OpFi strategy. A non-custodial DEX is coming to Zilliqa made by Switcheo which allows cross-chain trading (atomic swaps) between ETH, EOS and ZIL based tokens. They also signed a Memorandum of Understanding for a (soon to be announced) USD stablecoin. And as Zilliqa is all about regulations and being compliant, I’m speculating on it to be a regulated USD stablecoin. Furthermore, XSGD is already created and visible on block explorer and XIDR (Indonesian Stablecoin) is also coming soon via StraitsX. Here also an overview of the Tech Stack for Financial Applications from September 2019. Further quoting Amrit Kumar on this:
 
There are two basic building blocks in DeFi/OpFi though: 1) stablecoins as you need a non-volatile currency to get access to this market and 2) a dex to be able to trade all these financial assets. The rest are built on top of these blocks.
 
So far, together with our partners and community, we have worked on developing these building blocks with XSGD as a stablecoin. We are working on bringing a USD-backed stablecoin as well. We will soon have a decentralised exchange developed by Switcheo. And with HGX going live, we are also venturing into the tokenization space. More to come in the future.”
 
Additionally, they have the ZILHive initiative that injects capital into projects. There have already been 6 waves of various teams working on infrastructure, innovation and research, and they are not only from ASEAN or Singapore but global: see the grantees' breakdown by country. Over 60 project teams from over 20 countries have contributed to Zilliqa's ecosystem. This includes individuals and teams developing wallets, explorers, developer toolkits, smart contract testing frameworks, dapps, etc. As some of you may know, Unstoppable Domains (UD) blew up when they launched on Zilliqa. UD aims to replace cryptocurrency addresses with human-readable names and allows for uncensorable websites. Zilliqa will probably be the only one able to handle all these transactions on-chain due to its ability to scale and its resulting low fees, which is why the UD team launched on Zilliqa in the first place. Furthermore, Zilliqa also has a strong emphasis on security, compliance, and privacy, which is why they partnered with companies like Elliptic, ChainSecurity (part of PwC Switzerland), and Incognito. Their sister company Aqilliz (Zilliqa spelled backwards) focuses on revolutionizing the digital advertising space and is doing interesting things like using Zilliqa to track outdoor digital ads with companies like Foodpanda.
 
Zilliqa is listed on nearly all major exchanges, has several different fiat gateways, and was recently added to Binance's margin trading and futures trading with really good volume. They also have a very impressive team with good credentials and experience. They don't just have "tech people": they have a mix of tech people, business people, marketeers, scientists, and more. Naturally, it's good to have a mix of people with different skill sets if you work in the crypto space.
 
Marketing & Community
 
Zilliqa has a very strong community. If you just follow their Twitter, their engagement is much higher than for other coins with approximately 80k followers. They have also been 'coin of the day' on LunarCrush many times. LunarCrush tracks real-time cryptocurrency value and social data, and according to their data it seems Zilliqa has a more fundamental and deeper understanding of marketing and community engagement than almost all other coins. While almost all coins have been a bit frozen in the last months, Zilliqa seems to be on its own bull run. It was somewhere in the 100s a few months ago and is currently ranked #46 on CoinGecko. Their official Telegram has over 20k people and is very active, and their community channel, which is over 7k now, is more active and larger than many other projects' official channels. Their local communities also seem to be growing.
 
Moreover, their community started 'Zillacracy' together with the Zilliqa core team (see www.zillacracy.com). It's a community-run initiative where people from all over the world are now helping with marketing and development on Zilliqa. Since its launch in February 2020 they have been doing a lot, and they will also run their own non-custodial seed node for staking. This seed node will allow them to start generating revenue and become a self-sustaining entity that could potentially scale up to a decentralized company working in parallel with the Zilliqa core team. Comparing it to all the other smart contract platforms (e.g. Cardano, EOS, Tezos etc.), they don't seem to have started a similar initiative (correct me if I'm wrong though). This suggests, in my opinion, that these other smart contract platforms do not fully understand how to utilize the 'power of the community'. This is something you cannot 'buy with money', and it puts many projects in the space at a disadvantage.
 
Zilliqa also released two social products called SocialPay and Zeeves. SocialPay allows users to earn ZILs while tweeting with a specific hashtag. They have recently used it in partnership with the Singapore Red Cross for a marketing campaign after their initial pilot program. It seems like a very valuable social product with a good use case. I can see a lot of traditional companies entering the space through this product, which they seem to suggest will happen. Tokenizing hashtags with smart contracts to get network effect is a very smart and innovative idea.
 
Regarding Zeeves, this is a tipping bot for Telegram. They already have 1000s of signups and they plan to keep upgrading it for more and more people to use (e.g. they recently added a quiz feature). They also use it during AMAs to reward people in real time. It's a very smart approach to growing their communities and getting people familiar with ZIL. I can see this becoming very big on Telegram. This tool suggests, again, that the Zilliqa team has a deeper understanding of what the crypto space and community need, and is good at finding the right innovative tools to grow and scale.
 
To be honest, I haven't covered everything (I'm also reaching the character limit haha). So many updates have been happening lately that it's hard to keep up, such as the International Monetary Fund mentioning Zilliqa in their report, custodial and non-custodial staking, Binance margin, futures, the widget, entering the Indian market, and more. The Head of Marketing, Colin Miles, has also released this as an overview of what is coming next. And last but not least, Vitalik Buterin has been mentioning Zilliqa lately, acknowledging the project and noting that both have a lot of room to grow. There is much more info of course, and a good part of it has been served to you on a silver platter. I invite you to continue researching by yourself :-) And if you have any comments or questions please post here!
submitted by haveyouheardaboutit to CryptoCurrency [link] [comments]

Solution Life - New payments solution

Solution Life is an open-source platform that enables the creation of peer-to-peer marketplace and e-commerce applications.
Solution Life aims at building a global sharing economy, allowing buyers and sellers of fractional goods and services (car sharing, service missions, home sharing, etc.) to transact on the open, distributed web. Using the Ethereum blockchain and the InterPlanetary File System (IPFS), the platform and its participants can interact peer-to-peer, allowing the creation and listing of services and goods without going through traditional middlemen. We plan to build a large-scale commercial network that will:
• Exchange financial value directly (listing, transaction and service fees), shifting it from big corporations like Airbnb, Craigslist, Postmates, ... to individual buyers and sellers
• Exchange financial value and strategic value (internal aggregation of customer and transaction data) from similar corporations to entire ecosystems
• Create new financial value for market participants who contribute to platform development (e.g. building new technology for the Solution Life platform, developing new vertical products and introducing new users and businesses)
• Build the open, distributed, and shared data layer to promote transparency and collaboration
• Allow buyers and sellers around the world to transact without difficulty converting currencies or paying tariffs
• Promote personal freedom by not allowing a corporation or central government to impose arbitrary and overly restrictive rules on business operation. To pursue these ambitious goals, we created the Solution Life Platform with programs that encourage technologists, businesses and consumers to build, contribute, and expand the ecosystem. We plan to build a broad collection of vertical industry applications (e.g. short-term vacation rentals, freelance software engineering, tutoring) built on Solution Life's standards and data sharing. At the time of writing, the Solution Life platform is in Mainnet Beta. Platform Version 1.0 is expected to be activated in Q3 2020. While the majority of engineering work is being done by the core engineering team, we expect future development after the launch of Platform 1.0 to come from open-source community members. Together, we will create the Internet economy of the future.
Details of Whitepaper:
• Why is a new model of peer-to-peer trading necessary?
• Benefits proposed on the Solution Life Platform
• Product strategy, main features and technical overview
• Overview of the Solution Life team and community
BACKGROUND
Since the appearance of the Internet, digital marketplaces have connected buyers and sellers of goods and services, enabling transactions that could never have happened before. Craigslist launched in 1995 and dominated local and regional commerce for many years. At the same time, eBay began to grow and created a whole new category of auction-based sales, a more efficient way of doing market business. Through 20 years of rapid change, many Internet marketplace businesses, both B2C and B2B, have developed strongly. Currently, sharing economy marketplaces such as Airbnb, Uber, Getaround, Fiverr and TaskRabbit have been very successful in matching buyers and sellers in the sharing economy. Now, fractional-use assets can be sold as easily as atomic items, and people around the world are exchanging their excess inventory, time, and skills for profit. New markets, including the gig economy, the service sector and fractional asset use, are particularly suitable bases for peer-to-peer systems built on blockchain.
Most sharing economy businesses have some things in common. Firstly, as a collection, these companies have made a big impact on the world. Consumers in these marketplaces have been able to improve their lives with access to products and services they didn't have before, and vendors have used these platforms to reach customers on a larger scale and more easily than before. Each marketplace creates a "private home" for consumers and suppliers to transact in, creating liquidity for that market.
Secondly, most sharing economy businesses follow the same growth cycle. With few exceptions, these famous marketplaces were difficult to launch and grow. Marketplace businesses often have to start with millions of dollars, and in the cases of Uber and Airbnb, billions of dollars, to scale. That is also why these businesses suffered serious losses in their early days: in effect, the corporation subsidizes the use of the marketplace for its users. However, thanks to very strong cross-network effects, successful marketplace businesses can grow revenue exponentially over time, usually by charging a fee per transaction on the network. Network-effect businesses, such as sharing economy marketplaces, tend to be winner-take-all at growth stage, capturing a disproportionate share of the network's value for the corporation's management and shareholders. In many ways, they become the only dictator at the scale they achieve.
Finally, although there are huge differences in user experience, business mechanics, and vertical-specific features among Internet marketplace companies, they all rebuild many of the same parts over and over. Lyft, Postmates, and DoorDash have each designed their own solutions for user and supplier profiles, shopping experiences, matching algorithms, reviews, and ratings. On the one hand this is valuable proprietary technology; on the other, rebuilding the same pieces for every new vertical marketplace wastes time and effort. Consumers likewise create and manage dozens of accounts across these marketplace businesses, each holding its own copy of their personal data and transaction history.
In the last few years, blockchain technology innovators and investors have called for teams to build peer-to-peer versions of businesses in the current sharing economy and to make Internet commerce more efficient. "P2P lodging sites like Airbnb have already begun to transform the lodging industry by making a public market in private housing. However, adoption may be limited by concerns about safety and security (guests) and property damage (hosts). By enabling a secure, tamper-proof system for managing digital credentials and reputation, we believe blockchain could help accelerate the adoption of P2P lodging and generate..." - Goldman Sachs Research (Blockchain: Putting Theory into Practice). Don Tapscott, the author of the "Blockchain Revolution", said that Bitcoin-based technology could be used to take on the likes of Uber and Airbnb. - The Wall Street Journal. "It is difficult for middle parties to achieve sustainable growth in business," [Fritz Joussen] said. "These platforms [tourism middle parties] build accessibility by spending billions of dollars on advertising, and then they generate exclusive profits based on what they have with sales and marketing. They provide great sales and marketing services. Booking.com is a big brand but they make outstanding profits because they own proprietary structures. Blockchain will destroy this." - Skift. However, most of the infrastructure and underlying systems for building distributed-market applications did not exist before Solution Life was born. We aim to address the shortcomings of current marketplace companies, and we are happy to have launched the Solution Life Platform, which opens up peer-to-peer commerce at scale.
ACTIVATE THE OVER THE COUNTER MARKET
Our vision is to build and develop a free exchange of services on the new Internet. To do this, we have to replicate most, if not all, of the functionality of a third-party intermediary on the blockchain and other distributed systems. This is an ambitious and technologically challenging goal, but we have already completed important milestones that demonstrate our technology and the real-world applications of the project. The Solution Life platform has 3 main elements, all of them open source:
• Solution Life enabled end user applications
• Solution Life platform for developers
• Solution Life's application protocol
Solution Life enables end-user applications: the Solution Life flagship marketplace app is our consumer marketplace product that allows buyers and sellers on the network to do business. It is available today on the web at shopSolution Life.com and on both iOS and Android mobile devices.
Summary
For the past two decades, Internet marketplaces and e-commerce stores have changed the way that buyers and sellers connect, creating new opportunities for the exchange of goods and services. However, these marketplaces have always been governed by centralized companies that maintain their individual monopolies on data, transaction and other service fees, and ultimately, user choice. With blockchain and other distributed technologies beginning to hit the mainstream, the world is poised for a new wave of decentralized commerce. SLC is bringing change and innovation to the global peer-to-peer economy. We're excited by the opportunity to lower fees, increase innovation, free customer and transaction data, and decrease censorship and unnecessary regulation. We are building a platform that invites other interested parties including developers and entrepreneurs to build this technology and community with us, altogether working to create the peer-to-peer economy of tomorrow. We hope you’ll join us on this exciting journey.
TOKEN SOLUTION LIFE (SLC)
The Solution Life Token (also known as SLC) is a utility token that serves multiple purposes in ensuring the health and growth of the network. The ERC20 contract is live on the Ethereum network today at:
0x4d44D6c288b7f32fF676a4b2DAfD625992f8Ffbd.
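For the curious, a minimal web3.py sketch for reading basic data from that contract (it assumes a standard ERC20 ABI and your own RPC endpoint; the URL below is a placeholder):

from web3 import Web3

# Minimal ERC20 ABI fragment: only the two read-only functions called below.
ERC20_ABI = [
    {"name": "totalSupply", "inputs": [], "outputs": [{"type": "uint256"}],
     "stateMutability": "view", "type": "function"},
    {"name": "symbol", "inputs": [], "outputs": [{"type": "string"}],
     "stateMutability": "view", "type": "function"},
]

w3 = Web3(Web3.HTTPProvider("https://YOUR-ETH-RPC-ENDPOINT"))  # placeholder endpoint
slc = w3.eth.contract(
    address=Web3.to_checksum_address("0x4d44D6c288b7f32fF676a4b2DAfD625992f8Ffbd"),
    abi=ERC20_ABI,
)
print(slc.functions.symbol().call())       # token symbol reported on-chain
print(slc.functions.totalSupply().call())  # raw total supply (base units)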
At a high level, this token is intended to serve a number of key functions on the platform. First, the SLC is a multi-purpose incentive token intended to drive the behavior of end users, developers, market operators, and other ecosystem participants. Additionally, the SLC is an exchange intermediary that can be used for payments between buyers and sellers on the platform. Ultimately, it is intended that SLC will play a vital part in future network governance. Since November 2020, the Solution Life token has been used to encourage various forms of participation from the platform's ecosystem participants: the token rewards users, developers, marketplace operators and/or other participants for performing activities and services conducive to platform development.
Solution Life Rewards
Solution Life Rewards is an incentive program targeted at end users on the Platform. Buyers and sellers on the platform have been able to earn SLC since our inaugural Solution Life Rewards campaign in November 2020. Solution Life Rewards enables everyone to have a stake in the network; we've intentionally designed the program so that even novice, non-technical users can participate. With Solution Life Rewards, users can earn SLC for account creation and identity verification. One of the best ways to grow the network is through referrals, so end users can also earn tokens by inviting new users; this creates more confidence between buyers and sellers. Users can also earn SLC by following Solution Life's social networking sites or promoting project news on public channels.
To encourage trading volume on the Solution Life platform, we also offer a refund mechanism for users who purchase from reputable sellers on our network.
Solution Life Commissions
Encouraging marketplace developers and managers to use the Solution Life platform is essential, so we launched an advertising and promotion program, creating an integrated business model for the decentralized marketplaces running on Solution Life. Merchants on Solution Life apps can promote their listings using SLC for greater visibility in search and browse results on our preferred and partner apps; the only way to join this program is to pay with SLC. When a merchant creates a listing, they can add a commission paid in SLC to their listing. This SLC is placed in escrow in the Marketplace Smart Contract, as sketched below.
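The marketplace contract's interface is not published here, but the escrow step described above matches the common ERC20 approve-then-pull pattern. The sketch below continues from the previous snippet; the marketplace address and the createListing call are hypothetical stand-ins for illustration only:

```python
# Extend the ABI from the previous sketch with the standard ERC20 approve entry.
ERC20_ABI.append(
    {"name": "approve",
     "inputs": [{"name": "spender", "type": "address"},
                {"name": "value", "type": "uint256"}],
     "outputs": [{"type": "bool"}], "stateMutability": "nonpayable", "type": "function"},
)
slc = w3.eth.contract(address=Web3.to_checksum_address(SLC_ADDRESS), abi=ERC20_ABI)

MARKETPLACE_ADDRESS = Web3.to_checksum_address("0x" + "11" * 20)  # placeholder, not the real contract
merchant = w3.eth.accounts[0]   # assumes an unlocked local-node account
commission = 50 * 10 ** 18      # e.g. 50 SLC, assuming 18 decimals

# 1. The merchant authorizes the marketplace contract to pull the commission
#    (approve is part of the ERC20 standard).
slc.functions.approve(MARKETPLACE_ADDRESS, commission).transact({"from": merchant})

# 2. Creating the listing would then move the SLC into escrow inside the
#    marketplace contract via transferFrom. The call below uses an invented
#    function name, since the real ABI is not given in this document:
# marketplace.functions.createListing(listing_id, commission).transact({"from": merchant})
```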
submitted by slctoken to u/slctoken [link] [comments]

Related videos:
• WHAT IS BITCOIN? - YouTube
• Cryptocurrency Mining Difficulty Explained - Mining Difficulty And Analysis
• How did we get to the current number of bitcoins in circulation? - George Levy
• Bitcoin Key Levels
• The CryptoCurrency Mining Difficulty Log, Feb 8 2020: Bitcoin, Ethereum, LiteCoin, Monero, Eth Classic

Bitcoin Cash difficulty historical chart: average mining difficulty per day is 385.174 G (latest quotes: BCH/USD 268.141 on HitBTC, BCH/BTC 0.020614 on Binance).

According to bitinfocharts.com, Bitcoin network difficulty is currently at 13.8 T, its highest level ever. On Tuesday it will be increased around 8% to 15 T, which will make it even harder to validate new blocks. This is notable because it is a large adjustment compared with the 1-2% it usually changes by.

This example calculation gives a Bitcoin difficulty of 600,000; the real difficulty was at approximately this value in mid-2011. To predict the next difficulty, the Bitcoin client estimates when the current 2016-block window will be fully mined ("next retarget in days") and scales the difficulty accordingly.

Maximum, current, and minimum difficulty: the current difficulty can be queried with the Bitcoin command-line call getdifficulty. Because the target cannot go below 1 (a target of 0 would make the ratio infinitely large), the maximum difficulty can only be stated approximately, as maximum_target / 1, an inconceivably large number (~2^224).

Difficulty is the mechanism that regulates how long it takes miners to add new blocks of transactions to the blockchain. The difficulty value updates every two weeks (2016 blocks) to ensure that adding a new block takes 10 minutes on average.
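To make the arithmetic above concrete, here is a small Python sketch of the standard difficulty calculation: difficulty is the ratio of the difficulty-1 (maximum) target to the current target under the well-known "bdiff" convention, which is why a minimum target of 1 yields the ~2^224 ceiling just mentioned. This is generic Bitcoin math, not code from any project cited here:

```python
import math

# Difficulty-1 target under the common "bdiff" convention: 0xFFFF * 2**208.
MAX_TARGET = 0xFFFF * 2 ** 208

def difficulty_from_target(target: int) -> float:
    """difficulty = maximum_target / current_target"""
    return MAX_TARGET / target

# A target of 1 gives the theoretical maximum difficulty; a target of 0
# would divide by zero, which is why it is excluded above.
print(f"max difficulty = 2**{math.log2(difficulty_from_target(1)):.2f}")  # ~2**224
```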


WHAT IS BITCOIN? - YouTube

What is bitcoin? Everything you need to know about bitcoin. Learn about new coins that you can invest in: https://www.patreon.com/cryptogen More info at www.b...

#XRP #BitCoin #Cryptocurrency Welcome to the 18th episode of CCMDL, February 24 2020. We talk a little about the network difficulty and hashrates of Ethereum, Bitcoin, Monero, LiteCoin ...

A chart showing bitcoin mining difficulty changes over time. Bitcoin is the currency of the future & Genesis Mining is the largest cloud mining company on the market. How to buy a pack in OneCoin ...

https://claymore-dual.github.io/diffi... The network automatically adjusts the difficulty level for Bitcoin mining so that miners discover a new block every 10 minutes (600 seconds).

The difficulty level, which only adjusts every two weeks, dropped by 16%, the second biggest drop in Bitcoin's history. At current Bitcoin prices of ~$6,000 we do have a problem. Mining is already ...
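The 16% drop quoted above follows directly from the retarget rule: every 2016 blocks the difficulty is rescaled so that blocks again average 600 seconds. A back-of-the-envelope sketch, using an assumed elapsed time rather than real chain data:

```python
TARGET_SPACING = 600                 # seconds per block
WINDOW = 2016                        # blocks per retarget period
EXPECTED = WINDOW * TARGET_SPACING   # ~2 weeks in seconds

def next_difficulty(current: float, actual_seconds: int) -> float:
    # Bitcoin clamps each adjustment to a factor of 4 in either direction.
    ratio = max(0.25, min(4.0, EXPECTED / actual_seconds))
    return current * ratio

# If the last 2016 blocks took ~19% longer than two weeks, difficulty
# falls ~16%, in line with the drop described above (illustrative numbers).
print(next_difficulty(15e12, int(EXPECTED * 1.19)))
```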
