October 2017

Inadvertently Appointing Digital Judges? A Canadian Perspective on Restricting Speech and Social Media

By Karen Eltis.

On the heels of Charlottesville, questions respecting platforms’ responsibility for hateful content abound. Presumably recognizing that the internet has effectively undermined the formerly sacred “marketplace of ideas” paradigm, Facebook and similar platforms have now committed to suppressing “pernicious form[s] of harassment” and so-called “fake news” sites. And so it came to be that private American companies are reluctantly but surely stepping into the role of international arbiters of free expression, increasingly employing artificial intelligence to contain content that is potentially “offensive” to advertisers or otherwise disturbing.

Without a doubt, we are witness to a culture of instantaneous sharing, enabling anyone to communicate random thoughts potentially with everyone worldwide, without any ability to correct, retract, control or contextualize subsequent dissemination. They may further – to a certain degree – do so anonymously. What is more, a culture of this ilk may, in Cass Sunstein’s words, create a “situation in which thousands or perhaps millions or even tens of millions of people are mainly listening to louder echoes of their own voices.” This refers to the confirming and reinforcing of biases and half-truths on issues as diverse as climate change and vaccination.

The Internet has certainly and laudably democratized expression, shifting the narrative from freedom from government muzzling to freedom to express oneself to a far broader audience, irrespective of financial means or social status. Prior to the digital age, only the moneyed or otherwise influential could dream of reaching a significant audience, thereby arguably emasculating any purposive construction of free expression enshrined in a constitution prizing equality. The internet has become the proverbial “soapbox” of the 21st century, providing everyone and anyone with what may be deemed an optimal platform for expression. But again, unlike their predecessors, these new media sources are subject to little, if any, oversight, scrutiny or accountability (save AI, it now appears).

In contradistinction to the institutional press, which typically boasts built-in safeguards (editorial oversight and a civil, if not purportedly neutral, tone), posters and tweeters can share information in a manner that radically compounds the difficulties inherent in traditional defamation, including punitive shaming. This may ultimately force us to recalibrate the balance between free speech – regarded as the “very life blood of our freedom and free institutions” – and competing considerations.

It is indeed the medium's very structure that tends to bestow the appearance of legitimacy and veracity on even the most mendacious of racist sites, in the absence of gatekeepers or other traditional controls. Therefore, as a medium, it may help legitimate the most pernicious forms of hate and incitement, if only due to the arduous task of distinguishing between reliable, authoritative cyber sources, and those peddling racism and fabrications under the unique guise of respectability imparted by the net environment.

The post-war human rights legal framework, which includes Canada’s Charter of Rights, was devised with government action in mind, as a bulwark against government abuses. Yet “government action,” however generously interpreted, requires at least a measure of just such abuse.

It is apparent, however – and increasingly so of late –  that the power to infringe on constitutional values, however inadvertently, including (but not limited to) freedom of expression and privacy, or due process as presumably in the Loomis case, does not lie principally – or at the very least exclusively – with the state. Rather, in a post-Costeja (ECJ Right to be Forgotten decision) world, private parties, namely “data controllers” with global influence, using Artificial Intelligence, have become the unwitting arbiters of the global public interest, a role they presumably neither covet nor properly fit, especially in the absence of guidance or criteria.

Facebook and other platforms are thus incomprehensibly saddled with the gargantuan task of determining how to “balance the need for transparency with the need to protect people’s identities.” This inevitably leads to ad hoc approaches by these companies. In addition, transparency and accountability are notoriously difficult to cultivate when balancing delicate constitutional values, such as freedom of expression and privacy. Even the constitutional courts and policy makers who typically perform this balancing struggle with it, as for example in the context of the controversy associated with so-called “judicial activism.” This difficulty soars when the balancers are instead inexperienced and reticent corporate actors, who presumably lack the requisite public legitimacy for such matters, especially when dealing with foreign (non-U.S.) nationals.

This leads in turn to absurd results, such as the suppression by Facebook’s algorithms of the iconic 1972 photograph of a Vietnamese-Canadian child war victim, subject only to the immediate oversight of outsourced corporate actors abroad. Another example is PayPal’s algorithm marginalizing cookbooks featuring the terms “Syria” or “Cuba” in a misguided effort to comply with security regulations.

Mindful of the above, the notion of “government action” might be purposefully revisited with a view to its adaptation to the exigencies of this digital age. If this does not take place, the ultimate arbiters of the proper limits on fundamental rights may be algorithms or other forms of artificial intelligence deployed by platforms that can be assumed to lack judicial training, not to mention cross-border accountability. As we struggle with delineating the proper limits on speech in a post-Charlottesville world, let us be cognizant of the importance of maintaining courts’ oversight of constitutional values and the proper limits on expression, rather than letting them become the province of the unknown.

 


SPECTRE: Serialization of Proof-of-Work Events, Confirming Transactions via Recursive Elections

By: Yoad Lewenberg, Yonatan Sompolinsky and Aviv Zohar.

A Fast and Scalable Distributed Ledger Protocol

It is by now well known that the classic Bitcoin protocol laid down by Satoshi Nakamoto doesn’t remain secure when block sizes are increased or block creation rates are sped up. This has caused quite a lot of turmoil in the Bitcoin community, as heated debates began regarding the best ways to scale the protocol.

In a new paper, we describe a DAG-based protocol for a permissionless distributed ledger that achieves high throughput and fast confirmation times while maintaining resilience to attackers holding up to 50% of the hash rate.

The implications are quite dramatic: it is possible to securely set the block creation rate in the protocol to be very high, perhaps several blocks per second. This results in:

  • Confirmation times of several seconds
  • High throughput (on-chain scalability)
  • Miner rewards with low variance

We hope that this contribution will complement other efforts (such as off-chain transaction channels) and will enable cryptocurrencies to scale. The purpose of this post is to explain the general idea behind the protocol in the hope of making it more accessible. The full details, including some lengthy proofs of correctness, appear in the full online version.

The basic properties of SPECTRE

SPECTRE guarantees two main properties with regard to transactions:

  • Weak Liveness: Transactions issued by an honest user gain fast confirmations and are quickly “accepted”.
  • Safety: Once a transaction is “accepted” by a recipient it is unlikely to be double spent or reversed.

Both properties hold assuming the attacker holds less than 50% of the hash rate. Just as with the Bitcoin protocol, the probability of a successful double spend decreases exponentially with each block that is built above the transaction. The difference here is that blocks in SPECTRE are created extremely fast.
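
For reference, the exponential decay in Bitcoin is the one estimated in Nakamoto’s whitepaper; here is a minimal Python sketch of that classic calculation (this models Bitcoin’s confirmation security only, as an illustration of the decay; SPECTRE’s own risk analysis is worked out in the paper):

    from math import exp, factorial

    def bitcoin_double_spend_prob(q, z):
        # Nakamoto's whitepaper estimate: probability that an attacker with a
        # fraction q of the hash rate eventually overtakes a transaction that
        # is buried under z confirmations.
        p = 1.0 - q
        lam = z * (q / p)                  # expected attacker progress
        caught_up = 0.0
        for k in range(z + 1):
            poisson = exp(-lam) * lam ** k / factorial(k)
            caught_up += poisson * (1.0 - (q / p) ** (z - k))
        return 1.0 - caught_up

    # e.g. bitcoin_double_spend_prob(0.30, 10) is roughly 0.04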

The weak liveness property above only applies to honest users: a malicious user that double spends may have his transactions delayed indefinitely — neither accepted nor rejected, but rather in an indefinite “pending” state. The recipients of the double spend will not consider the money received (and are therefore safe from any fraud).

To give a feel for the numbers: let’s assume that it takes around 5 seconds to relay blocks to the vast majority of miners, and the block creation rate is set to 10 blocks per second. If the merchant is willing to tolerate a 1% risk of transaction reversal by an attacker with 10% of the hash rate, he can accept the transaction after approximately 10 seconds. Compare that to Bitcoin: to receive a similar level of security from a 10% attacker, the merchant would need over 7 confirmations, which would take roughly 70 minutes to obtain.

Here are some simulation results for the time till transactions become irreversible (from a global oracle’s point of view). Epsilon is the probability of transaction reversal by an attacker with 25% of the hash rate (in this example) and the block creation rate is 10 blocks per second.

[Figure 1: simulated time until transactions become irreversible, for several values of epsilon]

From block chains to block DAGs

The main problem that arises when block creation rates are increased in Bitcoin, is that blocks are often created in parallel without referencing each other. Such blocks are considered conflicting, and are “thrown away” by Bitcoin’s protocol if they are not on the longest chain.

While miners in “classic” Bitcoin create blocks that contain the hash of a single parent block and thereby extend a single chain, SPECTRE naturally incorporates blocks that were created in parallel. Under SPECTRE, each block may reference several predecessors. The resulting structure is a Directed Acyclic Graph (DAG) of blocks. For example, you may get something like this:

[Figure 2]

A Block DAG. Each block may refer to several predecessor blocks by including their hash in its header.

Notice that the arrows in the figure above (which correspond to hash referrals) point backward in time. The block DAG actually represents a “causal relation”: if block x included the hash of block y, then x must have been created after y.

Miners in SPECTRE are asked to include a reference to all “tips” of the block DAG that they are aware of, that is, to all blocks that have not been extended thus far. A block is only considered valid if all of its predecessors are valid, and known by the node (thus, if block x references some block y, there is no need to reference y’s predecessors as well — this is implied).

As in Bitcoin, miners are required to broadcast blocks they create or receive as quickly as possible, but we do not assume that all miners share the exact same view of the DAG at all times (some recent blocks may not be known to all).
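
To make the structure concrete, here is a minimal Python sketch of such a block DAG (a toy illustration, not the implementation used in the paper): each block records the parents it references, and a block’s predecessors, its referencing blocks, and the current tips all follow from those references.

    class BlockDAG:
        # Toy block DAG: each block records the set of parent blocks it references.
        def __init__(self):
            self.parents = {}                       # block id -> set of parent ids

        def add_block(self, block_id, parent_ids):
            # A block is valid only if all referenced predecessors are already known.
            assert all(p in self.parents for p in parent_ids), "unknown parent"
            self.parents[block_id] = set(parent_ids)

        @property
        def blocks(self):
            return set(self.parents)

        def past(self, block_id):
            # All blocks reachable by following parent references (direct or indirect).
            seen, stack = set(), list(self.parents[block_id])
            while stack:
                b = stack.pop()
                if b not in seen:
                    seen.add(b)
                    stack.extend(self.parents[b])
            return seen

        def future(self, block_id):
            # All blocks that reference block_id, directly or indirectly.
            return {b for b in self.parents if block_id in self.past(b)}

        def tips(self):
            # Blocks not yet referenced by anyone: what a new miner should point to.
            referenced = set().union(*self.parents.values()) if self.parents else set()
            return self.blocks - referenced

    dag = BlockDAG()
    dag.add_block("genesis", [])
    dag.add_block("a", ["genesis"])
    dag.add_block("b", ["genesis"])       # mined in parallel with "a"
    dag.add_block("c", ["a", "b"])        # references both tips
    assert dag.tips() == {"c"}
    assert dag.past("c") == {"genesis", "a", "b"}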

Deciding between conflicting transactions

Blocks in SPECTRE may contain conflicting transactions (after all, they are often created in parallel without miners knowing about each other’s work). The heart of the protocol is the way it constructs the set of accepted transactions.

Step 1: For each pair of blocks x, y determine whether “x defeats y” (denoted x<y) or “y defeats x” (denoted y<x).

Step 2: Output a set of accepted transactions. A transaction embedded in block x is considered “accepted” if block x defeats all blocks that contain conflicting transactions, and all of the transaction’s inputs were themselves accepted (a naive sketch of this step appears after the comments below).

A few comments before we continue:

  • Do not let the notation < in Step 1 confuse you into thinking that the “smaller” block loses: in this context, x<y means that x comes before y in the order of precedence, and x’s transactions will be accepted over conflicting ones in y.
  • The fact that block x defeats block y will only matter if they contain conflicting transactions. Transactions without conflicts will always be accepted.
  • If block x is a predecessor of block y (either directly or indirectly) we will always have x<y.
  • Unfortunately, the “defeat” relation (that we will define more precisely in a bit) may be cyclical. That is, it is possible that x<y, y<z, z<x all at the same time (just like in a game of rock-paper-scissors where each strategy is defeated by another). Luckily, this fact does not prevent us from reasoning about transactions.
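
To illustrate Step 2 on its own, here is a naive Python sketch. It assumes the Step 1 relation is supplied as a two-argument predicate defeats(x, y) (for example, a closure over the DAG built from the voting procedure below), and it uses hypothetical bookkeeping maps (block_of, conflicts, inputs) that are not part of the protocol description:

    def is_accepted(tx, block_of, conflicts, inputs, defeats):
        # Step 2 (naive sketch): tx is accepted iff its containing block defeats every
        # block holding a conflicting transaction, and everything tx spends is accepted.
        x = block_of[tx]
        if any(not defeats(x, block_of[rival]) for rival in conflicts.get(tx, ())):
            return False
        return all(is_accepted(parent_tx, block_of, conflicts, inputs, defeats)
                   for parent_tx in inputs.get(tx, ()))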

The core of the protocol is establishing the “defeat” relation in Step 1 above, given some block DAG. This is done via a voting procedure.

Pairwise voting in SPECTRE

Given two blocks x,y we decide whether x<y or y<x by taking a vote. The voters are all blocks in the DAG. The vote of each block is not explicitly written in the block, but is rather deduced via the structure of the DAG.

The past of a block is the set of all blocks that it references (directly or indirectly). Similarly, the future of a block is the set of all blocks that reference it.

Block z votes as follows (a naive sketch of these rules appears right after the list):

  • If x is in z’s past but y is not, then z votes x<y and vice versa.
  • If both x and y are in the past of z, then z votes according to the majority of votes in its past (a recursive call for the vote in the DAG past(z)).
  • If neither x nor y is in the past of z, then z votes according to the majority of blocks in its future.
  • Block x always votes for itself relative to any other block that isn’t in its past.
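
Taken literally, the four rules give a naive (and exponentially slow) procedure. A minimal Python sketch, reusing the toy BlockDAG above, might look as follows; the paper’s actual algorithm is far more careful and efficient, and tie-breaking is omitted here:

    def sign(n):
        return (n > 0) - (n < 0)

    def vote(z, x, y, dag, context=None):
        # Vote of block z on the pair (x, y): -1 means z votes x<y, +1 means y<x, 0 is a tie.
        if context is None:
            context = dag.blocks
        past_z = dag.past(z) & context
        cone = past_z | {z}                  # z together with everything it references
        x_in, y_in = x in cone, y in cone
        if x_in and not y_in:
            return -1                        # rule 1 (and rule 4: a block votes for itself)
        if y_in and not x_in:
            return +1
        if x_in and y_in:
            # rule 2: follow the majority vote inside the sub-DAG past(z)
            return sign(sum(vote(w, x, y, dag, past_z) for w in past_z))
        # rule 3: z knows neither block, so it follows the majority of its future
        future_z = {w for w in context if z in dag.past(w)}
        return sign(sum(vote(w, x, y, dag, context) for w in future_z))

    def defeats(x, y, dag):
        # Aggregate ("virtual block") vote over the whole DAG: does x defeat y?
        return sum(vote(z, x, y, dag, dag.blocks) for z in dag.blocks) < 0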

Here is an example of the voting procedure for one such pair:

[Figure 3]

An example of the voting procedure in the DAG for blocks x,y.

  • Blocks x,6-8 all vote x<y (block x is in their past, but y isn’t).
  • Similarly, blocks y, 9-11 vote y<x.
  • Blocks 1-5 vote x<y. This is because they see more x<y voters in their future than y<x voters.
  • Block 12 votes according to a recursive call on the DAG that does not contain blocks 10,11,12 (in this case the recursive call has each block voting exactly the same, but it is not necessarily true for all DAGs).

Another useful observation is that the voting procedure used in SPECTRE matches the longest-chain rule when it is applied to simple chains: each of the blocks in the longer chain defeats all blocks in the shorter one.

[Figure 4]

SPECTRE coincides with the longest chain rule on simple chains. Each of the blocks in the longest chain (5–8) would defeat each of the blocks in the shorter one (9–11). To see this consider, for example, the vote between blocks 6 and 10. It is easy to check that blocks 5–8 vote in favor of block 6, and blocks 9–11 vote in favor of block 10. Blocks 1–4 would then be convinced by the majority of blocks in their future (i.e., by the longest chain) to vote in favor of block 6.
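
With the toy sketches above, the example in the figure can be reproduced on a simplified DAG (here the two chains fork directly off a single genesis block rather than off blocks 1–4):

    dag = BlockDAG()
    dag.add_block("g", [])
    # a longer chain g <- 5 <- 6 <- 7 <- 8 and a shorter fork g <- 9 <- 10 <- 11
    for parent, block in [("g", "5"), ("5", "6"), ("6", "7"), ("7", "8"),
                          ("g", "9"), ("9", "10"), ("10", "11")]:
        dag.add_block(block, [parent])

    print(defeats("6", "10", dag))    # True: the block on the longer chain wins
    print(defeats("10", "6", dag))    # False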

A robust acceptance policy

In general, it isn’t enough to see a transaction accepted by the protocol. What we need most of all is a guarantee that it will remain accepted forever, with high probability. SPECTRE includes a procedure that analyzes the block DAG and outputs the safety level of each transaction. SPECTRE’s acceptance policy is quite technical, but it relies on principles similar to the voting procedure above. Robustness is achieved thanks to the fact that new blocks vote according to the majority of their predecessors: the result of the vote becomes stronger as each new block supports the majority.

FAQ

Here are a few answers to questions that are likely to come up:

Mining rewards: In SPECTRE all valid blocks receive the same block reward. The reward is reduced if the block was mined according to a lower difficulty than the current one. The schedule of money creation can be decided by the designers of the cryptocurrency.

Transaction fees: The fee from a transaction is given to the block that contained it. In case several blocks contain the same transaction, the allocation of the transaction’s fee to different miners is considered a conflicting transaction (separate from the transfer itself). Accordingly, the block that defeats the others wins the reward exclusively. Miners can agree on a division of the fees early in case resolution of the conflict is delayed (see further details in the paper).

Difficulty adjustments: As in Bitcoin, we must limit the number of blocks created per second. The main bottleneck of the system is the bandwidth of nodes. Accordingly, a reasonable setup would be to aim at some given bandwidth, e.g., 1 MB per second. This can be achieved, for example, by creating 10 blocks of 100 KB each second. The precise way that the difficulty is calculated is defined in the paper.

Efficient implementation: At first glance it seems that SPECTRE involves a great deal of computation: there are many pairs of blocks that need to be compared, and naively each vote requires determining how every block would vote. Fortunately, it can be shown that the decision for each vote can be computed efficiently, and that relatively few pairs of blocks ever need to be compared. We hope to release our efficient implementation of the core of the protocol soon.

Increased throughput: In principle, blocks that are created in parallel may include the same transactions, which could mean very little net increase in throughput. However, a previous paper that we wrote on inclusive blockchain protocols analyzes the incentives of miners in a similar scenario. Miners are in fact incentivized to select transactions at random from their mempool. If the mempool is large enough (and it will be if throughput is constrained), then few collisions between transactions in parallel blocks are expected to occur.
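
As a rough illustration of that incentive argument, here is a small Monte Carlo sketch (the parameters are made up for the example) estimating how many of the transaction slots in parallel blocks end up holding distinct transactions when miners sample uniformly at random from a large mempool:

    import random

    def unique_fraction(mempool_size, block_size, parallel_blocks, trials=200):
        # Fraction of transaction slots across the parallel blocks that end up
        # holding distinct transactions, when each block samples uniformly at random.
        total_unique = total_slots = 0
        for _ in range(trials):
            picks = [random.sample(range(mempool_size), block_size)
                     for _ in range(parallel_blocks)]
            total_unique += len(set().union(*picks))
            total_slots += block_size * parallel_blocks
        return total_unique / total_slots

    # e.g. unique_fraction(100_000, 1_000, 10) is roughly 0.96: few collisions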

Click here to watch our Youtube video!

 
