In his keynote address at the RSA Conference on Feb. 14, 2017, Microsoft President Brad Smith described his vision for a more peaceful cyberspace.
Smith articulated an arrangement in which governments would work together with private tech companies to tackle the growing threat of state-sponsored cyberattacks, as steady increases in their frequency, sophistication and cost demand. Encouraged by positive indications in the international legal and diplomatic arena, Smith proposed a program based on three pillars.
First, governments—led by the American and Russian presidents—would agree to a “digital Geneva Convention,” forswearing cyberattacks on both the private sector and civilian infrastructure. Second, a new international regulatory agency would recruit experts from academia, industry and government to investigate and identify nation-states violating the convention—a sort of International Atomic Energy Agency for the cyber realm. Third, key tech companies would jointly assume a function like that of the Red Cross, playing “100 percent defense and zero percent offense,” protecting any state under attack and rejecting any government’s request to aid in attacking anyone.
I and others saw Smith’s proposal as a refreshing and encouraging sign of new thinking about how to make our new digital world safer.
The ransom screen from an unknown version of the Petya ransomware attack that swept the world in 2017. The U.S. government believes nation-states are behind some of the attacks. (Credit: Wikimedia)
More than a year later, Microsoft led 33 technology and cybersecurity companies, most of them based in the U.S., in signing the Cybersecurity Tech Accord (CSTA), an agreement that harkens back to Smith’s “digital Geneva Convention.” It includes four declarative principles:
Protecting all customers everywhere;
Opposing cyberattacks on innocent civilians and enterprises from any source;
Empowering customers and developers to strengthen their cybersecurity protections;
Partnering with each other and other like-minded entities to enhance cybersecurity.
Smith approves of the CSTA, describing it as “the first step in creating a safer internet,” which must come from “the enterprises that create and operate the world’s online technologies and infrastructure.” Still, some commentators have criticized the accord, focusing on what they see as the impracticalities of its principles and on the absence of Amazon, Apple, Google and other major tech companies (Facebook being the notable exception) from the list of signatories.
Developments since the 2017 RSA conference do not paint an optimistic picture.
Some of the positive signs that Smith identified in his speech have since vanished. For example, the United Nations Group of Governmental Experts reached a dead end in June 2017, failing to affirm the full application of international law, or at least its main principles, in cyberspace. Likewise, the Tallinn Manuals have not yet had a significant impact on state practice, as a new research paper surveying all major cyber operations conducted over the last five years shows (Dan Efrony and Yuval Shany, “A Rule Book on the Shelf? Tallinn Manual 2.0 on Cyber Operations and Subsequent State Practice,” forthcoming).
Furthermore, cyberattacks conducted or sponsored by nation-states haven’t stopped—or even slowed. On the contrary, the world has been rocked by unprecedented global cyberattacks, including the WannaCry attack, which hit computer systems in almost 150 countries simultaneously, and the NotPetya attack, which harmed civilian institutions and companies in 60 countries. Both attacks appear to have been executed by nation-states, and both caused enormous damage to civilian populations, corporations and institutions all over the globe.
On March 15, the New York Times reported on a failed August 2017 cyberattack against a Saudi petrochemical plant that investigators unanimously agree was “most likely intended to cause an explosion that would have killed people.” All that prevented the explosion, according to that reporting, was a bug in the attackers’ computer code. The incident was described as the first cyberattack intended to trigger a kinetic explosion. Apparently, it was the second: Between July and September 2016, a string of cyberattacks against oil and petrochemical facilities in Iran triggered fires and two explosions, killing one person and injuring seven. Though the victim nations have not formally attributed those incidents (beyond a general accusation), the dominant professional assessment regards them as state-sponsored cyber operations. It certainly would not be wild to imagine that Iran and Saudi Arabia might be the perpetrators, perhaps relying on the clandestine support of a more capable state.
Reveling in Ambiguity
The need for significant international initiatives to assure security and stability in the new digital world remains. Yet despite the limited success of the CSTA, no state has publicly expressed support for or opposition to Smith’s idea—even those states that are very active in using cyber capabilities to advance their national security needs and political interests. The lack of official stances is consistent with the ambiguity states typically embrace with respect to both legal doctrine and state practice in cyberspace.
States have a unique ability in cyberspace to accomplish their military, political or economic goals anonymously, with little risk of being held accountable and being required to pay a price for their actions. The same is also true for non-state actors and individuals, whatever their motives might be—though their cyber capabilities are usually much less sophisticated and dangerous than those of states.
The legal and political ambiguity coupled with the power to act covertly benefits the most technologically capable nations in cyberspace, and those nations won’t voluntarily give away their newly acquired strategic superiority. With the international community increasingly divided along political, ideological, and technological lines, the odds do not favor a legal breakthrough—neither through adopting Smith’s proposal, nor through resuming the Governmental Group of Experts process. It is thus likely that the U.N. secretary-general’s recent statement calling on the international community to finally regulate the field shall remain a voice crying in the wilderness.
A pessimistic outlook might be an accurate description of the current situation, which is often compared to the American Wild West. However, in my view, there is still a light at the end of the cyber tunnel—some hope for paving a new way that, under certain conditions, could be more promising and practical than launching a new international convention.
A Shift Toward Retaliation
Western nations such as the U.S., U.K., France and Germany typically react with caution to cyberattacks conducted by suspected Russian, Chinese or Iranian culprits, as our research analyzing their recent responses shows. First, none of these states explicitly attributed attacks to Russia, China or Iran, even when the attack in question could have been attributed with a high degree of confidence. One exception is the Russian interference in the 2016 U.S. elections, which the U.S. did attribute and respond to, albeit hesitantly. Second, the U.S. adopted a name-and-shame approach, indicting the specific individuals involved in the attacks, in absentia, without acknowledging that their acts were committed in their capacity as state agents.
A public speech by U.K. Prime Minister Theresa May on Nov. 14 of last year may be an early indication of a change in approach. May clearly pointed the finger at Russia, criticizing the Kremlin for orchestrating cyberattacks against Western countries as part of information or influence operations designed to undermine democracies and the international order. She warned Russian President Vladimir Putin: “We know what you are doing, and you will not succeed ... The UK will do what is necessary to protect ourselves, and work with our allies to do likewise.”
A month later, on Dec. 19, the U.S., U.K., Australia, Canada, New Zealand and Japan jointly attributed the WannaCry attack to North Korea. It was the first time a group of nations joined together to clearly assign responsibility to a specific nation and to promise that punitive measures would follow.
Two months later, on Feb. 15, the U.S. announced that the Russian military launched NotPetya, and did so in order to destabilize Ukraine. The statement depicted the attack as “reckless and indiscriminate” and promised that it would be met “with international consequences.” The U.K. and Australia joined the U.S.’s attribution. A month later, the U.S. government imposed additional sanctions on Russian hackers and two Russian intelligence agencies involved in the 2016 election-interference operation, the NotPetya attack and additional intrusions targeting critical U.S. infrastructure. And just weeks ago, the U.S. imposed another round of sanctions on Russian oligarchs, senior Russian government officials and Russian companies to retaliate for Russia’s interference in the 2016 election and other ongoing aggressions across the globe, in Crimea, Ukraine and Syria. Those measures coincided with Britain’s prompt countermeasures against Russia for its use of chemical weapons against former Russian spy Sergei Skripal on British territory. In the aftermath of the Salisbury incident, more than 150 Russian diplomats were expelled from 24 European countries, the U.S., Australia and Canada. This reflects a broad repudiation of Russia’s violations of international law and norms—presumably including its role in cyberattacks.
Hopefully, after almost a decade of significant cyber incidents, each of them defined at the time as a “wake-up call” for the international community, someone finally woke up and started stretching his arms.
These new developments are very important, but they must be married to a comprehensive long-term strategy.
In my opinion, Brad Smith’s vision is too ambitious. One premise should be set out at the start: In the foreseeable future, there is a negligible, not to say zero, chance of overcoming the geopolitical, ideological, political, economic and technological gaps and conflicts that divide the international community. Therefore, barring a paradigm-shifting shock to the system, it is impractical to imagine a meaningful international convention that would apply to the permanent members of the U.N. Security Council, regulate their activities in cyberspace and curb their ability to engage in cyber warfare—be it a digital Geneva Convention or something else. The policy recommendations elaborated below represent, in my opinion, a more realistic course of action toward reducing the intensity of cyber conflicts and improving security in cyberspace.
Push back and be as transparent as possible
The first quarter of 2018 suggests that the major Western governments have come to understand that focusing primarily on defense in cyberspace is not enough; these nations need a more proactive, offensive approach within international law, such as clearly attributing responsibility and undertaking countermeasures. After attribution is established, the evidence underlying it should be made as transparent as possible—restricted only by clear-cut national security considerations. Attributing responsibility to a specific state with a high degree of transparency would help delegitimize abuses of cyberspace. It could also convince international public opinion, including other governments, to embrace more benevolent standards of conduct. Thus, when attribution is clear and transparent, countermeasures could and should be undertaken collectively by a group of convinced states (the bigger the group, the better) and as soon as possible.
Furthermore, releasing the evidence underlying attribution claims could open the door for additional deterrent measures, such as civil legal proceedings. In the case of NotPetya, for example, at least three multinational firms—FedEx, Maersk and Merck—and their insurance companies might sue Russia for causing direct damages totaling almost $1 billion.
International agreement is still available and needed
Of course, international law can also be shaped by the actions of a small group of states. The EU and the Five Eyes members could generate standards about which values and principles are applicable to cyberspace and focus their efforts on enlarging the number of countries that share the same views. The negotiations among those countries should aim at formulating a new treaty based on lex ferenda (the law as it should be, to meet the new challenges of cyber capabilities) as opposed to regulation based on lex lata (the law as it is, grounded in traditional international law).
Establish an accepted formula of attribution
One of the most important questions that should be addressed urgently, even if only through opinio juris, is the process for attributing responsibility. The evidentiary standards currently applied in attributing a cyberattack to a specific state remain open-ended. There is no consensus on the legal and technological thresholds that must be cleared to conclude that responsibility can be attributed to a specific state. To date, only a few destructive cyberattacks have been clearly attributed to a state “with high confidence.” In none of those cases did the nation to whom the attack was attributed accept the allegations and assume responsibility. On the contrary, such states firmly denied any connection, instead calling on their accusers to disclose the evidence on which the attribution was based, rejecting the allegations as politically biased and, at times, even demanding participation in a joint investigation committee.
Notably, the few attribution claims made explicitly with respect to incidents such as the Sony hack, attributed to North Korea, and the hack of the Democratic National Committee in 2016, attributed to Russia, have been subject to some disagreement even among American experts. In the case of the DNC hack, even the U.S. intelligence community expressed internal disagreement in its public report: whereas the CIA and the FBI made their joint assessment regarding Russia’s motives “with high confidence,” the NSA made its assessment only with “moderate confidence.” The question of attribution comes down to identifying how confidently certain technical parameters support the conclusion that a given actor conducted an attack. If attribution is based on accepted objective parameters and is transparent, other states would follow it, contributing to the establishment of state practice.
Establish an International Cyber Attribution Agency
Essential to analyzing technical parameters is the establishment of an International Cyber Attribution Agency (ICAA), an independent international agency of the kind Brad Smith proposed. Such an agency would have the professional capacity to investigate and clearly attribute responsibility, or to confirm the findings of another investigatory body. Every victim state could be eligible to invite the agency to investigate independently or to join an ongoing investigation, and to publish its final conclusions. To increase trust in the agency, it should comprise a variety of professional and academic experts (hailing from different countries and continents), selected strictly on professional considerations. None of the experts should be simultaneously employed by governmental institutions, to avoid even the appearance of conflicts of interest.
The leading companies in the private tech sector should voluntarily cooperate with the ICAA. Such cooperation has social value and would build customers’ confidence—surely a welcome development, given the increased criticism those companies have faced over failures to protect private and governmental data. And if the companies fail to cooperate, they might sooner or later face new regulations. If the new accord (CSTA) leads to the establishment of an international organization analogous to the Red Cross, then that organization should obviously embrace full and direct cooperation with the ICAA.
Some of the challenges that must be met in establishing an attribution mechanism include setting a professional formula that defines the requisite legal and technical variables and the evidentiary standards for determining who is behind a given attack, and implementing appropriate transparency practices covering both evidence and conclusions. If the process lasts more than three months, the agency should consider publishing intermediate findings and conclusions.
There is a serious deficit of trust among the leaders of the international community, a substantive reminder that it is easier to destroy than to build. The proactive approach presented above, if adopted and implemented, could help rebuild trust among members of the world community and set principles and norms to maintain international order, stability and security in our dynamic digital world.
In a recent judgment, the European Court of Human Rights (ECtHR) ruled that imposing objective liability on a news website company for publishing a hyperlink is a disproportionate restriction on the company’s right to freedom of expression. The case brings up the complex discourse on the costs and benefits of freedom of expression in the world of online communication and social media.
In a nutshell, the applicant was a Hungarian private company that operates a popular news website. It published an article on an incident in which a group of football supporters entered a school and threatened Roma minority students. The article included a hyperlink to a YouTube interview with a leader of the Roma minority, who claimed that members of Jobbik, a political party in Hungary, were the ones who came and attacked the school. Consequently, Jobbik initiated defamation proceedings against the company and several other defendants, including the media outlets that provided links to the impugned video. The domestic courts in Hungary found that the content of the YouTube video indeed falsely conveyed the impression that Jobbik had been involved in the incident at the school. According to the domestic courts, by publishing a hyperlink to the YouTube video the company was objectively liable for disseminating its defamatory contents and thus infringed the political party’s right to reputation. In other words, the domestic courts found that posting a hyperlink leading to content later held to be defamatory qualifies as publication of defamatory contents and thus entails a finding of objective liability against the applicant’s news portal.
Following this decision, the applicant company brought the case before the ECtHR. The legal question before the Court was whether such interference with the applicant's freedom of expression is necessary in a democratic society, in accordance with the provision in article 10(2) of the European Convention on Human Rights.
Unlike the domestic courts, the European Court held that the mere posting of a hyperlink does not qualify as dissemination of defamatory content and should not entail liability for the content itself. It concluded that imposing objective liability on the applicant company might lead publishers and journalists to refrain from hyperlinking completely, thus impeding the flow of information on the internet and resulting in a chilling effect on freedom of expression online. Therefore, according to the Court, such a measure is a disproportionate restriction on the applicant’s right to freedom of expression and a violation of Article 10 of the European Convention.
Essentially, the Court based its conclusion on “the unique features” of the hyperlink, chiefly the following: (1) The indispensable nature of hyperlinks on the internet. According to the Court, hyperlinks are not merely a technical platform; they are essential instruments for making information accessible to users by helping them navigate an endless amount of information. Imposing strict liability on every publication of a hyperlink would impede the easy navigation and operation of the internet. (2) The difference between a hyperlink and an act of publication. Unlike a publication, a hyperlink does not deliver the content itself but only points to the existence of content located on another website; hyperlinks are “content neutral.” (3) The lack of control over the referred content. The content on the website to which the hyperlink leads might change, while the person who posted the hyperlink usually has no control over it and is usually not even aware of changes made after the hyperlink was posted. (4) The content behind the hyperlink is already, and independently, published and available.
Therefore, the Court asserts that imposing liability on the journalist who published the hyperlink could be justified only on a case-by-case basis and requires examination of several relevant elements. Primarily, whether the journalist knew that the referred content in the hyperlink was defamatory or otherwise unlawful, and whether the journalist endorsed or repeated the content in his/her article, or merely provided the hyperlink without any further reference. The Court found that in this case the article merely provided access to the interview through the hyperlink without endorsing or repeating the content either directly or by inference; the journalist merely mentioned that an interview with a leader of the Roma minority was available on YouTube. The Court also asserted that the journalist could not have assumed that the content was unlawful.
In its decision, the Court points to the struggle of adapting human rights protections to the digital age: the internet, according to the Court, has a “particular nature”; as such, the duties and responsibilities of an internet news portal – in this specific case, those relating to the right to freedom of expression – should be modified to some extent from those of a traditional publisher. Nevertheless, the Court indicates that the criteria it chooses for assessing the liability of the news website are based on parameters relevant to traditional publication, particularly the requirement of endorsement of the content of the hyperlink and the requirement to respect journalistic ethics.
It is important to note that the case dealt with the right to reputation of a political party, therefore – unlike in the case of a private individual – the right to know and the public interest in information and expression would from the outset outweigh the right to reputation.
Even so, looking at the principles that the ECtHR drew on, it seems that the Court’s analogy to traditional journalism has overlooked, or at least underestimated, the power of the mere act of “communicating existing content” in the age of the online world. Arguably, the article in question did not necessarily endorse, repeat or refer in any way to the content of the video, but it nonetheless gave the video an additional stage and further publicity by linking to it. Unlike a quote or a reference to a source in traditional journalism, and in light of the rhythm and widespread nature of online media, greater responsibility attaches to the mere act of republishing a certain video.
Providing a link to a video in the digital realm has the potential to be highly influential due to some prominent features of the internet. The first is the internet’s ability to preserve information and thereby to make information not only widespread but also permanently available to the public. This phenomenon of “content permanency” was described by the privacy scholar Daniel Solove, who noted that content “which was in the past scattered and forgettable is becoming permanent and searchable.” This contrasts with the well-known perception of the traditional newspaper as “tomorrow’s fish and chip paper.” As one may recall, this feature, inter alia, triggered recognition of the right to be forgotten in the famous ECJ Costeja case (later incorporated in the GDPR). That right now allows European users to request the deletion of personal information about them through the removal of links to the content appearing in a search engine once a certain amount of time has passed, even if the content itself is not harmful but simply no longer necessary.
Another relevant feature of the internet is the so-called “aggregation problem.” This refers to the fact that in the digital age information can be easily assembled; even details that were traditionally considered neither private, nor harmful, nor defamatory can, when collected together, create a kind of “digital profile” of an individual or an entity that does not necessarily accurately reflect their real, tangible identity.
These two features lead to another complication, namely the problem of inconsistency and lack of transparency. Without any transparent reason, much of the information online disappears within days or months, while other pieces of harmful content can remain forever.
To conclude this point, when dealing with the reputation of a person, or even an entity, it is hard to ignore the “online circumstances,” and above all the fact that pieces of information including videos can disseminate and float in cyberspace without any distinction, sometimes forever. Thus, linking to a video might have far-reaching implications, even without endorsing the content and even if the content that appears in the hyperlink does not necessarily fall into one of the categories of harmful or defamatory contents. Therefore, one must use hyperlinking with the requisite care.
Proper attention should also be paid to the fact that in the Magyar Jeti Zrt case, the publisher runs a popular online news portal. In fact, the identity of the publisher, whether a private individual, a blogger, or a large-scale news company, was not viewed by the Court as a relevant aspect in assessing liability of a publisher of a link.
This is surprising in light of the immense implications that even an unintentional mistake in reporting might have today in the digital realm. This consideration could have led the Court to expect a stricter level of responsibility from a professional news website. It is true that, according to the Court’s decision, no intent to disseminate defamatory content was found, and this case is, of course, very far from a fake-news incident (a term that generally describes articles that are “intentionally and verifiably false and could mislead readers”). Yet again, the scale of the implications emphasizes the gap between online and offline reporting and hence affirms the limitations of traditional protections of the right to reputation. This dissonance is also evident in the fact that the Court stressed the importance of maintaining compliance with journalistic ethics in today’s world, which is characterized by a vast amount of information. Such an approach does not necessarily correspond with the reality of the digital world, which is teeming with writers and bloggers, and where private users of social platforms are typically not committed in any way to journalistic ethics.
Indeed, the Court rightly concluded that imposing strict liability is akin to “throwing the baby out with the water.” It is not the proper way of dealing with the implications of posting a hyperlink, because it would create a chilling effect and impose an excessive restriction on the right to freedom of expression. The fact that the case concerns the right to reputation of a political party obviously supports this conclusion. However, while the Court prioritized the right to free expression in the digital arena, emphasizing its new features, the alternative criteria the Court provided for assessing liability in this type of case are neither sufficient nor developed enough to address the new challenges of protecting the right to reputation in the digital age. This highlights the need to develop norms adjusted for the digital arena, in particular regarding the proper balance between the right to free expression and the rights to reputation and privacy.
* This blog post was written in my personal capacity. It does not necessarily reflect the views of Israel’s Ministry of Justice and/or the Israeli government.
 European Court of Human Rights, Application no. 11257/16, Magyar Jeti Zrt v. Hungary (4.12.2018). European Convention on Human Rights (CTE no. 14), Article 10.
 Ibid p.21, para 73; p. 27, para 4: Hyperlinks are the glue that holds the Web together in so far as they enable people to easily and quickly navigate to other webpages to retrieve, view, access and re-share information. Without hyperlinks, publishers would have to provide alternative instructions for readers to find more information. For most ordinary people, this would be difficult, if not impossible, to execute without a strong technological background.
 Ibid p. 22, paras 77-80: The Court identifies in particular the following aspects as relevant for its analysis of the liability of the applicant company as publisher of a hyperlink: (i) did the journalist endorse the impugned content; (ii) did the journalist repeat the impugned content (without endorsing it); (iii) did the journalist merely put an hyperlink to the impugned content (without endorsing or repeating it); (iv) did the journalist know or could reasonably have known that the impugned content was defamatory or otherwise unlawful; (v) did the journalist act in good faith, respect the ethics of journalism and perform the due diligence expected in responsible journalism? ….With these principles in mind, the Court would not exclude that, in certain particular constellations of elements, even the mere repetition of a statement, for example in addition to a hyperlink, may potentially engage the question of liability.
 p. 22, para 77, Also see Concurring opinion of Judge Pinto De Albuquerque p. 29, para 9 and 13:
…In order to impute liability, be it civil or criminal, there must be concrete evidence of endorsement by the journalist, who knowingly assumed the unlawful content as his or her own by means of explicit and unequivocal language. This endorsement corresponds to the publication or dissemination of the defamatory or otherwise unlawful content, which is equated to traditional forms of publication.
 Ibid, p. 23, para 81: Furthermore, the limits of acceptable criticism are wider as regards a politician – or a political party – as such than as regards a private individual. Unlike the latter, the former inevitably and knowingly lays himself open to close scrutiny of his or her every word and deed by both journalists and the public at large, and he or she must consequently display a greater degree of tolerance.
 Daniel J. Solove, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet (Yale University Press, 2007), pp. 4, 33; Viktor Mayer-Schönberger, Delete: The Virtue of Forgetting in the Digital Age (Princeton University Press, 2009), p. 11; Meg Leta Ambrose, It’s About Time: Privacy, Information Life Cycles, and the Right to Be Forgotten, 16 Stan. Tech. L. Rev. 369 (2012).
 European Court of Justice, Case C-131/12, Google Spain SL v. Agencia Española de Protección de Datos, Mario Costeja González (13.5.2014), paras. 93-96 (hereinafter: Costeja): "… It follows from those requirements, laid down in Article 6(1)(c) to (e) of Directive 95/46, that even initially lawful processing of accurate data may, in the course of time, become incompatible with the directive where those data are no longer necessary in the light of the purposes for which they were collected or processed. That is so in particular where they appear to be inadequate, irrelevant or no longer relevant, or excessive in relation to those purposes and in the light of the time that has elapsed…"
 Daniel J. Solove, Access and Aggregation: Public Records, Privacy and the Constitution, 86 Minn. L. Rev. 1137 (2002), p. 1185: I contend that the nature of the harm stems from what I call the "aggregation problem." Viewed in isolation, each piece of our day-to-day information is not all that telling; viewed in combination, it begins to paint a portrait about our personalities. The aggregation problem arises from the fact that the digital revolution has enabled information to be easily
amassed and combined. Paul M. Schwartz; Daniel J. Solove, The PII Problem: Privacy and a New Concept of Personally Identifiable Information, 86 N.Y.U. L. Rev. 1814 (2011), p. 1821, 1842.
 Ambrose, M.L., 2012. It's about time: privacy, information life cycles, and the right to be forgotten. Stan. Tech. L. Rev., 16, pp. 368-369.
 Robert C. Post, The Social Foundations of Defamation Law: Reputation and the Constitution, 74 CALIF. LAW REV. 691, 693 (1986). Also see: Cheung, Anne S. Y. and Schulz, Wolfgang, Reputation Protection on Online Rating Sites (September 15, 2017). 21 Stanford Technology Law Review 310 (2018). ; University of Hong Kong Faculty of Law Research Paper No. 2017/026. Available at SSRN: https://ssrn.com/abstract=3037399 .
 Magyar, p. 21, para 70: The Court observes that the Internet news portal in question is professionally run, publishes some 75 articles in a wide range of topics every day, and attracts a readership of about 250,000 persons per day.
 Allcott, Hunt, and Matthew Gentzkow "Social media and fake news in the 2016 election." Journal of Economic Perspectives31.2 (2017): 211-36.(Albeit the term "Fake news" is still a controversial definition.)
Note: This article leads up to the design and work-in-progress implementation of an anonymous voting protocol on Ethereum that uses zero-knowledge proofs to preserve anonymity.
The history of anonymous voting dates back thousands of years to the ancient Greeks and Romans, whose cultures used anonymous voting both to exile individuals perceived as threats and for general electoral matters. Anonymity was, and remains, important because it allows individuals to make unprejudiced decisions that cannot come back to haunt them. In modern society, and more specifically modern democracies and republics, most public elections are conducted with almost anonymous voting. The "almost" comes from various precautions for detecting election fraud, such as numbering systems and other schemes for reverse-engineering a vote. All in all, ballots today are mostly secret, but how did we get here?
Voting in the United States
Before the secret ballot, also known as the Australian ballot, arrived in the USA, we had public elections. Until the 1890s, some states even used oral ballot systems, in which voters would congregate around government buildings and cast their votes aloud. As had been common in Australia and soon became common in America, local gangsters and criminals would intimidate voters in public and coerce election outcomes. Eventually, the consequences of public voting systems grew too large, and governments switched to their private counterparts.
The conditions of the updated secret ballot were:
an official ballot being printed at public expense,
on which the names of the nominated candidates of all parties and all proposals appear,
being distributed only at the polling place and
being marked in secret.
There are some states that, according to their constitution, still use either mailed or open ballot systems, and all states allow the option of an absentee ballot. The states adopting different practices are:
West Virginia — open, sealed, or secret ballot, decided by the voter
Colorado, Oregon, and Washington — entirely mailed ballots.
Consequences of a secret ballot
As a result of these improvements to privacy, the move to secret ballots changed voter turnout. Before the change, political parties had an incentive to pay people to vote, ideally for their party. This aligns with a rational economic framework in which bribes are used to incentivize voter participation. The secret ballot shielded each voter's actual choice from the general public, preventing political parties from verifying an individual's vote. Unable to verify the task they were paying for, the parties had little to no incentive to continue making payments. As a result, the secret ballot caused a decrease in voter participation, not because voters decided not to vote, but because a pool of voters who had been paid to vote no longer were.
A more recent study, from 2012, examined the consequences of the secret ballot from the perspective of voters' unmet expectations. The researchers designed an experiment to determine whether voters withheld participation because they did not believe votes were actually anonymous. They found that nearly 3% of voters with histories of not participating abstained because they did not believe the votes were truly secret. To me, this translates to a lack of incentive rooted in the alleged privacy of the system, and a lack of trust.
Today, our election system is still not anonymous. There are mechanisms designed to reverse-engineer the voting process to uncover election fraud. This poses a risk for privacy-focused voter participation, if participation is a metric we hope to improve.
Designing a vote
There are many ways to design a vote, all with different qualities and consequences. Voting systems should be designed with an engineering mentality and with a heavy emphasis on security engineering. We should design voting systems from an adversarial perspective in hopes of creating a favorable environment. This means that we should analyze the weakest links of our system, since these weaknesses present the easiest opportunity for an attacker to exploit. In order to determine these weaknesses, we must begin the construction of the voting mechanism and analyze it, repeatedly improving the system until it is as robust as possible.
It is pertinent to first determine who should participate in a vote, since the participants are the reason we design one in the first place:
Citizens — Legal members of sovereign states and nations.
Organizations — and the members contained within. There could even be interest in multi-organizational participation, such as from members of a trusted alliance like the Enterprise Ethereum Alliance.
Users of products — Sourced individuals who should have a say in the governance of future product iterations.
We might then ask: who wants to attack a voting system, and how? This question can be answered differently at many different stages and sizes of elections. Obviously, as elections grow in size, so do the resources required to attack them. We concern ourselves with the most powerful adversaries possible, with the remark that as the adversary shrinks, so does the scale of the attack. That is, a nation-state like Russia has far less incentive to attack your local city's election than to attack the presidential election. However, there are still adversaries at the local city scale with the incentives and capabilities to attack such an election.
Adversaries — Nation-states, political parties, multi-national corporations, individuals with unbounded money and resources
Methods — Information warfare, election fraud, bribery
Beyond threat modeling, there are other design decisions such as how to handle tallying of votes. Different strategies potentially affect the outcome of elections since knowledge of progress or lack thereof influences the beliefs voters have about the power of their vote. When coupled with a voter’s incentive, it can lead to undesirable effects on voter participation.
Public — Where progress can be viewed throughout the entire process such as on online polls (after submitting your vote of course).
Private — Where progress is hidden until the election has concluded such as in the popular vote for the next president in the United States.
Next, there is the decision to hold a public or anonymous vote. Both are familiar, as most of us have participated in one or the other, or both. We witness, participate in, and hear news of both in the form of general popular elections (almost anonymous) and public legislative and electoral votes, in which members of Congress vote on new bills and electors select a new president.
Public — Where knowledge of who voted for what is public, with “who” being a strict, non-pseudonymous identity.
Pseudo-public — Where any identity and the respective identity’s vote is publicly linked. An example could be a pseudonym that proves membership in some organization and publicly votes.
Anonymous — Where knowledge of who voted for what is unlinkable. This can also be extended from a pseudo-public setting, where any individual who proves membership in some organization votes anonymously.
Finally, there are the tools at one’s disposal for designing a desirable mechanism. The tools for designing voting mechanisms exist in both physical and digital ecosystems since, after all, we have voting solutions in both manifestations: government elections and online polling respectively.
Physical — Tools such as written ballots, mailed ballots, voting booths, and actual people for conducting elections.
Digital — Cryptography, websites, social media, and other software that interface with hardware for conducting elections in a digitized manner.
Most important to us for the rest of this article will be cryptography. We have laid out the groundwork for designing a voting system, and now we will build one.
A decentralized, anonymous voting mechanism
Let’s now design a decentralized voting mechanism on Ethereum using zero-knowledge proofs to achieve true, cryptographic anonymity. If you are unfamiliar with zero-knowledge proofs, I recommend reading up on this and anything on zkp.science.
Ethereum — Ethereum is a blockchain project that enables the execution of smart contracts, which can be thought of as programs running on a decentralized computer.
Zero-knowledge Proof of Knowledge — A ZKP is a cryptographic proof detailing knowledge of an input x to an output f(x) without revealing x.
Anonymous coin mixer/tumbler — An anonymous mixer or tumbler is a service that “mixes” or “tumbles” cryptocurrency to anonymize the trail of ownership. A user deposits coins into a mixer and withdraws them using different, unlinked addresses, preserving anonymity.
Merkle Tree — A merkle tree is a cryptographic binary tree data structure that is built by repeatedly hashing the children to create parent nodes, starting at the leaves or data nodes.
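The merkle tree construction above can be sketched in a few lines. This is a minimal illustration only, assuming SHA-256 as the hash function and duplicating the last node on odd-sized levels; real deployments (including Miximus) use SNARK-friendly hashes and fixed-depth trees.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256, standing in here for the protocol's hash function H."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash the leaves, then hash sibling pairs upward until one root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"leaf-0", b"leaf-1", b"leaf-2", b"leaf-3"])
```

Because every parent is a hash of its children, changing any leaf changes the root, which is what lets a contract verify membership against a single stored value.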
The voting mechanism we will build uses:
Anonymous voting strategy
Public tallying strategy
and is designed with these groups in mind:
Ethereum owners as the organization entity
Any adversary capable of finding and bribing token holders.
The protocol that we will build is largely a derivative of Miximus, an anonymous coin mixer on Ethereum developed by barryWhiteHat.
The mixer functions primarily through the novel combination of a merkle tree and non-interactive zero-knowledge proofs (ZK-SNARKs). Each leaf node of the merkle tree contains the hash of a secret, which we will write H(x1,…,xN). The scheme works as follows:
First, a user deposits 1 ETH into the smart contract and creates a leaf node in the merkle tree with data H(x1,…,xN) with an address A.
Then, using the secret (x1,…,xN), the user creates a special zero-knowledge proof or ZK-SNARK, denoted p.
Finally, the user creates a new address B that is not linked in any way to A and withdraws 1 ETH from the mixer, providing p as proof that the user knows the secret contained in a leaf of the merkle tree.
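The three steps above can be simulated as a toy sketch. Note the heavy caveat: the real anonymity comes from the ZK-SNARK p, which is replaced here by a naive stand-in (revealing the secret at withdrawal), so this illustrates only the deposit and double-spend bookkeeping, not the privacy. `ToyMixer` and its method names are hypothetical, not part of Miximus.

```python
import hashlib
import secrets

class ToyMixer:
    """Toy stand-in for the mixer contract; names are illustrative only."""
    def __init__(self):
        self.leaves = set()  # deposited commitments H(x1,...,xN)
        self.spent = set()   # commitments already withdrawn against

    def deposit(self, commitment: bytes) -> None:
        # Step 1: address A deposits 1 ETH alongside a leaf commitment.
        self.leaves.add(commitment)

    def withdraw(self, secret: bytes) -> bool:
        # Steps 2-3, WITHOUT the ZK-SNARK: the secret itself is revealed
        # here, which a real mixer avoids by verifying a proof p instead.
        commitment = hashlib.sha256(secret).digest()
        if commitment in self.leaves and commitment not in self.spent:
            self.spent.add(commitment)  # prevent double-withdrawal
            return True                 # pay 1 ETH to the fresh address B
        return False

mixer = ToyMixer()
x = secrets.token_bytes(32)                 # the secret (x1,...,xN)
mixer.deposit(hashlib.sha256(x).digest())   # deposit from address A
ok = mixer.withdraw(x)                      # withdraw to unlinked address B
```

The `spent` set plays the role of a nullifier: a secret can be withdrawn against exactly once, even though the contract never learns which depositor it belonged to in the real scheme.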
Security of protocol. The protocol above is secure under the assumption that finding pre-images of the chosen hash function is computationally hard. This hardness prevents any adversary from recovering the secrets behind the leaf nodes of the tree. Moreover, the use of a ZKP allows the withdrawer to prove they deposited into the tree without disclosing which leaf corresponds to their deposit. Under the same assumption, determining which leaf they eventually withdraw from is also hard.
We will design a voting protocol weighted by token holdings; that is, a 1 coin, 1 vote paradigm. Such votes are useful for polling the aggregate opinion of holders of a specific token like Ether, and they have nice analogues to the voting rights of shareholders in publicly traded companies. Our voting protocol is derived from the mixer protocol above, with a few changes.
First, a poller/developer deploys an Election smart contract with a future time cliff t for ending the election. The contract contains logic for voting.
Then, a user deposits 1 ETH into the smart contract and creates a leaf node in the merkle tree with data H(x1,…,xN) with an address A. This locks the user’s ETH until time t’ such that t<t’.
Then, using the secret (x1,…,xN), the user creates a special zero-knowledge proof or ZK-SNARK, denoted p.
Next, the user creates a new address B that is not linked in any way to A and submits their vote v, providing p as a proof that the user knows the secret contained in a leaf of the merkle tree.
Finally, at any time t’ such that t<t’, the user withdraws the 1 ETH locked in the contract to the same account that voted. This can be handled by storing voters on the contract and marking which voting addresses have withdrawn.
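The voting steps above can be sketched in the same toy style. Again, the ZK-SNARK is replaced by revealing the secret, block timestamps are mimicked by a `now` parameter, and `ToyElection` is a hypothetical name; this shows only the deposit, vote, and withdraw flow around the time cliff t.

```python
import hashlib
import secrets

class ToyElection:
    """Toy sketch of the Election contract; `now` mimics block time."""
    def __init__(self, t_end: int):
        self.t_end = t_end   # the time cliff t that ends the election
        self.leaves = set()  # deposited commitments H(x1,...,xN)
        self.voted = set()   # commitments that have already voted
        self.tally = {}      # public tally: choice -> vote count

    def deposit(self, commitment: bytes, now: int) -> None:
        if now < self.t_end:
            self.leaves.add(commitment)  # locks 1 ETH until after t_end

    def vote(self, secret: bytes, choice: str, now: int) -> bool:
        # The revealed secret stands in for the ZK-SNARK p that a real
        # contract would verify instead.
        c = hashlib.sha256(secret).digest()
        if now < self.t_end and c in self.leaves and c not in self.voted:
            self.voted.add(c)
            self.tally[choice] = self.tally.get(choice, 0) + 1
            return True
        return False

    def withdraw(self, secret: bytes, now: int) -> bool:
        # After the cliff, the deposit unlocks for the account that voted.
        c = hashlib.sha256(secret).digest()
        return now >= self.t_end and c in self.voted

election = ToyElection(t_end=100)
x = secrets.token_bytes(32)
election.deposit(hashlib.sha256(x).digest(), now=10)
election.vote(x, "yes", now=50)          # vote from an unlinked address B
refunded = election.withdraw(x, now=150)
```

Locking the deposit until t’ > t is what enforces 1 coin, 1 vote: the same ETH cannot be re-deposited and voted again within the same election.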
Security of protocol. The protocol above is secure under the same cryptographic assumptions as the mixer protocol. The votes are not cryptographically linkable to the addresses of the token holders because the tokens themselves are not. Following a vote, these new, unlinkable accounts can continue in a similar fashion, mixing and voting, for as long as they desire.
However, now that we are using a token-weighted voting protocol, there is the potential for linking votes with token holders through statistical analysis. This, however, is largely unavoidable while tallying of votes remains public. In the scheme above, votes are recorded in plaintext and thus provide insight into the state of the market’s belief or side on a given issue, which can pose other issues in relation to voter incentives.
Example. There are three token owners: Alice, Bob, and Charlie. Assume for the sake of argument that they purchased all the tokens in an ICO; that is, the tokens were sent to the addresses of Alice, Bob, and Charlie. Since these holdings were disclosed through the on-chain transfer of tokens, we can potentially link votes back to a user based on vote counts. Under a rational voter model, a truthful voter votes only for their side of an issue. Thus, when and if they use all their tokens to vote, it is possible to infer, with some probability, both how Alice voted AND that she controls the mixed tokens resulting from the protocol.
Say what? More security!
In the example described above, we potentially ran into an issue due to the public tally of our voting protocol. We can resolve this by changing a portion of the protocol to require a commitment to a vote rather than the vote itself. Our changes start at the vote-submission step, but we repeat the protocol in its entirety for clarity. Recall that H is our chosen hash function; we use (y1,…,yM) to denote randomness for obfuscating a committed vote.
First, a poller/developer deploys an Election smart contract with a future time cliff t for ending the election and a further time period Δ for the revealing of commitments. The contract contains logic for voting.
Then, a user deposits 1 ETH into the smart contract and creates a leaf node in the merkle tree with data H(x1,…,xN) with an address A. This locks the user’s ETH until time t’>t.
Then, using the secret (x1,…,xN), the user creates a special zero-knowledge proof or ZK-SNARK, denoted p.
Next, the user creates a new address B that is not linked in any way to A and submits a commitment to a vote H(v,y1,…,yM), providing p as a proof that the user knows the secret contained in a leaf of the merkle tree.
Finally, at any time t’ such that t<t’<t + Δ, the user reveals their vote v’ along with the randomness (y’1,…,y’M), which the contract verifies by checking that H(v,y1,…,yM)=H(v’,y’1,…,y’M). If this does not hold, i.e. the user submitted a false reveal, the voter loses their deposit.
If ultimately there are unrevealed votes at time t’ such that t + Δ<t’, those deposits are also similarly slashed and kept as punishment. This provides an incentive for users to actually reveal their votes in the allotted time.
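The commit-reveal change can be sketched as follows, assuming a single randomness value y in place of (y1,…,yM) and SHA-256 for H; `CommitReveal` and its method names are hypothetical illustrations, not a real contract interface.

```python
import hashlib
import secrets

def commit(vote: str, randomness: bytes) -> bytes:
    """H(v, y): a hashed commitment that hides the vote until revealed."""
    return hashlib.sha256(vote.encode() + randomness).digest()

class CommitReveal:
    """Toy sketch of the commit phase (before t) and reveal window (t, t + delta]."""
    def __init__(self, t: int, delta: int):
        self.t = t             # end of the voting (commitment) phase
        self.delta = delta     # length of the reveal window
        self.commitments = {}  # voter address -> commitment
        self.tally = {}        # revealed votes, tallied only after t

    def submit(self, addr: str, commitment: bytes, now: int) -> None:
        if now < self.t:       # commitments accepted only before the cliff
            self.commitments[addr] = commitment

    def reveal(self, addr: str, vote: str, randomness: bytes, now: int) -> bool:
        in_window = self.t < now <= self.t + self.delta
        if in_window and self.commitments.get(addr) == commit(vote, randomness):
            self.tally[vote] = self.tally.get(vote, 0) + 1
            del self.commitments[addr]  # reveal succeeded; deposit refundable
            return True
        return False  # mismatch or late reveal: deposit is slashed

cr = CommitReveal(t=100, delta=50)
y = secrets.token_bytes(32)            # the randomness (y1,...,yM)
cr.submit("0xB", commit("yes", y), now=40)
ok = cr.reveal("0xB", "yes", y, now=120)
```

Because the randomness y is high-entropy, an observer cannot brute-force the small set of possible votes against the commitment, so the tally stays hidden until the reveal window.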
With hashed commitments, our election or poll is no longer publicly tallied. Instead, votes are revealed after voting has closed and tallied during the window t<t’<t + Δ. The downside of this enhancement is that it requires more effort from voters, though it adds an incentive, or rather a punishment, against not revealing.
The protocols above are examples of a token-based voting system, weighted by the number of tokens a voter holds. They can be extended to arbitrary weightings, such as X coins for X votes.
We can utilize tokens as the de facto method for casting anonymous votes but alter the distribution process of those tokens. Perhaps, a trusted third party such as a basketball team can issue tokens to season ticket holders. Then using these tokens, 1 per season ticket holder, they can vote on new changes they would like to see in stadium offerings.
I leave most of this imagination up for another time. I think there are interesting applications with these protocols when combined with other identity solutions and wonder if they’re even possible given the structure of the protocols defined above.
I will conclude the article with a discussion of where these voting systems fail. The downside of anonymous voting on blockchain-based networks using the protocols defined above rests precisely on their anonymity: it opens up the possibility of bribery and election fraud, which potentially restricts the solution above to token-weighted voting schemes only.
Recall our previous ICO example, now with potentially many more token owners. Alice and Bob can collude off-chain, sharing the secret information needed to verify their respective ZKPs. Then either of them can cast votes on behalf of the newly anonymized accounts.
Season ticket holders can purchase votes from one another outside and away from the basketball team's owners. The root cause is pure anonymity: there is no guarantee that the owners of the tokens remain the same over time. After all, coin mixing services were designed to obscure the trail of token ownership; we simply repurpose that principle to handle anonymous voting.
Furthermore, this attack vector makes it nearly impossible to handle reputation or identity within the protocol, since there should not be a way to transfer reputation or identity to anyone.
Protocol and proof extensions
That said, with a more delicate design there may be ways to build a system that allows for reputation-based and/or identity-based voting solutions beyond owning tokens. Some of these ideas lead to alternative weighted voting mechanisms where:
Users continue to submit ZKPs of ownership of a chain of accounts in the procedure, without disclosing which accounts they own.
Users submit ZKPs of reputation on accounts they own without disclosing those accounts, in a differentially-private manner.
Users submit ZKPs of identity that grant them the ability to vote at all.
I will be working on an implementation here on my github and potentially in this repo.