April 2018

The Regulation of Cryptocurrencies: Between a Currency and a Financial Product

By: Hadar Jabotinsky.

As cryptocurrencies become better known, the number of people using them grows. Their attractiveness is due in part to their decentralized, peer-to-peer construction, which makes them an alternative to national currencies controlled by central banks. In times of financial instability, an increase in the use of cryptocurrencies is apparent. Given that these cryptocurrencies are already replacing some “regular” national currencies and financial products, the question arises: Should they be regulated, and if so, how? Some countries, such as China and Russia, forbid Initial Coin Offerings (ICOs) altogether, while others are struggling to gain a full understanding of the currencies in order to formulate coherent regulation.

The question of how to regulate cryptocurrencies is interesting throughout the life of the coin, but it is of special interest during the Initial Coin Offering. The reason is that the value of a cryptocurrency depends not only on the currency itself, but also on security issues. Since these coins exist in the virtual world, the sites on which they are traded are vulnerable to hackers. Even if hacking the network of the coin itself is difficult, other sites, such as coin exchanges, are more vulnerable to theft. The ICO process is also vulnerable. An ICO is a process in which people buy virtual tokens from the makers of the cryptocurrency. As the startup grows, these tokens are supposed to increase in value. This is a form of crowdfunding designed to cut the transaction costs associated with raising capital elsewhere. In some respects, buying these tokens is similar to buying stock in a corporation’s Initial Public Offering. Unlike regular IPOs, however, very little information is currently given to potential investors, and until recently ICOs largely remained “under the radar” of the securities authorities. Questions of disclosure are particularly interesting with regard to the “safety” of these coins: Should disclosure requirements include cybersecurity matters? Are investors equipped with the right tools to assess cyber issues with regard to cryptocurrencies?

This paper posits that although the technology underlying most cryptocurrencies is very similar, the logic behind them differs. Some cryptocurrencies function like regular national currencies and have traditional currency traits: they provide a medium of exchange, a unit of account, and/or a store of value. Other cryptocurrencies, however, may also represent additional rights. This phenomenon causes some cryptocurrencies to be viewed as closer to real national currencies, while others seem closer to financial products (such as securities or derivatives).

This phenomenon requires regulatory authorities to investigate each new ICO and determine how to classify the token. Some tokens, such as Bitcoin, indeed resemble currency and should therefore be regulated only to ensure that fraudulent behavior is prevented; these types of cryptocurrencies should be regulated more carefully if they increase systemic risk in the general financial system. Other tokens resemble securities and should be regulated accordingly. The main distinction between the two types of cryptocurrencies is whether or not their value depends on the efforts of others.

With regard to cryptocurrencies that are actually securities, additional mandatory disclosure should be required concerning the security issues surrounding the ICO. Investors should be informed about what kind of blockchain technology is being used, who developed the code, and whether it was published openly. In addition, information about the cyber audits conducted prior to the issuance of the coin is essential.

Hopefully, by implementing the steps suggested by this paper, regulatory authorities will be able to protect financial markets while still allowing room for innovation.

 

Read more here.

 


Several Thoughts Following the Fatal Uber Accident in Tempe

By: Gadi Perl.

The fatal car accident in Tempe, Arizona, on March 18, 2018, in which an autonomous Uber vehicle killed a pedestrian crossing the road, instantly became world news. Many were curious to learn who was to blame, and video footage of the incident provoked much debate regarding potential mechanical failure and/or human error on the part of the supervising driver.

Since the liability and technical issues regarding autonomous cars have already been widely discussed, I would like to focus on two issues that may have been somewhat overlooked in the debate. The first is the amount of documentation provided by the autonomous vehicle regarding the accident. The second is the almost predictable failure of the human driver to pay attention and attempt to prevent the accident.

My first remarks concern the potential implications for the right to privacy of the amount of documentation generated by the cameras mounted on the vehicle involved in the accident. Let’s consider the number of cameras installed on the Uber autonomous car:

Following the accident, two videos were released, one showing the accident from the driver’s perspective and the other recording the driver’s response. It is therefore clear that at least two cameras were installed on the vehicle: an exterior camera documenting everything in front of the car (hereinafter the “front camera”) and an interior camera aimed at the driver.

While there is no doubt regarding the technical need for a front camera, since this is required for navigational purposes, the need is less clear regarding the interior camera. With no apparent technical need for an interior camera, one might speculate that the reason for installing a camera aimed at the driver has less to do with engineers and more to do with lawyers.

In an executive order of April 2017 promoting the testing of autonomous cars in the state of Arizona, Governor Ducey required all companies to monitor their experimental autonomous vehicles. Together with similar provisions in the California Code of Regulations governing the Department of Motor Vehicles, state lawmakers have made sure that companies provide comprehensive documentation if an accident occurs.

On the one hand, a requirement for comprehensive documentation is understandable considering the application of an experimental new technology on public roads with the potential to risk lives. On the other hand, even with the need for safety, little thought has been given to exactly what should and should not be documented.

It was not surprising to see telemetry data regarding location, speed, and the direction of the car’s movement emerge. Such data are required for the operation and testing of the car and must be provided if we are to investigate what happened. Nevertheless, I found it troubling to see the footage of the driver published worldwide. It was not clear to me why the driver had been filmed in the first place.

It seems to me that lawmakers and companies alike focused excessively on risk minimization during the test runs, without giving enough thought to other considerations, such as privacy. Installing cameras to oversee passengers is a severe invasion of privacy, and in this case it was not fully warranted. If there was a need to examine human responses, a camera pointed at the driver’s face was very inefficient, giving us a partial picture at best. Even if one could claim that there is a need for a driver-facing camera, publication of the video was not required and should have been forbidden.

Will our future cars document us sitting there watching the road? Will they document our conversations with our loved ones? Will they save footage when we’re crying, or simply when we’re having a bad day? I personally felt bad for the driver, who became an unwilling celebrity.

It is important that when autonomous cars become available to the public, special attention be given to preserving our right to privacy, whether as passengers on board an autonomous vehicle or as pedestrians randomly filmed by passing autonomous cars. We should avoid a future in which every passing car may lead to an uploaded YouTube video that makes us famous without our consent.

My second point concerns the risks of having an autonomous car while still expecting a person to assume responsibility in extreme cases.

It is clear from the video published on YouTube that during the moments leading up to the accident, the Uber employee did not have her eyes on the road and seemed relatively distracted. Only a second before the crash does the driver realize what is about to happen, by which point it is too late to react.

I believe that the failure of the supervising driver to prevent the accident was almost inevitable. I was not surprised that the supervising driver, who most probably had nothing to do 99% of the time, was not vigilant in the moments leading up to the accident. The constant vigilance required of drivers to prevent accidents is difficult to maintain in human-controlled vehicles, and impossible in autonomous vehicles. This test driver was idle for most of the drive, with the autonomous car in charge of navigation. It is unrealistic to think that she could have stayed alert for such a long period while doing nothing.

This incident emphasized to me that when autonomous cars become commercially available, the question of the identity of the driver should be a dichotomous one: either a human will be driving or a machine, not both. Any attempt to maintain human supervision will most likely prove ineffective, and drivers will not be able to prevent accidents. This is a legal statement and not just a practical one. Liability for preventing accidents cannot be attributed to the human supervisor. It simply won’t work.

Yang and Coughlin (2014) discuss self-driving vehicles and the challenges they pose for aging drivers, who tend to respond more slowly, to suffer from hearing loss, and to have lower acceptance rates for new technologies. Therefore, apart from being inefficient, giving human drivers responsibility for supervising autonomous cars may also have discriminatory aspects. Elderly drivers will be less equipped to respond quickly and less adept at understanding the data from autonomous cars.

There are many more issues that researchers are trying to examine as self-driving vehicles become a reality. Liability for harm,[1] cybersecurity and keeping cars safe,[2] and moral decisions in life-and-death situations[3] are just a few examples. Real-life accidents help us understand that what we write in articles, and the opinions we debate in various forums, have an impact on people’s lives. Finding solutions beforehand can make technology safer and promote equality.

[1] Colonna, Kyle, Autonomous Cars and Tort Liability (Fall 2013). Case Western Reserve Journal of Law, Technology & the Internet, Vol. 4, No. 4, 2012. Available at SSRN: https://ssrn.com/abstract=2325879 or http://dx.doi.org/10.2139/ssrn.2325879

[2] NHTSA - Automated Vehicles for Safety, https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety.

[3] Ethics Commission’s complete report on automated and connected driving, German Federal Ministry of Transport and Digital Infrastructure, Aug. 28, 2017, available at: https://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.html?nn=187598.

 
