During early April 2018, Mark Zuckerberg testified before a confused Congress about issues relating to the Facebook–Cambridge Analytica data scandal. After about two days of unfocused questioning by lawmakers, Facebook gained more than $25 billion in market value. The message to financial onlookers was clear: Facebook is immune to governmental regulation.
In America, there has been a growing number of public revelations and concerns about discrimination, bias, and privacy infringements against individuals by giant data-collecting technology companies and their machine learning algorithms. The once-fanciful warnings about the threat of surveillance capitalism to privacy rights and civil liberties, through intrusive, constant monitoring and the use of behavioral data to manipulate future behavior at scale, are increasingly coming true.
Unfortunately for U.S. citizens, lawmakers have made little progress in protecting their interests. On the subject of private and ethical AI, the U.S. government has been uninterested, lacking in expertise, and impotent to stand up to tech corporations.
American vis-à-vis Other National AI Policies
Multiple foreign governments have presented national AI policies and strategies that highlight their awareness of ethical concerns in AI and their commitment to developing safe and beneficial AI technologies:
- The UK’s strategy specifically "consider[s] the economic, ethical and social implications of advances in artificial intelligence" and recommends preparing for disruptions to the labor market, open data and data protection legislation, data portability, and data trusts. It notes that “large companies which have control over vast quantities of data must be prevented from becoming overly powerful.”
- France’s strategy similarly includes a focus on developing an ethical framework for "inclusive and diverse AI" and avoiding the “opaque privatization of AI or its potentially despotic usage.”
- India’s strategy highlights the importance of AI ethics, privacy, security, and transparency, as well as the current lack of regulations around privacy and security.
- Canada has a National Cyber Security Strategy for protecting Canadians’ digital privacy, security, and economy, as well as a commitment to collaborate with France on ethical AI.
- China has a National Standard on Personal Data Collection which addresses issues similar to those in the European Union’s General Data Protection Regulation (GDPR). The nation's "New Generation Artificial Intelligence Development Plan" underlines the need to “strengthen research and establish laws, regulations and ethical frameworks on legal, ethical, and social issues related to AI and protection of privacy and property.”
- China, Japan, and Korea have all recently revised their legislation on personal information protection, and France and Japan have formulated personal information protection rules for new industries such as cloud computing.
- The European Union Legal Affairs Committee recommends clarifying rules around "privacy by design and privacy by default, informed consent, and encryption," as well as the use of personal data.
Meanwhile, the U.S. has mostly concerned itself with the military aspects of AI policy, with the House Committee on Armed Services legislating a National Security Commission on Artificial Intelligence and the Department of Defense investing in military applications of AI. The U.S. has also sounded alarms over losing global dominance in AI technology, but its most concrete action to date has been a costly trade war aimed at curtailing China’s intellectual property theft and AI ambitions.
On the side of high-level national strategy, in 2016 the Obama administration created the U.S. Artificial Intelligence Research and Development Taskforce to provide direction for R&D, with an eye toward the safety and security of AI systems. The Trump administration has since disbanded that taskforce and dismissed the two reports on AI technology issued under the Obama administration. The first report discussed a host of issues core to establishing an ethical AI policy: applications of AI for public good; AI regulation, fairness, and safety; international cooperation; cybersecurity; and human considerations regarding the use of AI in weapon systems. The second discussed the effects of AI-driven automation on the U.S. job market and economy and recommended policy responses.
The Trump administration finally took steps toward a taskforce of its own on May 9, 2018, when the White House hosted a Summit on AI for American Industry and planned a Select Committee on AI. Unfortunately, the summary report makes no reference to the safety and rights of individuals, and prepared remarks by Deputy U.S. Chief Technology Officer Michael Kratsios emphasize a continued laissez-faire approach to corporate regulation:
Our Administration is not in the business of conquering imaginary beasts. We will not try to "solve" problems that don’t exist. To the greatest degree possible, we will allow scientists and technologists to freely develop their next great inventions right here in the United States. Command-control policies will never be able to keep up. Nor will we limit ourselves with international commitments rooted in fear of worst-case scenarios.
Perhaps more troubling, the Select Committee on AI is co-chaired by President Trump, the director of DARPA, and the director of the NSF. The first lacks a technical background but would have unilateral control over the entire committee. The second’s official capacity dictates that he approach issues from a military perspective. It is questionable how much influence the third would be able to have over the committee. The committee is technically also co-chaired by the director of the Office of Science and Technology Policy, but that post has remained vacant throughout the Trump administration. This setup is reflective of the general want of scientific expertise in the current government.
The development of a responsible national AI strategy requires competent, knowledgeable, and neutral authorities who understand new AI technologies and their implications for privacy, security, and other ethical concerns. Until the U.S. leadership gains such expertise, it will have difficulty legislating and implementing effective and beneficial AI policies.
The United States as a Corporatocracy
Another challenge the United States faces in exercising judicious public authority over tech is the particularly outsized political influence of its large corporations. These corporations play a deciding role in determining American society’s economic and political policies through campaign contributions, lobbying, access to and representation of the corporate elite amongst politicians, and the rights of corporate personhood. Today it is no longer even clear where the line between the public sector and the corporate sector lies. On one hand, the autonomy of private tech companies provides a check against government overreach. Corporations can resist military involvement, government censorship, and assistance with federal investigations, often with significant public support for doing so, as in the case of Google rejecting a Pentagon contract renewal and Apple refusing to unlock its iPhones in its encryption dispute with the FBI. On the other hand, the corporatocracy makes it difficult for the government to rein in its tech companies on behalf of its citizens, even if it were to become interested in and adept at AI policy.
Some believe that the private sector can self-regulate, hoping that the morality of tech leaders and fear of public disapproval will sufficiently deter unethical corporate behavior. A recent case of a tech giant asserting that it will follow higher ethical standards is Google’s publication of AI principles to guide its business. In the wake of outspoken backlash among its employees and in academia over Google’s participation in a military drone surveillance project, the company announced it would not pursue harmful AI applications such as weapons that cause injury to people and surveillance technologies "violating accepted norms." Silicon Valley corporations have also tried to take the public interest into their own hands by establishing non-profit and multistakeholder organizations. Nonprofit research company OpenAI was founded in 2015 and backed by Silicon Valley tech elite Elon Musk, Sam Altman, Reid Hoffman, Peter Thiel, and others to develop safe artificial general intelligence. The Partnership on AI is a consortium of over 50 companies and nonprofits founded in 2016 by Amazon, Facebook, Google, DeepMind, Microsoft, IBM, and Apple to establish practices on AI technologies and advance the public’s understanding of AI.
While these are laudable and important steps toward corporate responsibility, policies proposed by these organizations do not have the force of law. Likewise, ethical codes proclaimed by corporations are non-binding promises that can be modified at any time, as evidenced by Google’s recent reshuffling of its "Don’t Be Evil" motto in its corporate code of conduct, or simply not followed. Lack of neutrality in corporate-led initiatives is also an issue: in February, Elon Musk left OpenAI’s board due to a conflict of interest with his role at Tesla. Most importantly, there has been little material change to the existing practices of transgressive harvesting and use of user data. The poor regard for personal protection and rights in the current unregulated state of affairs shows that we cannot simply rely on the goodwill of tech companies. Indeed, the nature of corporations themselves may expose them to lawsuits if they fail to prioritize the interests of their shareholders over debatable moral concerns. We need a citizen-centric government to shepherd the ethical and fair use of technology.
Recommendations
Given the U.S.’s current lack of expertise and effectiveness in tech governance, an ideal strategy, one that would also serve it well in the long term, would be to cooperate with other countries to curtail the dangers of American corporatocracy and to make enforceable agreements in which all sides develop weaponizable AI technology in tandem, while banning any technology that is sufficiently dangerous or negative-sum. The U.S. can also piggyback off the expertise and articulated strategies of other nations, which may have more technocratic leadership, longer governmental terms, and/or policy-driven agendas better suited to the kind of cautious, long-term thinking needed to address the challenges of AI. Furthermore, the U.S. can make use of the leverage that other nations have over U.S. corporations. This has already happened inadvertently: in May 2018, Americans received a string of emails from the various platforms they use about new privacy updates. Flawed as the GDPR may be, it is telling that the first time in recent years that U.S. corporations have been forced to make non-perfunctory amends on behalf of individuals has been a consequence of foreign regulation. We can look to the European AI Alliance and the European Declaration of Cooperation on Artificial Intelligence as examples of multinational coordination on AI, as well as to the AI for Good Global Summit and the ISO group on AI, which already see representation from American industry and universities.
While we wait for lawmakers to take action, the rest of us should not forget that societal change is a collective responsibility. Researchers in academia and industry have a duty to conduct research on technologies to protect us all: data anonymization; algorithmic bias and fairness; and the privacy, security, and interpretability of machine learning systems. Technical, legal, and humanities scholars should develop and teach AI ethics and explore the intersection of AI, law, and policy. Users and employees can vote for socially responsible businesses with their dollars, time, and labor. Entrepreneurs, engineers, and tech designers could develop alternative user-aligned internet apps and platforms. Journalists, non-profit policy advocates, and legal representatives all have a part in pushing for policies that protect all Americans.
But ultimately, we do need the government to assume its rightful role in protecting personal privacy and rights in the AI era. The network effects that benefit incumbent tech monopolies present practically insurmountable barriers for users to switch to newer, more ethical platforms and services without government intervention. Myopia and conflicts of interest between private individuals and entities make it difficult to unite their separate well-intended efforts into comprehensive and far-ranging change. The government needs to step in, and soon, to use its resources and powers of legislation and coordination to provide the structure for industry and research to develop and utilize AI without compromising civil rights and liberties. What is at issue is an unprecedented assault on personal data and behavior; what is at stake is personal safety, privacy, dignity, autonomy, and democracy.
Citation
For attribution in academic contexts or books, please cite this work as
Melody Guan, "Regulating AI in the era of big tech", The Gradient, 2018.
BibTeX citation:
@article{GuanGradient2018,
author = {Guan, Melody},
title = {Regulating AI in the era of big tech},
journal = {The Gradient},
year = {2018},
howpublished = {\url{https://thegradient.pub/regulating-ai-in-the-era-of-big-tech/}},
}
Melody Guan is a PhD student in machine learning at Stanford University, focusing on ML privacy, security, interpretability, and fairness. Previously, she worked on deep reinforcement learning research at Google Brain and on financial trading at D. E. Shaw. Melody holds an MA in Statistics and a BA in Chemistry & Physics from Harvard.