Stanford's Human-Centered AI Launch Symposium


Will artificial intelligence replace or augment humanity?

On Monday, Stanford gave its answer by launching the new Institute for Human-Centered Artificial Intelligence (HAI). The opening event featured a star-studded speaker lineup that included industry titans Bill Gates, Reid Hoffman, Demis Hassabis, and Jeff Dean, as well as dozens of professors in fields as diverse as philosophy and neuroscience. Even California Governor Gavin Newsom made an appearance, giving the final keynote speech. Other luminaries, such as former Secretaries of State Henry Kissinger and George Shultz, former Yahoo CEO Marissa Mayer, and Instagram cofounder Mike Krieger, watched from the audience. This was a big deal.

Henry Kissinger in the audience, along with Bill Gates

Any AI initiative that government, academia, and industry all jointly support is good news for the future of our field. HAI differs from many other AI efforts in that its goal is not to create AI that rivals humans in intelligence, but rather to find ways for AI to augment human capabilities and enhance human productivity and quality of life. Think collaboration, not replacement.

If you missed the event, you can view a video recording here. Below, we highlight some of our most interesting takeaways from the launch's lightning talks and panel discussions.

Opening Remarks 10:36

Opening remarks from Stanford President Marc Tessier-Lavigne left no doubt about Stanford's dedication to ensuring AI's bright future. Notable quote: “By putting human-centered values in AI, we can bring about a new renaissance of thinking and learning.” HAI co-director Fei-Fei Li (of ImageNet fame) also emphasized that HAI's focus is as much on AI education and policy as on conducting world-class research.

Human-Inspired Intelligence

How Infants Learn 1:12:00

Stanford psychology professor Michael Frank argues that infants learn through social context and interaction, and that true intelligence cannot be attained by blindly reading language alone. DeepMind's approach of training agents through reinforcement learning, in which agents learn by interacting with their environment, aligns closely with this view.

Understanding The Human Brain 1:17:45

Stanford neuroscience professor Surya Ganguli argues that modern neural networks are highly simplified versions of the human brain, and that they can be substantially improved by taking inspiration from the first intelligent beings on Earth (us).

Stanford professor Percy Liang on the power of language

The Power of Language 1:24:09

Stanford computer science professor Percy Liang argues that language is a powerful way to represent knowledge: it is far easier to state the definition of a prime number than to show someone millions of numbers labeled as prime or not prime.
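To make this concrete, here is a minimal sketch (our illustration, not from the talk): the definition of primality translates directly into a few lines of code, whereas conveying the same concept purely through examples would require an enormous labeled dataset.

```python
def is_prime(n: int) -> bool:
    """An integer n > 1 is prime if no integer in [2, sqrt(n)] divides it."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# The definition, stated once, generalizes to every integer;
# a purely example-driven learner would need many labeled samples instead.
print([n for n in range(2, 20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```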

Demis Hassabis, cofounder and CEO of DeepMind

Industry vs Academia Panel 1:31:51

This panel was an interesting mix of industry veterans (Jeff Dean from Google and Demis Hassabis from DeepMind), who tend to be extremely optimistic about AI's potential, and academics (Chris Manning from Stanford and Alison Gopnik from UC Berkeley), who tend to focus more on addressing the limitations of current AI systems.

One interesting moment occurred when Manning asked the panel what differentiates humans from chimpanzees. Chimpanzees have strong visual systems, hands nearly as dexterous as a human's, and can display a broad range of emotions. Yet only Homo sapiens went on to dominate the world. Why? Language.

Language allows humans to band together and build institutions that transcend any given individual. It allows knowledge to be efficiently transmitted from generation to generation so that humanity as a whole grows stronger as time progresses. Yuval Harari made a similar argument in his seminal work Sapiens.

In response, Hassabis agreed, but argued that we need to figure out how to integrate deep learning with classical logic-based AI systems before AI can truly understand language.

Jeff Dean, legendary Google engineer

Bill Gates: Keynote 3:55:41

In this keynote, Gates fielded a wide variety of questions ranging from AI's potential societal benefits to the responsible development of AI. He repeatedly emphasized healthcare, highlighting applications such as detecting nutrient deficiencies in children in developing countries and analyzing microbiomes to aid drug discovery.

Bill Gates, cofounder of Microsoft and philanthropist at the Gates Foundation

Human and Societal Impact

Universal Basic Income 4:33:09

Stanford philosophy professor Juliana Bidadanure contends that universal basic income gained national attention amid growing concerns that AI will displace workers from their current jobs. She argues for universal basic income as a potential safeguard.

Economics of AI 4:39:22

Stanford economics professor Mark Duggan explains that economists view AI abstractly as a form of automation, which makes analyzing its effects tractable. He then illustrates AI's impact on labor markets, market dynamics, and income inequality.

The Rhetoric Surrounding AI 4:45:14

Stanford communications professor Jennifer Pan uses headlines such as "China is overtaking US as the leader in artificial intelligence" to illustrate how we tend to frame AI development as driven by monolithic entities. She then argues that these entities are in fact composed of individuals with differing goals and incentives, and that understanding the competitive dynamics among these individuals is crucial to understanding AI development.

The Intersection of Law and AI 4:51:09

Stanford law professors David Engstrom and Daniel Ho highlight potentially troublesome collisions between law and AI within government agencies. The crux of the problem is that government agencies promise clarity and transparency, yet deep learning systems are currently neither explainable nor interpretable.

Panel on Human and Societal Impact 4:57:26

This panel was composed of Susan Athey (Stanford), Kate Crawford (New York University), Tristan Harris (Center for Humane Technology), and Erik Brynjolfsson (MIT), and was moderated by James Manyika (McKinsey). The panel discussed topics including the economic impact of AI and its deployment; fairness, bias, and ethics; the difficulty of addressing these problems; and, ultimately, who should be held accountable.

On the economic front, the panel asked how to ensure that AI benefits all of society. Athey and Brynjolfsson discussed the need to adapt both society and infrastructure to advances in AI and other technologies. Athey in particular advocated for ongoing policy research to anticipate the consequences of AI. Crawford expressed concern that only a handful of companies and countries have the capacity to develop AI at scale. Jobs were also discussed at length, with particular attention to future job availability and the potential for re-skilling workers.

On ethics, the panel weighed in on fairness and bias. Crawford raised concerns about 'dirty data' (e.g., records produced by unconstitutional policing) being used to train predictive policing systems. Brynjolfsson and Athey chimed in on fairness and whether human values can effectively be encoded into algorithms. Crawford also pointed to the lack of a quantitative measure of fairness and the need for accountability and due process. Harris described his own difficulties in raising awareness of fairness and accountability issues within tech companies, as well as the difficulty of solving these problems. Athey agreed, pointing out that company metrics necessarily focus on the short term.

The panel closed with a discussion of where the responsibility for solving these problems lies, concluding that members of the community share a collective responsibility to guide the future of AI.

Student speaker Stephanie Tena-Meza

Augmenting Human Capabilities

Helping Humans Collaborate 6:21:28

Stanford human-computer interaction professor Michael Bernstein wants to use AI to shift the way organizations are created. Specifically, his research investigates giving feedback to existing teams to help them be more productive, determining optimal team switching to spread ideas, and coordinating on-demand teams of experts.

Personalized Education 6:27:21

Stanford computer science professor Emma Brunskill, who specializes in reinforcement learning, wants to use AI to transform personalized education: students can learn faster and more effectively with the help of cutting-edge algorithms.

AI/Computer-Assisted Healthcare Spaces 6:32:56

Stanford professor of biomedical data science Serena Yeung wants to endow healthcare spaces with ambient intelligence capable of reducing the burden on healthcare providers such as physicians and nurses.

Improving Human-Computer Interaction 6:39:23

Stanford robotics professor Dorsa Sadigh wants to develop algorithms that interact with humans safely and reliably, and argues that the starting point is building systems that can better model humans and their preferences.

Panel on Augmenting Human Capabilities 6:48:18

This panel was composed of Russ Altman (Stanford), Justine Cassell (Carnegie Mellon), Fernanda Viegas (Google), and Bob Zhang (Didi), four speakers with diverse backgrounds spanning from bioengineering to human-computer interaction.

The panelists collectively dissected the nature of AI-augmented human intelligence. Cassell first pointed out that 'augmentation' is a positive buzzword that often fails to capture the true nature of human-AI interactions. While situations where AI serves as a tool to amplify human capabilities can accurately be described as augmentation, social interactions between AI and humans might better be described as collaboration or even competition. She emphasized that humans and AI are often interdependent, making 'augmentation' a poor choice of words.

Altman also identified several potential problems with the term 'collaboration.' He noted that collaboration generally implies equality, but AI systems are designed to prioritize human happiness and welfare over the AI's own. Viegas added that insights from design thinking and human-computer interaction will be crucial for perfecting the process of augmenting human intelligence with AI.

California Governor Gavin Newsom gives the keynote on empathy

Final Keynote: Societal Divisions 7:45:36

The governor of California, Gavin Newsom, concluded the symposium by discussing the societal divisions that AI might create. He called for empathy, reminding the audience that even though AI and automation can tremendously benefit society, many citizens fear for their jobs. He then emphasized the importance of education in a world with increasingly powerful AI, expressing the goal of providing excellent, lifelong education for everyone. Finally, Governor Newsom expressed his gratitude to Stanford HAI for its goal of developing AI that augments, rather than replaces, humans.

All images were taken from Twitter, with the exception of the Gavin Newsom photo, which was taken from CNET.


Felix Wang is a sophomore at Stanford University with a background in mathematics research and computer science.

Steven Ban is a Harvard graduate who is currently a researcher at UC San Francisco, interested in neuroscience and AI.

Hugh Zhang is an editor at the Gradient and a researcher at the Stanford NLP Group, with interests in generative models and AI policy. Hear his thoughts on Twitter.


If you enjoyed this piece and want to hear more, subscribe to the Gradient and follow us on Twitter!