The Far-Reaching Impact of Dr. Timnit Gebru


Dr. Timnit Gebru's contributions range from circuit design at Apple to computer vision research at Stanford to global leadership in AI ethics

Few researchers make breakthrough contributions to even a single field.

Fewer still can claim breakthrough contributions to multiple fields. Dr. Timnit Gebru is one of those few. She has worked on fine-grained object recognition in computer vision; used large-scale image sets to gain sociological insight; conducted audits of biased facial recognition systems that have influenced real-world regulation; designed standards and processes to mitigate ethical issues with datasets and models; developed a framework of algorithmic audits for AI accountability; and more. Many of her papers have been cited hundreds of times.

Her impact goes far beyond her own research. She is one of the founders of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), one of the most prestigious and well-known conferences related to machine learning ethics. As co-founder of Black in AI, she helped increase the number of Black attendees at NeurIPS from just 6 in 2016 to 500 in 2017, a nearly 100-fold increase in a single year. After more than half of Black in AI speakers could not get visas to Canada for NeurIPS 2018, she successfully advocated and organized to have ICLR 2020 held in Ethiopia, which would have made it one of the first major AI conferences held on the African continent (the conference ultimately had to move online due to COVID-19).

While Gebru was already well known to academics working on computer vision, AI ethics, and fairness, a much broader audience has learned her name in the past week, after Google fired her from her role as a manager of its AI ethics research team, a move covered by outlets including the BBC, NBC, the Guardian, and the New York Times. As of the time of this post, 2,278 Googlers and 3,114 academic, industry, and civil society supporters have signed a letter protesting Google’s actions and supporting Gebru. While her termination has sparked crucial discussions regarding industry censorship of unfavorable research, racial discrimination in tech, corporate diversity efforts, and the failings of our current AI ethics framing, here I will focus primarily on Gebru’s research and contributions to machine learning.

Electrical Engineering at Apple and Computer Vision at Stanford

Prior to pursuing her Ph.D. at Stanford, Gebru worked as an engineer at Apple, designing circuits and signal processing algorithms for products including the first iPad. She earned her Ph.D. from Stanford under the guidance of the legendary Dr. Fei-Fei Li (senior author of the ImageNet paper, among many others), where she studied fine-grained object recognition, the use of fine-grained domain adaptation to overcome dataset shift, and ways of gaining sociological insight from large-scale, publicly available image sets. This work culminated in Gebru leading a project that analyzed 50 million images of street scenes gathered from Google Street View, classifying 22 million cars to make neighborhood-level predictions about income, voting patterns, race, and education. The work was published in the prestigious Proceedings of the National Academy of Sciences and was highlighted in the New York Times.

Algorithmic Audits and Real-World Impact

As a postdoctoral fellow at Microsoft Research, Gebru partnered with Joy Buolamwini on Gender Shades, a rigorous academic audit of commercial facial recognition software. They discovered that error rates for recognizing dark-skinned women were higher than for any other group. Rarely does an academic study have the profound real-world impact that Gender Shades has had. Citations of this landmark study include many letters, lawsuits, bans, proposed federal and state bills, and local laws about facial recognition, among them two federal bills (the Algorithmic Accountability Act and the No Biometric Barriers Act), state bills in New York and Massachusetts, and the city of Oakland’s ban. This impact was not accidental: Buolamwini, Gebru, and Inioluwa Deborah Raji (who worked with Buolamwini on the follow-up study) designed their studies with great care and overcame many challenges of algorithmic auditing, including biased benchmarks, lack of access to target algorithms, and hostile corporate reactions. Raji and Gebru have since developed a framework for algorithmic auditing as part of end-to-end AI system development.
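
To make the mechanics concrete, here is a minimal sketch, with entirely hypothetical data and illustrative column names (this is not the Gender Shades code or dataset), of the disaggregated evaluation at the heart of such an audit: instead of reporting one aggregate accuracy number, error rates are computed separately for each intersectional subgroup.

```python
import pandas as pd

# Hypothetical audit results: one row per test image, recording whether the
# commercial system classified it correctly, plus demographic annotations.
results = pd.DataFrame({
    "skin_type": ["darker", "darker", "lighter", "lighter", "darker", "lighter"],
    "gender":    ["female", "male",   "female",  "male",    "female", "male"],
    "correct":   [False,    True,     True,      True,      False,    True],
})

# Disaggregate by intersectional subgroup rather than averaging over everyone:
# a single aggregate score can hide large gaps between groups.
error_rates = (
    results
    .groupby(["skin_type", "gender"])["correct"]
    .apply(lambda s: 1.0 - s.mean())
    .rename("error_rate")
)
print(error_rates)
```

Run on real audit data, a table like this makes disparities immediately visible in a way that a single headline accuracy figure never can.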

Gebru has been at the forefront of pushing both the AI community and general audiences to consider ethical issues beyond narrow, technical definitions of fairness. In a 2019 New York Times interview, Gebru said, “A lot of times, people are talking about bias in the sense of equalizing performance across groups. They’re not thinking about the underlying foundation, whether a task should exist in the first place, who creates it, who will deploy it on which population, who owns the data, and how is it used? For me it’s not as simple as creating a more diverse data set and things are fixed. That’s just one component of the equation.” In a 2020 New York Times interview, Gebru highlighted how, due to an imbalance of power, “even perfect facial recognition can be misused”, noting as an example that “Baltimore police during the Freddie Gray protests used facial recognition to identify protesters by linking images to social media profiles.”

Dataset Collection, Use, and Standardization

In 2018, Gebru combined her extensive electrical engineering background with her expertise on the role of datasets in machine learning to write Datasheets for Datasets, one of my all-time favorite papers and one that I include on the syllabus for my data ethics course. Electrical components, such as circuits and resistors, are always accompanied by a datasheet specifying how and where they were manufactured and under what conditions it is safe to use them. Gebru proposed a similar practice for datasets: recording how a dataset was created, the contexts in which it is appropriate to use, potential biases or ethical issues, what work is needed to maintain it, and more. This proposal accounts for the fact that all data has context and that there is no perfectly objective ground truth. Datasheets for Datasets also contains historical case studies of how standardization and regulation came to three industries (electronics, car safety, and pharmaceutical drugs), with an eye towards what we might learn as we consider standardization and regulation around machine learning.
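
One way to picture the proposal: a datasheet is structured metadata that travels with the dataset. The sketch below is purely illustrative; the field names loosely paraphrase the paper’s question categories and are not an official schema, and the example dataset is invented.

```python
from dataclasses import dataclass, field

# Purely illustrative: field names loosely paraphrase the question categories
# in Datasheets for Datasets; this is not an official schema from the paper.
@dataclass
class Datasheet:
    name: str
    motivation: str                # Why was the dataset created, and by whom?
    composition: str               # What do instances represent? Any sensitive attributes?
    collection_process: str        # How, when, and from whom was the data gathered?
    recommended_uses: list = field(default_factory=list)
    out_of_scope_uses: list = field(default_factory=list)
    known_biases: list = field(default_factory=list)
    maintenance: str = "unmaintained"  # Who maintains it? How are errata handled?

# A hypothetical filled-in datasheet for an imaginary dataset.
sheet = Datasheet(
    name="street-scenes-sample",
    motivation="Study neighborhood demographics from street imagery.",
    composition="Images of public streets; may incidentally include people.",
    collection_process="Scraped from a public mapping service, 2016-2017.",
    recommended_uses=["Vehicle recognition research"],
    out_of_scope_uses=["Identifying individuals"],
    known_biases=["Coverage skews toward dense urban areas"],
)
```

The point is not the exact fields but that the answers get written down before the dataset circulates, so downstream users inherit the context along with the data.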

Together with Dr. Margaret Mitchell and a team of other researchers, Gebru built on Datasheets for Datasets with Model Cards for Model Reporting, a way to organize the essential facts of a machine learning model in a structured form, similar to nutrition labels for food. The work was released both as an academic paper and as a prototype by Google, advocating for shared standards. Model cards clarify a model’s intended uses, its limitations, details of its performance evaluation (including checks for bias), and more.

[Image: one of Google's sample Model Cards, for a facial recognition model]
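
As a rough sketch, not Google’s actual model card format or any official API, a minimal model card might carry sections like the following, with the disaggregated metrics echoing the Gender Shades approach; every name and number here is invented for illustration.

```python
# A hypothetical model card as plain data. Section names paraphrase those
# proposed in Model Cards for Model Reporting; all values are invented.
model_card = {
    "model_details": {"name": "smile-detector", "version": "0.1"},
    "intended_use": "Suggesting candid photos in a consumer photo app.",
    "out_of_scope_uses": ["Surveillance", "Inferring emotions for hiring"],
    "factors": ["skin type", "gender", "age group"],  # axes for disaggregated evaluation
    "metrics": {
        # Report performance per subgroup, not just one aggregate number.
        "false_positive_rate": {
            "darker_female": 0.040,
            "darker_male": 0.020,
            "lighter_female": 0.015,
            "lighter_male": 0.010,
        },
    },
    "ethical_considerations": "Training images scraped from the web; demographic skew likely.",
}
```

Whatever the concrete format, the structure forces the same questions a datasheet asks of data to be asked of the trained model before it ships.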

In Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning, Eun Seo Jo and Gebru analyzed how data collection in machine learning often emphasizes size and efficiency, amassing data indiscriminately without interrogating its origin, motivation, platform, or potential impact. They offered lessons machine learning can draw from the library sciences on more thoughtful data collection, including the importance of full-time curators responsible for weighing risks and benefits, codes of ethics with frameworks for enforcement, and standardized documentation.

It is worth noting the positive, constructive, and practical nature of Gebru’s work (including her research on end-to-end algorithmic audits, datasheets, model cards, and lessons from archives). She often focuses on practical tools and processes that help those working in industry do more ethical work.

Black in AI and FAccT Conference

Gebru was also one of seven researchers who created and launched the stand-alone ACM Conference on Fairness, Accountability, and Transparency (FAccT), now one of the most prestigious and well-known conferences related to ethics in computer science. Its predecessor, the FATML (Fairness, Accountability, and Transparency in Machine Learning) workshop, had been held yearly at NeurIPS beginning in 2014, drawing more participants each year until it outgrew the workshop format. Having a well-respected conference to serve as a home for research on these topics is crucial: researchers need a venue where the work counts towards tenure decisions and research requirements, and a dedicated conference disseminates the work to a broader audience. Creating and organizing a new conference requires careful thought, weighty decisions, and a huge amount of effort, yet is typically not valued in career advancement. The entire team of FAccT founders, including Gebru, did a great job with this and helped change the field of machine learning in the process.

Gebru has worked hard to address the diversity crisis in AI. She describes her experience at NeurIPS 2016, after seeing only 6 Black attendees out of 8,500, as "literally panicking. This field was growing exponentially, hitting the mainstream; it’s affecting every part of society. It is an emergency, and we have to do something about it now." In response, she co-founded Black in AI, and just one year later, over 500 Black machine learning researchers participated in the Black in AI workshop at NeurIPS 2017. As Lyne Tchapmi wrote, “This is almost a 100X improvement on the FIRST TRY. In all my career as an AI researcher, I’ve yet to read a paper or research methodology with this level of drastic impact. And this is in part why Dr. Gebru is to me the most impactful AI researcher I know.” Black in AI has also provided mentoring to over 400 Black applicants to graduate programs since 2017; guidance, mentorship, and resources for current graduate students; and a postgraduate application network (with 100 soon-to-be or recent Black Ph.D. graduates currently participating).

Gebru anticipated that African machine learning researchers would face visa issues when planning to attend NeurIPS 2018 in Canada, so she and a team of other volunteers began working on the problem more than five months in advance: making phone calls, filling out applications, asking other leaders to intercede with the Canadian government, booking flights, and more. Despite these efforts, half of all Black in AI speakers were denied visas to NeurIPS 2018. Partly in response, Gebru successfully advocated for ICLR 2020 to take place in Ethiopia. This was an unprecedented move for a major machine learning conference, as most have a Western-centric bias that puts a disproportionate burden of cost and travel on those outside the West, and prohibitive visa restrictions prevent many Africans from attending at all.

Stand with Timnit

Many members of Gebru’s former team at Google Brain have referred to her as an inspiring leader, fantastic and caring, and at least four have called her the best manager they’ve ever had. Gebru’s own manager, Samy Bengio, a Google director overseeing 300 research scientists on the Brain team and one of the initial developers of the Torch software library, wrote that, having been kept in the dark about her firing, he was “stunned by what had happened” and said, “I have always been and will remain a strong supporter of her scientific work… She taught me a lot and still is. I stand by you, Timnit.” I hope that we can all stand with Dr. Timnit Gebru now. Please read her papers, quote her, cite her, include her work on your syllabus if you teach, organize for collective action if you work at Google, and sign this letter of support.

Author Bio
Dr. Rachel Thomas is director of the Center for Applied Data Ethics at the University of San Francisco and co-founder of fast.ai, where she helped create the most popular free online course on deep learning. Rachel earned her Ph.D. in mathematics at Duke University and previously worked as a data scientist and software engineer. She was selected by Forbes as one of 20 Incredible Women in AI.

Acknowledgments
Thanks to Jeremy Howard, Andrey Kurenkov, Jessica Dai, and Hugh Zhang for comments and suggestions on this piece.
