Artificial Intelligence and the Future of Demos

Artificial Intelligence (AI) has an increasing say in the range of opportunities we are offered in life. Artificial neural networks might be used to decide whether you will get a loan, an apartment, or your next job based on datasets collected from around the globe. Generative adversarial networks (GANs) are used to produce realistic-looking but fake content online that can affect our political opinion formation and electoral freedom. In some cases, our only point of contact with a service provider is an AI system that collects and analyzes customer input and provides solutions using natural language processing.

In the context of Western democracies, these tools raise both hopes and concerns. On the one hand, AI technologies have been shown to help include more people in collective decision-making and potentially reduce the cognitive bias that occurs when humans make decisions, leading to fairer outcomes. On the other hand, studies indicate that certain AI technologies can lead to biased decisions and decrease human autonomy in a way that threatens our fundamental human rights.

While recognizing individual cases where rights and freedoms are being violated, we can easily overlook rapid and in some cases alarming changes in the big picture: people seem to have ever less control over their own lives and the decisions that affect them. This concern has been raised by several authors and academics, such as James Muldoon in Platform Socialism, Shoshana Zuboff in The Age of Surveillance Capitalism and Mark Coeckelbergh in The Political Philosophy of AI.

Control over one’s life and collective decision-making are both essential building blocks of the fundamental structure of most Western societies: democracy.

Whereas some attempts have already been made to better understand the relationship between AI and democracy (see, e.g., Nemitz 2018, Manheim & Kaplan 2019, and Mark Coeckelbergh’s above-mentioned book), the discussion remains limited. Authors addressing the relationship between AI and democracy rarely specify which element of democratic governance AI affects. Out of a vast range of ideals, what kind of democracy is being discussed? If, say, the current direction of AI development threatens the freedom of public discourse essential to deliberative democracy, does this mean that a minimalist democracy based on competitive elections could still thrive?

As Lucy Bernholz, Hélène Landemore and Rob Reich put it, democratic theorists have so far remained relatively silent about digital technology and engineering sciences have barely touched democratic theory, although cooperation between the two fields is greatly needed. Here, I aim to help fill this gap by seeking a deeper understanding of the intersection of AI and democracy.

One way to examine the relationship between AI and democracy is to turn the attention towards the very basic unit common to all forms of democracy: the demos.

In what follows, I discuss the potential impacts of the ongoing direction of AI development on the people – the demos – through its potential and already emerging implications for equality, autonomy, and the traditionally nation-based concept of demos. Finally, I suggest steps that could be taken to mitigate the risk of harm and to steer development towards human-centric, democratic artificial intelligence that serves the people and preserves our values – not the other way around.

Demos as the basic unit of democracy

What are we talking about when we talk about demos? The word democracy is derived from ancient Greek demos, meaning the people, and kratos, meaning power. Even though contemporary democracies differ from one another and a wide range of democratic ideals coexist, the idea of the rule by the people remains at the core of every form of democratic governance.

The question of who belongs to “the people” has, however, changed over time, and might again in the age of AI.

In one of the claimed birthplaces of democracy, Ancient Athens, demos covered all Athenian citizens, who had an equal say in collective decision-making. Yet, their concept of citizenship was highly exclusive. As Robert A. Dahl, for instance, explains in Democracy and its Critics, only adult males with fully Athenian ancestry (excluding slaves) were entitled to citizenship.

This left out all the women and people with an immigrant background, regardless of whether they themselves were born in Athens or contributed to its development their entire lives. Hence, the Greek demos consisted of a relatively small percentage of those affected by the decisions made in the democratic process.

Today, democracies have adopted a more inclusive understanding of demos. To begin with, belonging to the people can be based on official citizenship or nationality. Alternatively, it can be based on identity. Taking the European Union (EU) as an example, according to the latest Eurobarometer, approximately 7 out of 10 people in the EU on average feel that they are citizens of the EU. This means that around 3 out of 10 are officially EU citizens but do not identify as such. The size of this gap varies between EU countries.

Image Source: Standard Eurobarometer 96 Infographics: https://europa.eu/eurobarometer/surveys/detail/2553.

The gap between official citizenship and identity-based citizenship can have a corrosive effect on democracy, because the lack of a common identity discourages political participation and erodes the legitimacy of collective decision-making. Why would I bother participating if my voice is not heard? Why would I comply with rules when the voice of my people is not heard in the process of setting them?

Even if each citizen has the legal right – and, according to some theories, the responsibility – to participate in collective decision-making as a member of the demos, some can feel alienated and thus step aside, which leads to weaker political participation.

The identity-based conception of demos is also one of the cornerstones of populist ideology. As Jan-Werner Müller writes in Democracy Rules, populists thrive on the idea that there is a ‘real’ people – demos – that they rightly represent, consequently implying that others’ understanding of the people is not quite as real, or that perhaps those others do not belong to “the people” in the first place.

Few populists oppose the idea of democracy itself; rather, they insist on more direct forms of participation than representative government offers. Populist politics poses a problem for democracy when it aims to exclude certain groups of people from the democratic rights attributed to citizenship and to restrict their liberties, which threatens the core values attached to liberal democracy – most importantly, equality and freedom.

For populists, the ‘real people’ are those entitled to the rights attributed to citizenship, whereas the others should preferably go back to their own people. And only the real people – the demos – can recognize the ‘real’ from the ‘not-so-real.’

In essence, if you are not part of the demos, you have no say in collective decision-making. And this is where AI comes into play in our modern democracies.

AI shaping the future of demos

The emergence of AI technologies has inspired many advocates of democracy to seek solutions to contemporary challenges – such as lack of participation and interest in politics – in new AI tools.

For example, König and Wenzelburger (2020) present a scenario in which AI could be used to help citizens manage information overload through “algorithmically enhanced navigation of political information.” That would make participating in politics feel less complicated, as the information available would be easier to absorb. They also suggest that AI tools that enable the analysis of large datasets could help politicians make better-informed, citizen-led decisions and open up new opportunities for better, timely public services.

What is more, authors such as Hélène Landemore (2021) and Dirk Helbing (2021) propose that AI tools, such as natural language processing, could be used to facilitate online deliberation and enable direct participation in collective decision-making. According to Cavaliere and Romeo (2022), AI could even help with strengthening democratic legitimacy, if used properly.

For such purposes, platforms are already emerging. For instance, Pol.is is an open-source platform for collecting and analyzing opinions from large crowds. The Computational Democracy Project behind the platform promises on GitHub that they “bring data science to deliberative democracy, so that governance may better reflect the multidimensionality of the public’s will.” Initiatives like Pol.is could offer approachable tools for governments and other institutions to better engage citizens, strengthening the role of the demos in democratic decision-making.
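To make the idea more concrete, below is a minimal, hypothetical sketch of the kind of opinion-space analysis such platforms perform: participants vote on short statements, the resulting vote matrix is reduced to a low-dimensional space, and participants are grouped into opinion clusters. This is only an illustration of the general technique (dimensionality reduction plus clustering), not Pol.is’s actual pipeline; the data, cluster count, and library choices (NumPy, scikit-learn) are assumptions.

```python
# Illustrative sketch (not Pol.is's actual pipeline): cluster participants
# into opinion groups based on an agree/disagree/pass vote matrix.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=42)

# Hypothetical data: 200 participants voting on 30 statements
# (+1 = agree, -1 = disagree, 0 = pass/unseen).
votes = rng.choice([-1, 0, 1], size=(200, 30))

# Reduce the vote matrix to two dimensions so opinion groups become visible.
coords = PCA(n_components=2).fit_transform(votes)

# Group participants into a handful of opinion clusters.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(coords)

# For each cluster, list the statements its members agree with most strongly --
# a crude proxy for "points of consensus" within an opinion group.
for group in range(4):
    members = votes[kmeans.labels_ == group]
    top_statements = np.argsort(members.mean(axis=0))[::-1][:3]
    print(f"Opinion group {group}: {len(members)} participants, "
          f"strongest agreement on statements {top_statements.tolist()}")
```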

Nevertheless, few tools have been applied in practice, and the results of the pilot programs are as yet inconclusive. For example, the Finnish Innovation Fund Sitra piloted an AI solution for increasing participation at the municipal level via automated phone calls, using natural language processing methods to analyze the citizen input. The results of the pilots showed that automation could help with scaling municipal participation, but several challenges, such as reaching a diverse demography and flaws in the language processing algorithms, still need to be addressed.

In Estonia, AI has been used in the Estonian Unemployment Insurance Fund (EUIF) to help counsellors connect job seekers with services suited to their situation. According to a news item published by the company that developed the tool, “[u]sing the trained model and 60 different attributes and indicators, each unemployed person is evaluated, and their chances of finding a new job is calculated" (emphasis added). The tool uses attributes such as “education, previous job experience, right to benefits, health restrictions, and about the labor market” to calculate probabilities. In doing so, the tool has the potential to make public services more efficient, timely, and accurate, enabling citizens to better exercise their democratic rights. Its potential effects on democracy have not, however, yet been academically studied or analyzed.
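The EUIF’s actual model and attributes are not public, so the following is only a schematic sketch of how such a probability score could in principle be produced: a model trained on past outcomes assigns each new job seeker an estimated probability of finding work. The feature names, data, and the use of logistic regression here are illustrative assumptions, not the EUIF’s implementation.

```python
# Schematic sketch of a job-placement probability score; feature names and
# data are hypothetical, not the EUIF's actual attributes or model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a past job seeker, the label is
# whether they found a job within, say, six months.
X_train = rng.normal(size=(1000, 4))   # e.g. education level, years of
                                       # experience, months unemployed, age
y_train = (X_train @ np.array([0.8, 0.6, -0.7, -0.2]) +
           rng.normal(scale=0.5, size=1000)) > 0

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Scoring a new applicant yields a probability used to prioritize services.
new_applicant = np.array([[1.2, 0.4, -0.5, 0.1]])
prob = model.predict_proba(new_applicant)[0, 1]
print(f"Estimated probability of finding a new job: {prob:.2f}")
```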

Providing scalable tools for participation, aid in recognizing false information, and better public services all seem to have potential for strengthening democracies and the role of the demos: with tools that recognize hate speech and fake news, our future demoi could be more inclusive. Better-informed decisions could ensure that no one is excluded from the citizenship-based demos. More efficient opportunities for participation and detection of hate speech would empower people of all backgrounds to be equally included in collective decision-making and also strengthen the identity-based demos.

Even so, none of these opportunities come without challenges.

Democracy is in trouble when AI technologies that deeply affect human lives are not aligned with democratic principles and values – including an inclusive demos consisting of free people – even when they are used to support democracy. These issues have been brought forth by several academics, such as Alnemr (2020), who argues that today’s algorithms are undemocratic and problematic because they are programmed by someone other than expert deliberation facilitators and might consequently not be compatible with democratic principles. Similarly, in their above-mentioned article, König and Wenzelburger also discuss a negative scenario in which the use of AI could lead to biased opinion formation, technocratic decision-making and accountability issues.

AI is thus hardly either good or evil, friend or foe. Such a dichotomous perspective can prevent us from reaching for opportunities that could solve ongoing difficulties faced by democratic governments. Instead of imagining AI development as a linear phenomenon with two opposite ends – disaster and triumph – we should picture complex entities in which helpful and harmful features can coexist within the same tools and processes. The same tool could both increase political participation and bias opinion formation. Thus, if not addressed, the harms could render otherwise helpful tools useless or, even worse, counterproductive.

Discussing all these aspects at once would require far more of your time and patience, dear readers, so in this article I concentrate on the most pressing questions related to the future of demos – questions that could prevent us from making use of the numerous opportunities AI could hold for democracy.

Let us look at how certain uses of AI-based technologies could distort our understanding of the modern demos by undermining equality, freedom and the traditionally nation-based concept of the people.

Harmful bias and discrimination

First, algorithmic bias can lead to discrimination against minorities and disadvantaged groups, which is at odds with equality – a core principle shared by most democratic theories. Although this phenomenon is being thoroughly researched, the proxy problem remains a stubborn challenge in AI-assisted decision-making: even if demographic indicators such as gender, race, or age are deleted from a dataset, redundant encodings – proxies that indirectly reflect the sensitive attributes – can still lead to harmful bias.
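A toy example helps illustrate why simply deleting the sensitive attribute is not enough. In the hypothetical sketch below, a model is trained without the protected attribute, yet a correlated proxy variable (a synthetic “postal code”) carries the same signal, so outcomes still diverge between groups. All data and variable names are invented for illustration.

```python
# Toy illustration of the proxy problem: the protected attribute is dropped,
# but a correlated proxy (here, a synthetic "postal code") still leaks it,
# so group-level outcomes diverge. Data and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, size=n)                    # protected attribute (0/1)
postal_code = group + rng.normal(scale=0.3, size=n)   # proxy correlated with group
income = rng.normal(loc=3.0, size=n)

# Historical labels encode past discrimination against group 1.
label = (income - 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

# Train WITHOUT the protected attribute -- only income and the proxy.
X = np.column_stack([income, postal_code])
clf = LogisticRegression().fit(X, label)
pred = clf.predict(X)

# Approval rates still differ by group, because the proxy carries the signal.
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"Group {g}: predicted approval rate = {rate:.2f}")
```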

Harmfully biased outcomes affect both the citizenship-based and the identity-based conceptions of demos. In the case of AI-assisted immigration decisions (for example, in the Canadian immigration office), bias can lead to systematic exclusion of groups of people from enjoying the democratic rights of residents, weakening their legal status on unfair grounds and excluding them from citizenship-based demos. In other areas of life, such as recruitment, loan decisions or housing applications, systematic discrimination can further lead to a weaker sense of belonging to the demos – exclusion from the identity-based demos.

What complicates the situation is our tendency towards automation bias. Recent research by Yochanan Bigman et al. shows that discrimination by algorithms causes less moral outrage than discrimination by humans, even when the consequences are just as severe. They also showed that organizations in which discrimination by AI occurs tend to be held accountable less often. Therefore, harm from AI might become part of established societal structures, such as job markets or housing, without our even noticing before it is too late.

“Running a poorly designed algorithm on a faster computer doesn’t make the algorithm better; it just means you get the wrong answer more quickly.” Stuart Russell, Human Compatible, p. 37.

Harmful discrimination by AI most often happens due to incompetence in mitigating bias, which has prompted numerous sets of AI ethics guidelines, codes of conduct and research efforts on the part of governmental organizations, NGOs, academics and private companies (for an extensive review, see, e.g., Jobin et al. 2019). How these principles could be operationalized in practice is, however, still an ongoing discussion that we will return to later in this article.
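To give one concrete example of what operationalization can look like, ethics principles are sometimes translated into measurable checks run on a model’s outputs. The sketch below computes a disparate impact ratio (the so-called 80 percent rule) over hypothetical loan decisions; the metric choice, threshold, and data are illustrative, and real audits typically combine several such metrics.

```python
# Minimal sketch of one operationalized fairness check: the "80 percent rule"
# (disparate impact ratio) computed over a model's decisions. The threshold
# and metric choice are illustrative; real audits combine several metrics.
import numpy as np

def disparate_impact_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-decision rates between the two groups
    (1.0 = parity; values below 0.8 are often flagged as adverse impact)."""
    rate_0 = decisions[group == 0].mean()
    rate_1 = decisions[group == 1].mean()
    return min(rate_0, rate_1) / max(rate_0, rate_1)

# Hypothetical model decisions (1 = loan approved) for two groups.
rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=1000)
decisions = (rng.random(1000) < np.where(group == 0, 0.6, 0.4)).astype(int)

ratio = disparate_impact_ratio(decisions, group)
print(f"Disparate impact ratio: {ratio:.2f}"
      + (" -- review recommended" if ratio < 0.8 else ""))
```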

What is often overlooked in peaceful democracies is the risk of deliberate exclusion following shifts in power relations. Both citizenship-based and identity-based demoi could be tampered with by either endogenous or exogenous political forces seeking power. The opaqueness of complex AI systems – reinforced by automation bias – makes questioning AI-assisted decisions nearly impossible, which can easily be exploited, making AI an especially ugly tool for such action.

At the time of writing, AI is already being used in ways that exclude people from the demos, for example through deepfakes and content-spreading AI bots. The dis- and misinformation spread online can encourage racist or otherwise unfair discrimination, which can lead to exclusion from the identity-based demos. For example, fake TikTok accounts have been created to spread deepfakes aimed at undermining minorities.

Political competition and a plurality of opinions are essential parts of democracy, as is most clearly highlighted in perspectives of radical democracy, such as those of Chantal Mouffe (in, e.g., The Democratic Paradox) and Jacques Rancière (e.g., Hatred of Democracy). Yet, if political debate is based on the denial of someone else’s fundamental rights, which opposes the very principles of equality and freedom, can we still talk about strengthening democracy?

Freedom and human autonomy

Rule by the people requires that people have real opportunities to exercise their power: they need to be considered free, autonomous individuals.

Deepfakes and online bots can, however, be used to steer human decision-making and opinion formation. One seemingly harmless way to do so is to influence our everyday decisions through nudging. As Coeckelbergh demonstrates, this hidden activity violates human autonomy by steering human decision-making:

“[W]hile this is not a threat to negative freedom since no one is forced to do something or to decide something, nudging by AI is a threat to positive freedom. By working on people’s subconscious psychology, it manipulates them without respecting them as rational persons who wish to set their own goals and make their own choices.” Mark Coeckelbergh, Political Philosophy of AI, p. 18.

In the case described by Coeckelbergh, collective decision-making is not made by rational, free people belonging to demos, as it should be according to most democratic theories and democratic constitutions. In a society where people are constantly nudged with AI, the power is transferred to the nudging organization, often a private company or a public authority.

When manipulation is brought into the context of politics, the societal harm becomes even more pronounced. A study by Robert M. Bond et al. showed as early as 2012 how social media content directly influenced the political behavior of millions of people. Several similar observations have been made regarding the 2016 presidential election in the USA. In addition, Kilovaty (2019) shows how online manipulation poses “a considerable and immediate danger to autonomy, privacy, and democracy.” When the social media content we consume is curated by machine learning algorithms, it is the agent managing the algorithms who decides which ideologies we are exposed to.

What is more, as deepfake technologies and technologies for autonomous content creation develop, it becomes harder to distinguish fake from real. Thus, the manipulative potential of such technologies increases, which has been seen as a threat to democracy by several academics, such as Cristian Vaccari & Andrew Chadwick and Bobby Chesney & Danielle Citron.

Original Photo: Daria Shevtsova / Pixabay, edited by author

In democracies, it is the demos that should hold the topmost power over collective decision-making. Although this right is exercised differently in different democratic theories, it is always based on a demos consisting of free people. Similarly, the people should have power over setting the rules of the decision-making process – setting national constitutions being perhaps the most illustrative example.

Consequently, the above-described nudging and manipulation reinforced with AI technology seems to threaten freedom and human autonomy, and thus has potential to erode Western democracy.

Disappearance of nation-based demos

AI also challenges the current geographical definition of demos. Democracy is designed for geographically limited entities, such as nation-states or collectives thereof. AI, on the other hand, is not a national phenomenon, nor are the value chains and networks of the data economy it fuels. The tech giants governing the development of AI technologies that run the data economy do not consider national demoi relevant when thinking about their markets and expansion.

Therefore, it might be that the current AI-fueled data economy forces us to reconsider the scope of the basic unit of collective decision-making. If decisions that affect the lives of people in Sweden are made in the USA, should the Swedish people have a say in those decisions?

These sorts of decisions are already being made as regulatory bodies’ treatment of tech giants changes. If we have an identity-based demos of citizens of the globe affected by global AI technologies, what kinds of societal structures would fit this perspective? What is the institution that the Swedes of the previous example can turn to in order to control the use of their personal data?

Even if we tried to create an international democracy and demos, we might never succeed. The ability of international organizations to be democratic has been questioned by, e.g., Dahl (1999), due to their high level of representation and resulting alienation from the people they are supposed to represent. On the other hand, Lopes & Casarões (2019) present another interpretation, according to which international organizations could be considered democratic by thinking about them as global polyarchies.

In the context of global AI technologies, things seem ever more complicated. Big Tech companies do not seem to represent the people affected by AI, as they are more beholden to their shareholders. There is no global citizenship-based demos to be represented. And unless the users of, say, Google’s products one day identify themselves as one collective, no identity-based demos exists either.

Hence, if the AI-fueled concentration of collective decision-making power erodes the nation-based concept of demos and we are not able to provide a redefinition, the very foundation of democracy – the rule by the people – could be challenged.


The above-mentioned aspects of the current direction of AI development could change democratic societies based on the rule by the people, and not necessarily in a positive direction. Next, we will look into possible ways to prevent the harms related to inequality, loss of freedom, and the disappearance of the nation-based demos.

Re-empowering demos in the age of AI

If AI development takes a direction that undermines equality, freedom and the nation-based concept of demos – thereby depriving us of the opportunities that could strengthen the demos – we could be heading towards an uncontrolled erosion of Western democracy.

Luckily, the game is far from over. Many of the most severe threats, such as large-scale manipulation by deepfakes and discriminatory AI decisions, have not been fully realized and might never be. In fact, democracies have proven themselves rather sturdy in times of uncertainty. In Democracy Rules, Müller, for example, considers uncertainty an essential building block of modern democracy, without which democracy cannot survive.

“[O]n a very basic level democracy makes no sense without the possibility of people at least sometimes changing their minds, and that includes changing their minds about democracy and how it’s realized through particular rules at any given point.” Jan-Werner Müller, Democracy Rules, p. 73.

AI is only a tool and we are the users, which means that we can still align AI development with values and structures we are not willing to compromise.

Recent propositions for strengthening democracy include the concept of Open Democracy by Hélène Landemore. She proposes an alternative to today’s representative democracy, replacing representative structures with a scalable digital society for collective decision-making in which “online deliberative platforms [are] facilitated and aided by natural-language analysis performed by artificial intelligence algorithms.”

This would in principle empower the demos in terms of governmental decision-making, but as long as the power over AI development and the platform economy resides in the hands of Big Tech, it would hardly resolve the problems of loss of human autonomy and erosion of the nation-based concept of demos.

Muldoon addresses the capitalist ideology coexisting with democracy and suggests the concept of platform socialism. For Muldoon, platform socialism is a form of governance that re-empowers citizens to take control over the digital platforms and infrastructure that have become an essential part of what is considered a decent standard of living in the 21st century. Platform socialism is based on collectively owned associations that govern the platforms, giving the topmost power to the people and directing the benefits back to the people, the demos.

These suggestions are bold and could work if, and only if, we find robust ways to develop ethical, societally sustainable algorithms that prevent the above-discussed threats from being realized. The algorithms would need to serve the demoi, and the potential changes they cause to democratic governance would need to be controlled.

Furthermore, both Landemore’s and Muldoon’s suggestions would require fundamental changes to today’s societal and economic structures. Although reinventing society and executing the necessary changes is not impossible, it is an endeavour that requires so much time and effort that the harmful structures established by the current direction of AI development will probably take root before these changes can be made, making any fundamental change even harder to implement.

Where should we start in the current situation, then?

1) Setting a common goal

As humans, we need to set a clear common goal: building AI technologies in a way that serves the people. AI is our tool, not the other way around, so we should align it with the values and structures we want to preserve.

The values and principles of today might not resemble those of the very first democracies, or even those defined by the current democratic constitutions. As Jan-Werner Müller says, democracies rely on the possibility of changing our minds and re-defining our societies. By using this opportunity, we can take the first step towards avoiding the pitfalls discussed in this article.

Skipping the goal-setting and jumping straight into figuring out the action points might help treat the acute symptoms of a flawed system, such as securing justice for someone who has been denied a loan by an AI algorithm on racist grounds, but it would not change the decision-making algorithm itself.

Setting efficiency, functionality or optimization as the main goal of AI development might produce astonishing new tools, but for what purpose? What exactly are we optimizing, and what would we want to optimize?

If we want to preserve democracy and/or a demos based on equality and freedom, we could start by asking ourselves: Is our future demos nation-state-based or global, and how could we align AI development with this ideal? How do we ensure that the demos is inclusive? Is there perhaps a gap between the identity-based and citizenship-based demos that is aggravated by AI algorithms and prevents us from preserving our common values?

With the preservation of common values as our main goal, we are ready to take the next step.

2) Multidisciplinary deliberation and action when inventing future societies

Due to the multidimensionality of AI technologies, we cannot strictly separate democracy, market economy, and technological innovation from one another when pursuing the common goal. Instead, I argue that these should be seen as different functions under the umbrella of democracy. If we do not accept authoritarian governance by the state, why would we discard our democratic principles and accept authoritarian rule by Big Tech?

To invent structures that preserve our common values, technologists, engineers, democratic theorists, ethicists, and the people themselves can no longer discuss these developments in their separate forums. Today, stronger democratic structures and the empowerment of the demos are possible with the use of scalable AI technologies.

Different functions of society cooperate and interact to serve the people, demos. Image by author.

Several initiatives have already been launched. For example, AI Commons was established in 2016 to bring together people from various fields with the common goal of “working towards promoting AI for Good and bringing the benefits of AI to everyone and using the technology towards social and economic improvement.” In addition, many governmental organizations and NGOs have established multidisciplinary expert groups, such as the EU’s High-Level Expert Group on Artificial Intelligence.

Yet, cross-sectoral discussion is still far from sufficient, which has contributed to the threats described above. Are these collectives talking to each other, too? Are we also engaging the companies that code the life-changing algorithms? Yes, I’m talking to you too, Meta and Google. No, your own dependent ethics boards alone do not check the box.

As Buhmann & Fieseler (2022) point out, the proprietary nature of AI technologies can make the participation of tech companies in democratic deliberation problematic, which requires careful conceptualization of the forms of future deliberation.

After the foundations have been laid, we are ready to proceed from principles to practice.

3) From principles to practice

After finding common values to preserve, we need truly useful ways to put them into action in the production process of AI technologies. As several studies demonstrate (see, e.g., Mittelstadt 2019), discussion of AI ethics principles is important, but it is only one step in the process.

Hence, to avoid the pitfalls of AI development discussed above, the empowerment of the demos requires involvement of a broad spectrum of competences in every step of AI development – innovation, execution and evaluation.

As Morley et al. (2021) point out, AI practitioners cannot do all of this alone. Pro-ethical AI development is considered resource-intensive and seen as slowing down innovation, while too few useful tools exist to operationalize the existing values and principles. According to the research group, such tools would require contributions from various stakeholders at all stages of development, application and auditing.

Likewise, e.g., Ibáñez & Olmeda (2021) suggest, after reviewing existing practices, that ethics should form an integral part of organizations’ practices and processes in all phases of AI development, which could be supported by multidisciplinary collaboration.

Finally, I argue that an ongoing evaluation of the impacts of new AI tools on societal structures that we should not too hastily abandon, such as democracy, needs to be an integral part of the process. The development of auditing tools and frameworks must stay open to perspectives from different fields of expertise to ensure they are eventually usable by all developers and users of AI technologies.

Only with multidisciplinary contributions can we find tools to align AI with common values & principles, creating technologies that serve the demos and not the other way around.

Conclusions

“All the world’s a platform, and all the men and women are merely users. By setting the stage and charging for tickets, tech entrepreneurs manage a show in which we are both unpaid actors and swindled audience members in our own production. Let’s take back the theatre, rewrite the script and put on the performance of our lives.” James Muldoon, Platform Socialism, p. 25.

To better understand the relationship between Artificial Intelligence and democracy, I have proposed here a perspective that puts the basic unit of democracy under the spotlight by discussing potential impacts of AI on the future of the people, demos.

Whereas AI offers tools for facilitating citizen participation in collective decision-making and strengthening political deliberation and legitimacy, recent research implies that certain aspects of the current direction of AI development could have adverse effects on the demos. First, the use of AI could increase harmful bias in collective decision-making and encourage discrimination against minorities, which can threaten both citizenship-based and identity-based conceptions of demos and undermine the very principle of equality. Second, human autonomy, which forms the foundation of the rule by the people, can be weakened by AI-enhanced nudging and artificial content creation. Lastly, the AI-fueled data economy could cause an unintended dispersion of national demoi due to the shift in power from democratic governments to non-democratic global tech giants.

These phenomena could lead to an uncontrolled erosion of democracy instead of letting us take advantage of opportunities such as strengthening democratic deliberation. They could also expose us to serious threats to fundamental human rights currently guaranteed by democratic constitutions.

Instead of creating algorithms that exacerbate the concentration of wealth and power in the hands of the elite at the expense of modern democratic values, such as an inclusive demos based on equality and human autonomy, we could use AI to embrace pluralism and strengthen multidisciplinary democratic deliberation, participation and even legitimacy, ending up with better lives for all.

In order to do so, we should start by defining the common values and principles we want to preserve in AI development. From there, the societal structures that support the preservation of these values need to be established through multidisciplinary collaboration. Finally, AI should be aligned with the common values and principles and operationalized in a way that supports these societal structures, from development to application and evaluation.

Re-empowering demos in the age of AI. Image by author.

This discussion and action require contributions from all.

All being us, humans.


Author bio

Salla Ponkala is an AI ethicist focusing on the intersections between AI and democracy, aiming to make future technologies more ethical. She is a PhD researcher in Information Systems Science at Turku School of Economics and part of Future Ethics, the leading research group in IT ethics in Finland. Ponkala also currently works as a project researcher for the project Worker Wellbeing from Digitalization at Turku School of Economics. She holds an MSSc in Political Science and an MA in French linguistics, which brings multidisciplinarity to her work in academia and consultancy.