The ignorance of algorithms and new synthetic realities

Artificial intelligence technologies are an expression of who we are. When everything is possible, what do we decide to do? This is a technology perfectly suited to both creation and control: we now possess tools that magnify and perpetuate social inequalities, but which, at the same time, allow us to tackle the contemporary challenge of rethinking how we look at the world.

The opportunity for a new monster

In 1818, Mary Shelley’s novel Frankenstein; or, The Modern Prometheus was published, describing an extraordinary creation: a promise with infinite potential that was experienced first with hope, and then with horror. The monster makes its presence felt the moment its creator sees it awaken and realises he is responsible for what he has just brought to life.

This story is very similar to what we are currently experiencing given the more widespread and popular use of generative models of artificial intelligence (AI). Faced with the integration of these tools in both the workplace and the domestic sphere, many questions are arising: What have we created? What potential and what impact does it have? Is it really intelligent? And, if so, is it really human intelligence?

These questions fuel debate and media coverage, and spark research at several universities, creating deeply polarised camps of opinion. In March 2023, Massachusetts Institute of Technology (MIT) researcher Max Tegmark and the Future of Life Institute published an open letter signed by key figures in the technology sector (including Turing Award winner Yoshua Bengio, Tesla CEO Elon Musk and Apple co-founder Steve Wozniak), calling on major AI developers to pause the training of AI systems more powerful than those already released. The demand primarily sought to give legislators, policymakers and society at large enough room to integrate these technologies at a manageable pace. To date, the letter, which has gathered more than 27,000 signatures, has had little effect beyond media coverage, and these companies have not slowed down any of their development projects.

Moreover, creatives’ concerns – for instance, that the emergence of generative AI will undermine the value of their work – have been clearly voiced by authors and journalists such as Yuval Noah Harari and Dani Di Placido. This new wave of generative algorithmic models has called into question arenas and techniques that, until recently, we believed were safe and very hard to automate, such as writing and visual languages. Now, more than ever, we are asking ourselves what the value of what we do as humans is, what impact it has, and how our intelligence differs from that of a machine.

The US Copyright Office recently ruled that AI-generated images are “not the product of human authorship” and therefore cannot be copyrighted. Authors are developing tools to protect their work, but this will be a constant new battle in the creative arena from now on. Recently, the science fiction and fantasy magazine Clarkesworld had to temporarily close to submissions after being inundated with short stories written with ChatGPT. How will we know whether what we read and see on social media has been written by a person? For the time being, it is impossible, because we do not possess reliable tools to find out, and we are thus up against both a huge danger and a huge opportunity.

Today, the political and legal tools at our disposal to defend certain collectives and social interests move along far slower paths than the companies behind these technologies, whose capacities have yet to be properly understood and legislated for. This imbalance between the pace at which tools are privately developed and the pace at which we come to understand their effects on us, on society and on the planet will be decisive in determining whether what we are creating is Mary Shelley’s monster or a tool brimming with potential.

Under the synthetic carpet

One aspect of this technology worth bearing in mind is that, as with most such developments, there is a certain amount of alienation. AI may appear to be an ethereal, cloud-dwelling, clean and somewhat supernatural force, but it is made up of enormous amounts of natural resources, fossil fuels, human labour and massive infrastructure. Tools like ChatGPT may seem lightweight and disconnected from any material reality, but in fact they require vast amounts of computing power and extractive resources to operate[1].

Some public figures, such as Dr. Sean Holden, Associate Professor at the University of Cambridge, or the renowned linguist Noam Chomsky,[2] have clearly explained the limits of artificial intelligence and the reasons why this technology is far from perfect. We use the concept of “artificial ignorance” here to refer to all those phenomena, generally understood as errors or realms of stupidity, that are produced by algorithmic technology. Two examples illustrate this. In October 2021, Imane, a 44-year-old divorced Moroccan migrant mother, was interrogated in Rotterdam while recovering from abdominal surgery. The welfare payments that allowed her to pay her rent and buy food for her three children were in jeopardy and, worse still, she could be charged with fraud. She had to prove her innocence through a difficult and costly bureaucratic process because an algorithm had ranked her as being at “high risk” of committing welfare fraud.

This is not an isolated case, but part of a global pattern in which governments and companies use algorithms to reduce costs and increase efficiency. Like Imane in Rotterdam, more than 20,000 families have been falsely accused by an algorithm[3] of fraudulently receiving welfare payments. Often these systems do not work as intended and can reproduce worrying biases, sometimes irreparably affecting the communities that need the most help.

Another better-known example is the algorithm that Amazon used in 2015 to screen staff and recommend wage increases. For every thousand résumés entered, the system was meant to determine which five people should be hired or given a raise. The problem was that the data used to train the algorithm had been collected over the previous ten years in a company with an underlying bias that showed a much greater preference for men than for women. As a result, the system began to rule out and discriminate against any curriculum vitae from which female gender could be inferred, thus replicating an unfair pattern that, had the gender imbalance not been so obvious, would have gone wholly undetected.[4]

Again, the root of the problem with this algorithm is that it bases all its answers on statistical models. These technologies do not make a distinction between truth and falsehood; they only look for patterns. Today’s predictive algorithms are incredibly useful machines, but they can generate erroneous results that are largely detached from reality. An AI will have a much smaller margin of error than a meteorologist in predicting whether it will rain tomorrow, but it will never understand what it means when it rains, or why it rains, or what that might mean for a culture or a people. There is no knowledge as to why, only what, and this detached and decontextualised nature is especially dangerous when acting in social arenas where prejudices are often perpetuated, and individuals and institutions can justify certain behaviours with the simple response of “it is recommended by the algorithm”.
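To make this mechanism concrete, below is a minimal sketch in Python, using entirely synthetic data and hypothetical feature names, of how an Amazon-style screening model absorbs historical bias: gender is never given to the model, yet a correlated proxy feature ends up carrying it.

```python
# A minimal sketch with synthetic data and hypothetical feature names:
# historical hiring decisions were biased against women, and although the
# model never sees gender, it learns a correlated proxy feature instead.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

skill = rng.normal(size=n)                      # what *should* matter
gender = rng.integers(0, 2, size=n)             # 0 = man, 1 = woman (withheld)
proxy = (gender == 1) & (rng.random(n) < 0.7)   # e.g. "women's chess club" on a CV

# Biased historical decisions: at equal skill, women were hired less often.
hired = (skill + rng.normal(scale=0.5, size=n) - gender) > 0

# Train only on apparently neutral features; gender itself is never shown.
X = np.column_stack([skill, proxy.astype(float)])
model = LogisticRegression().fit(X, hired)

print(f"weight on skill: {model.coef_[0][0]:+.2f}")
print(f"weight on proxy: {model.coef_[0][1]:+.2f}")  # negative: bias reproduced
```

The model has found a pattern, not a truth: nothing in it knows why the proxy predicts rejection, which is precisely the decontextualised “what without why” described above.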

Creating fertile ground for pluralism

When we realise that most current uses of these technologies in the public realm are linked to the control, optimisation and monitoring of citizens and resources, it becomes clear that there is a profound lack of imagination about the areas in which generative artificial intelligence could be used. If we want to reverse this, we need to think about how to use it to address different questions.

One of the exercises I often used with students in the classroom was to buy four newspapers with very different political views. The students and I would find the same news story and analyse how each media outlet reported it. In recognising this pluralism of versions, much more informed critical debates always emerged, and single perspectives and entrenched positions would dissolve. One of the prospects we see for the use of AI is precisely its capacity to be pluralistic. One of the projects we are currently working on is a browser integration that offers an automated version of any news item, in which several alternative critical interpretations of the same facts are given: a new way of understanding current affairs through many other perspectives, each bearing its own bias.
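As an illustration of the concept rather than the project’s actual code, such a browser integration could be sketched as a thin wrapper around a language-model API; the model name, prompt and list of perspectives below are assumptions made purely for illustration.

```python
# A hedged sketch, not the real integration: ask a language model for several
# explicitly situated readings of one news item. Model and lenses are assumed.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

LENSES = ["labour economics", "environmentalism", "civil liberties"]

def plural_readings(article_text: str) -> dict[str, str]:
    """Return one alternative reading of the article per critical lens."""
    readings = {}
    for lens in LENSES:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any capable chat model works
            messages=[
                {"role": "system",
                 "content": f"Rewrite the news item below from a {lens} "
                            "perspective, stating the bias of your reading "
                            "explicitly in the first sentence."},
                {"role": "user", "content": article_text},
            ],
        )
        readings[lens] = response.choices[0].message.content
    return readings
```

The key design choice is that every generated version declares its own bias up front, so the plurality is transparent rather than hidden.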

The opposite of a bias is not its elimination, nor absolute neutrality; it is transparency: rolling back the carpet and understanding what the bias looks like and what it affects. A bias can even be positive, giving a specific perspective on an unrepresented community, because total neutrality never exists; there is always a view, a situated knowledge.[5] The conflict arises when we use a tool that is not aware of its predefined viewpoint, one that represents the interests of a few, usually powerful, people. If we are able to expose the bias and use the tool in a transparent manner, this technology can provide a new space in which to visualise the tension between different ideas and be more exposed to plural perspectives.

We need machines that give us room to rethink. But it is tricky: it takes courage. A machine that helps us to deny ourselves the comfort of always being the same person, the one that arrived at a set answer long ago and has never had any reason to doubt it. A machine that, in other words, helps us to keep ourselves open.

As F. Scott Fitzgerald said, “the test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function”[6]. The willingness to change one’s mind is a superpower in the modern age, and these technologies can help us, at a time of extreme polarisation, to change course and be more open to ideas that are different from those we define as our own.

Synthetic realities

Since February 2022, Domestic Data Streamers has been developing a programme for experimentation, training and debate on the possible applications of generative artificial intelligence technologies. The key project we have been working on is the development of tools for recreating synthetic memories through image generation. The main difference from other image-editing software is that AI can create images directly from the description of a scene; with some practice, very high-quality results can be generated in five to ten seconds. This makes the process accessible and enables fast iteration, which is essential for working with a large number of collectives and individuals.
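For readers curious about the mechanics, this is roughly what text-to-image generation looks like with the open-source diffusers library; the checkpoint, prompt and parameters below are illustrative assumptions, not the pipeline Domestic Data Streamers actually uses.

```python
# A minimal sketch of text-to-image generation for a described memory.
# Checkpoint ID and parameters are assumptions; availability may vary.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A memory described in conversation, condensed into a prompt.
prompt = ("1960s black-and-white photograph, a mother and three children "
          "waiting on the platform of a small rural train station, "
          "soft afternoon light")

image = pipe(prompt, num_inference_steps=25).images[0]  # seconds on a GPU
image.save("synthetic_memory.png")
```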

The applications are wide-ranging. We are already working with social workers, psychologists and medical experts in dementia and psychoneurology to understand the positive impact that synthetic memories, used in companionship and reminiscence work, can have on the progression of degenerative diseases such as Alzheimer’s or senile dementia. The first findings are proving particularly exciting and will soon be published.

We are also taking this technology and these processes to other spaces, such as the reconstruction of the historical memory of refugee communities in Athens or of the survivors of Hiroshima and Nagasaki in Japan. These are communities that, for various reasons, have lost the visual documentation of personal moments, often of significant historical interest: memories erased from the realm of visual culture that we can now evoke with a more accessible technology, working from within the communities involved (the elderly, migrants or people with mental health conditions), who are generally excluded from access to the latest technologies. These tools can enable us to understand ourselves as well as communities belonging to other times and social realities. We must be able to approach everyone in a transparent, responsible and collaborative manner; synthetic reality can be a space between fiction and reality, an intermediate space where we can meet.

Re-imagining the city

This same technology opens up new possibilities in the process of co-designing public space. In recent months, we have worked in several sessions with international architectural firms to understand how these tools can be incorporated into design processes and what impact they can have in terms of integrating neighbours, businesses and other agents in the development of urban planning and architectural proposals. These technologies facilitate access to the visualisation of urban alternatives in a very affordable manner: streets, buildings and uses of public space can be redesigned in a matter of seconds. The greatest limit, for the most part, is one’s own imagination.
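A hedged sketch of the technical side of such a session: an image-to-image model takes a photograph of the existing street together with a textual proposal and returns an alternative visualisation in seconds. The model, file names, prompt and strength value are, again, assumptions for illustration only.

```python
# A minimal sketch of photo-based urban re-imagining with img2img diffusion.
# Model ID, file names, prompt and strength are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

street = Image.open("current_street.jpg").convert("RGB").resize((768, 512))

prompt = ("the same street pedestrianised, with trees, benches and a "
          "children's play area replacing the parking lane")

# strength controls how far the result may drift from the original photo
redesign = pipe(prompt=prompt, image=street, strength=0.6).images[0]
redesign.save("street_alternative.png")
```

Lower strength values keep the proposal anchored to the real street, which matters when residents need to recognise their own neighbourhood in the result.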

Admittedly, the images are strange-looking and lacking in detail, and they are no match for the work of an experienced architect, but they allow results to be visualised in sufficient detail to give an idea of the possibilities of alternative architecture. It is another tool for moving forward very quickly in visualising change, and it thus enables local residents and city dwellers to better understand and support the urban alternatives that most interest them. The danger lies in the fact that the results of these tools, being so visual, attractive and affordable, can end up overriding other relevant factors, such as observations of use, demographics or the diverse social needs of each space. Visualisation of change is a great tool for communities, but it must be used in a responsible and informed fashion.

The issues at stake

When I see how these technologies are being rolled out across society, it is evident that we are still a long way from having a pluralistic and humanistic perspective. Conversations throughout the media seem focused on the technology itself, on the almost magical phenomenon it appears to contain. Few people today would describe a car, an email or a WhatsApp voice message as magical, but that is how those technologies seemed when we were first exposed to them. A similar phenomenon holds true for generative artificial intelligence. And this distracts us from what is really important: What impact do we want these tools to have on our society? Why are we developing them? Who has access to them? Who benefits from them? Whose interests do they serve, and how do we make them work within a set of values?

If artificial intelligence is the answer, what is the question?

 


[1] Crawford, K. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021.

[2] Katz, Y. “Noam Chomsky on Where Artificial Intelligence Went Wrong”. The Atlantic. 2012. http://ow.ly/58Qv50NW6vi

[3] Burgess, M., Schot, E. and Geiger, G. “This Algorithm Could Ruin Your Life”. Wired. 2023. http://ow.ly/zrkF50NW6Gu

[4] Larson, E. “Amazon Sued for Alleged Race, Gender Bias in Corporate Hires”. Bloomberg. 2021. http://ow.ly/mG3950NW6rT

[5] Haraway, D. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective”. Feminist Studies, 14(3), 575-599. 1988. http://ow.ly/Wyyc50OoYwV 

[6] Original quote: “The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function”.

