
Issue B: Truth as Data

With the advent of machinic decision-making in the world and our lives, whether through self-driving cars or artificial intelligence, it is important to investigate what such a shift connotes. The statement ‘truth as data’ suggests a machinic worldview that is free from subjectivity and human error, presented as the ultimate form of empiricism or objectivity. But how valid is this claim? Do biases exist within algorithms? Deep learning technologies built on artificial neural networks are, at this moment in time, notably opaque: even their designers often cannot explain how the most advanced models arrive at their outputs. At the core of this issue lies the question: how do machines make decisions, can we rely on those decisions, and what are their consequences?

These were the questions we set out to address at the ‘Truth as Data’ discussion held on 28th June 2017 at THECUBE. We were joined by Prof. Peter Latham (UCL), Dr. Edgar Whitley (LSE) and Dr. Sara Marino (KCL). Sara uses and interprets digital media data in her research, Peter focuses on understanding how biologically realistic networks carry out computations, and Edgar has written on digital identities and biometrics, amongst much else.

Artificial intelligence can be understood simply as machines being taught to mimic human cognitive processes, including speech recognition, facial recognition, associative “thinking”, logical deduction based on data provided, and so on.1 The discussion began by defining ‘truth’, challenging the idea that there is in fact a singular truth within reach through the study of data. It can be argued that objective truth lies only in mathematics, but how is that relevant to the world we perceive and the reality we judge, and how do we translate it into our daily lives? Subjectivity plays a large role: what is thought of as objective is often merely representative of a certain hierarchy at play within society, and what we understand as ‘objective’ can be highly hegemonic and reductive of non-conforming truths. This raises the question of who gets to decide what is beneficial for society at large, how technology is employed in the future, and how specific technologies are calibrated.

Through the discussion we encountered multiple definitions of truth. One is cumulative truth: vast quantities of data represent an abstract reality from which truth can be interpreted, in much the way evidence is gathered in a criminal investigation, with quantity being key. On its own, data can be seen as raw material that needs to be activated in order to be of any practical use to us. With the accumulation of digital data we acquire networked information through the use of datasets, which can be argued to be limiting in some respects, though its capacity for usefulness should not be underestimated either.2 Another definition we encountered is that of multiple truths arrived at through subjectivity, which overlap to form one ultimate truth, something the Frankfurt School philosopher Walter Benjamin theorised in his work.3 Rather than fully establishing a singular notion of truth, we explored the nature(s) of truth that we encounter.

Looking at technology, it is a natural, or arguably a learned, assumption that its cold, removed character is separate from human subjectivity, but it is important to challenge these notions of positivist thought. There are pressing questions to address as technology takes greater prominence in our lives than ever before and we slowly hand over a measure of control to intelligent machines, as in the case of self-driving cars and the production, dissemination and networking of information. It is through being networked that data acquires context and meaning. One of the primary concerns here is the notion of technology as novelty, and how that detracts from dealing with the larger societal problems that are replicated in technology. The idea that equality is inherent in technology, and that the internet forms a democratised space grounded in machinic objectivity, is misleading to say the least, when it is in fact human beings with unconscious biases who are training the algorithms and assembling the datasets.

Looking at statistics of the technology industry, in 2017 85.5% of engineers in the US are male, and an overwhelming majority of them are white.4 It is illogical to assume that this small subsection of society will be able to accurately represent the entire human race, especially when it enjoys a certain position of power within society. This points to graver problems when algorithms are used to determine, for example, whether someone is viable for a car loan, or in the case of predictive policing, both of which are widely authorised in the US.5 There is statistical evidence that minorities, i.e. underrepresented parts of society, are disproportionately targeted in such cases. The importance of representative sampling, and of adequate sample sizes across demographics, cannot be stressed enough; a toy simulation of this effect follows below. This is a big reason why assuming technology to be neutral is dangerous: it is then put on a pedestal and becomes difficult to question, owing to the importance we allow it to assume. It has become crucial to incorporate changes that make technology more representative of society.
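
To make the sampling point concrete, here is a minimal sketch, with entirely made-up numbers, of how a model fitted to an unrepresentative sample can serve a majority group well while failing a minority group. The groups, score distributions and threshold model are all hypothetical illustrations written for this article, not anything drawn from the sources cited here.

```python
# A toy simulation of unrepresentative sampling: a single threshold fitted
# to a 90/10 sample serves the majority group far better than the minority.
import numpy as np

rng = np.random.default_rng(1)

def make_group(n, centre):
    # Positive cases score above `centre`, negatives below, plus noise.
    y = rng.integers(0, 2, n)
    x = centre + np.where(y == 1, 1.0, -1.0) + rng.normal(0, 0.8, n)
    return x, y

# Group A dominates the training sample; group B is underrepresented,
# and its scores sit in a different range.
xa, ya = make_group(900, centre=0.0)
xb, yb = make_group(100, centre=2.0)
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Fit" the simplest possible model: pick the threshold with the lowest
# error over the whole training set.
candidates = np.linspace(-3, 5, 200)
errors = [np.mean((x_train > t).astype(int) != y_train) for t in candidates]
threshold = candidates[int(np.argmin(errors))]

def accuracy(x, y):
    return np.mean((x > threshold).astype(int) == y)

# Fresh test data for each group.
xa_t, ya_t = make_group(1000, centre=0.0)
xb_t, yb_t = make_group(1000, centre=2.0)
print(f"threshold:        {threshold:.2f}")
print(f"group A accuracy: {accuracy(xa_t, ya_t):.2%}")  # high
print(f"group B accuracy: {accuracy(xb_t, yb_t):.2%}")  # barely above chance
```

Because the threshold is chosen to minimise error over the whole training set, and group A supplies 90% of that set, the fitted model effectively optimises for group A; group B is left with accuracy barely above chance.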

We are at a point in technological progress, namely through the use of deep learning with artificial neural networks, where we cannot see the reasoning behind the specific decisions that machines make. Deep learning involves learning from data outcomes rather than from task-specific rules, allowing the machine to form its own internal logic, and such networks contain more than one hidden layer, often many. They are loosely modelled on biological neural networks, but the problem that arises is that scientists and programmers do not understand why, specifically, the machines are making the decisions they are, because of these hidden or invisible layers.6 And while deep learning has opened a new site of research that is currently thriving, it leaves us with ethical questions: do we actually want machinic decision-making to gain further prominence, and can we rely on these deep learning systems? Does this represent a further internalisation by machines of biases that already exist within society, to a point where they become inextricable?
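
As a rough illustration of what a ‘hidden layer’ is, here is a minimal sketch, written for this article rather than taken from any of the panelists, of a tiny network trained with plain NumPy. Even at this toy scale, the weights the hidden layer learns solve the task without being humanly readable, which is the opacity problem in miniature.

```python
# A tiny network with one hidden layer, trained on the XOR problem.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights: one hidden layer, one output layer.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the hidden representation h is whatever the network
    # invents for itself; nothing forces it to be interpretable.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation of the squared error, by hand.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
print(W1.round(2))   # the learned weights: they work, but explain nothing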

It can also be argued that it is important to prioritise problems: instead of trying to understand and replicate the chaotic nature of human beings, perhaps we should be working in synthesis with machines to solve the larger and more pressing issues that we face as a collective society today. It remains important to understand the “thought processes” of these deep learning systems in order to determine the fairness and accuracy of the machine’s judgments. An interesting example of using machine-specific intelligence to challenge learned human intelligence was IBM Watson, the artificial intelligence program that interprets vast amounts of data to answer questions. It was fed vast amounts of data about recipes, the human tongue, how chemicals are processed and what constitutes flavour, and it ended up constructing recipes that a person would deem strange combinations. They nevertheless turned out to be functional, well-received recipes.7

Another example is Google DeepMind’s AlphaGo program, which discovered new techniques for tackling the notoriously complicated game of Go, playing moves that people considered outright bad until it became clear that they made for a more efficient game. This has opened a new chapter for the game.8

It is particularly interesting to consider facial recognition, as it is a technology wielded in high-security situations like passport control as well as in lower-stakes forms of defining identity, for example Facebook’s tagging mechanism. Algorithms are programmed to chop up datasets in order to “recognise” facial features, and in doing so they simplify the complexity of gender. They are often programmed to recognise long hair as female and short hair as male, for instance, which reduces identity to crude heuristics; a sketch of this kind of reduction follows below. It has also been found that facial recognition technologies in various parts of the world are often unable to detect facial features properly when presented with a different skin tone.9 This reflects a problem with calibration rather than an inherent racial bias in the system; however, it does disproportionately affect minorities. The idea of capture and control through attributing certain qualities of violence to minorities has been explored in Keith Piper’s video installation Tagging the Other, and this theme runs through a good deal of his practice.
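
To see how reductive such proxy rules are, here is a deliberately crude, entirely hypothetical sketch of the kind of heuristic described above. No real system is this simple, but any classifier that leans on a stereotyped feature fails in the same structural way.

```python
# A hypothetical, deliberately crude stand-in for the reductive rule
# described above: inferring gender from a single proxy feature.
def guess_gender(hair_length_cm: float) -> str:
    # The stereotype baked into the rule: long hair => "female".
    return "female" if hair_length_cm > 15 else "male"

# Anyone who doesn't fit the stereotype is misclassified by construction.
people = [
    {"name": "A", "hair_length_cm": 40, "gender": "male"},
    {"name": "B", "hair_length_cm": 5,  "gender": "female"},
    {"name": "C", "hair_length_cm": 25, "gender": "female"},
]

for p in people:
    guess = guess_gender(p["hair_length_cm"])
    verdict = "correct" if guess == p["gender"] else "WRONG"
    print(f"{p['name']}: guessed {guess}, actually {p['gender']} ({verdict})")
```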

Machines now have a much higher rate of accuracy than human beings in recognising faces; however, the technology is unable to cope with discrepancies that a human being would naturally disregard or overlook. These interferences or obstructions, which are often weaponised, are termed adversarial images. It has been found, for example, that machines often cluster objects within images together, since they view everything along the same two-dimensional plane. Machine learning of image and facial recognition takes place through the annotation of images and the finding of commonalities between images tagged as the same, across repositories of information. Through this aggregation a computer “image” is formed out of approximations, and because of this it becomes easy to fool a machine.10
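
The standard explanation of why fooling is so easy, sketched below in a form assumed for this article (the classic linear-model argument, rather than anything from the sources above), is that a tiny, imperceptible nudge to every pixel, each in the direction the model is most sensitive to, adds up across thousands of pixels into a decisive change.

```python
# A minimal sketch of the adversarial-image idea using a (hypothetical)
# linear classifier: score = w . x, with the sign of the score as the decision.
import numpy as np

rng = np.random.default_rng(2)

d = 10_000                  # think of d as the number of pixels
w = rng.normal(size=d)      # weights of an already-"trained" linear model
x = rng.normal(size=d)      # an input the model scores confidently

score = w @ x

# Nudge every pixel by the same tiny epsilon, each in the direction that
# pushes the score the other way. The per-pixel change is invisible, but
# summed over 10,000 pixels it flips the decision.
epsilon = (abs(score) + 10) / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w) * np.sign(score)

print(f"per-pixel change:  {epsilon:.4f}")     # tiny
print(f"original score:    {score:+.1f}")
print(f"adversarial score: {w @ x_adv:+.1f}")  # opposite sign
```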

An important point discussed was the dangerous idea of inevitability in progress, especially where technology is concerned. The problems that currently exist within technology must be worked on in the present, rather than deferred to an abstract Future. To take an example, George W. Bush restricted federal funding for stem cell research, Obama restored it, and it is now likely that President Trump will reduce it or scrap it altogether. Policymaking plays an extremely important role in the future of research. It can hardly be said that data is the end of the story; it is merely information that we gather, and it is up to us to examine it and put it to use.

Keith Piper, Tagging the Other, mixed media installation with four video monitors and slide projection. First exhibited January 1992, Impressions Gallery, York. (Image above) Installation at Nederlands Foto Instituut, Rotterdam, Holland, March-April 1994.


While there has been a change in the quantities of data we now collect compared to the past, we still display the same attitude towards data, often using it to fit our worldviews instead of studying it to seek what is really true. This is frequently the case in the criminal justice system, where data is used to further legitimise standpoints regarding minorities and underrepresented groups. We still seek predictability, but we look for it on our own terms. It is often a point of concern that money rules the research, serving the desires of individuals or corporations rather than addressing gaps in societal progress that could be closed through technological advances or symbiotic problem-solving. Instead of seeing technology as a solution in itself, we should use it as a tool to develop a democratisation of information and representation.


References

1 p. 2, Russell & Norvig (2009). Artificial Intelligence: A Modern Approach, Pearson: Harlow.

2 p. 4, Munster, A. (2016). An Aesthesia of Networks: Conjunctive Experience in Art and Technology, Routledge: Abingdon.

3 p. 29, Benjamin, W. (1998). The Origin of German Tragic Drama, Verso: New York.

4 Collins, K. (2017). ‘Tech is overwhelmingly white and male and white men are fine with that’ in Quartz. Accessed on: 12-07-2017. Available at: https://qz.com/940660/tech-is-overwhelmingly-male-and-men-are-just-fine-with-that/

5 Hvistendahl, M. (2016). ‘Can “predictive policing” prevent crime before it happens?’ in Science. Accessed on: 12-07-2017. Available at: https://www.sciencemag.org/news/2016/09/can-predictive-policing-prevent-crime-it-happens

6 Knight, W. (2017). ‘The Dark Secret at the Heart of AI’ in MIT Technology Review. Accessed on: 12-07-2017. Available at: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

7 Kleeman, A. (2016). ‘Cooking with Chef Watson, I.B.M.’s Artificial Intelligence App’ in The New Yorker. Accessed on: 12-07-2017. Available at: http://www.newyorker.com/magazine/2016/11/28/cooking-with-chef-watson-ibms-artificial-intelligence-app

8 Cadell, C. (2017). ‘Google AI beats Chinese master in ancient game of Go’ in Reuters. Accessed on: 12-07-2017. Available at: https://in.reuters.com/article/us-science-intelligence-go-idINKBN18J0PE

9 p. 14, Magnet, S.A. (2015). Feminist Surveillance Studies, Duke University Press.

10 Emerging Technology from the arXiv, ‘Machine Vision’s Achilles’ Heel Revealed by Google Brain Researchers’ in MIT Technology Review. Accessed on: 12-07-2017. Available at: https://www.technologyreview.com/s/601955/machine-visions-achilles-heel-revealed-by-google-brain-researchers/



We are excited to have recently published the second issue of THECUBE‘s magazine, with contributions from both members and friends. For this issue we have been looking at the theme of ‘TRUTH’. This article, by Sukanya Deb, is just one of the many that can be found in the magazine. If you would like to read the entire magazine, please click here. We hope you will enjoy it, and feel free to get in touch for more details if you are interested in contributing to our next issue.