In his recent book on what artificial intelligence could mean for a culture steeped in the spirit of self-improvement (an $11 billion industry in the United States alone), Mark Coeckelbergh points to a sort of ghostly double that now accompanies each of us: the quantified self, an invisible and ever-growing digital duplicate made up of all the traces we leave behind whenever we read, write, view or buy anything online, or carry a device, such as a phone, that can be tracked.
These are “our” data. Except, again, they’re not: we don’t own or control them, and we have little say in where they go. Companies buy and sell them, using them to detect patterns in our choices and correlations between our data and other people’s. Algorithms target us with recommendations; whether or not we click on the links or watch the video clips they have predicted will catch our attention, feedback is generated that refines the cumulative quantitative profile.
The potential for marketing self-improvement products calibrated to your specific insecurities is obvious. (Just think of all the home fitness equipment now gathering dust that was once sold with the blunt instrument of the infomercial.) Coeckelbergh, professor of philosophy of media and technology at the University of Vienna, worries that the effect can only be to reinforce already vigorous tendencies toward self-centeredness. The individual personality, driven by its own cybernetically reinforced anxieties, would atrophy into “a thing, an idea, an essence isolated from others and from the rest of the world and which no longer changes”, he writes in Self-Improvement. He finds the elements of a healthier ethic in philosophical and cultural traditions emphasizing that the self “can only exist and improve in relation to others and the wider environment”. The alternative to forever digging ourselves into numerically enhanced ruts would be “a better and harmonious integration into the social whole by fulfilling social obligations and developing virtues such as compassion and reliability”.
A big challenge, that. It involves not just debate about values but public decision-making about priorities and policies – decision-making that is, ultimately, political, as Coeckelbergh takes up in his other new book, The Political Philosophy of AI (Polity). Some of the basic questions are as familiar as recent headlines. “Should social media be more heavily regulated, or should it regulate itself, in order to create better public debate and political participation” – using AI’s capabilities to detect misleading or hateful messages and remove them, or at least reduce their visibility? Any discussion of the issue is bound to revisit well-established arguments about whether freedom of expression is an absolute right or one subject to limits that then need to be clarified. (Should a death threat be protected as freedom of expression? If not, what about a call for genocide?)
In this regard, The Political Philosophy of AI doubles as an introduction to traditional debates, in a contemporary key. But Coeckelbergh also pursues what he calls “a non-instrumental understanding of technology”, on which technology is “not just a means to an end, but also shapes those ends”. Tools capable of identifying and stopping the spread of falsehoods could also be used to “nudge” attention toward accurate information – bolstered, perhaps, by artificial intelligence systems capable of assessing whether a data source uses reliable statistics and interprets them plausibly. Such a development would likely end some political careers before they begin, but more worrying is that such technology could, as the author puts it, “be used to push a rationalist or techno-solutionist understanding of politics, which ignores the inherently agonistic [that is, conflictual] dimension of politics and risks excluding other points of view”.
Whether or not lying is inherent in political life, there is something to be said for the benefits of exposing it in public debate. By steering debate, AI risks “making the ideal of democracy as deliberation more difficult to achieve… threatening public accountability and increasing the concentration of power.” Such is the dystopian potential. The absolute worst-case scenarios involve AI turning out to be a new form of life, the next stage of evolution, becoming so powerful that managing human affairs would be the least of its concerns.
Coeckelbergh occasionally nods to this kind of transhumanist extrapolation, but his real aim is to show that a few thousand years of philosophical thought will not be rendered obsolete automatically by the prowess of digital engineering.
“AI politics,” he writes, “deeply reaches into what you and I do with technology at home, at work, with friends, etc., which in turn shapes that politics.” Or it can, anyway, provided we focus a reasonable amount of our attention on questioning what we’ve done with this technology, and vice versa.