
Technology and Democracy: A Double-Edged Sword

Updated: Feb 20, 2021

I’m not an alarmist, I promise!

The 21st century has brought technological advances unforeseen by even the deepest thinkers of the past. Alongside these advances, we have seen fascist regimes and democracies alike rise and fall. Unbeknownst to many, political leaders around the world are currently looking for ways to harness the power of technology; some are more open than others about their intent.


The purpose of this article is not to instill fear; rather, it is to reframe the way in which technology is viewed. Like any tool, it can be used in many ways. A knife, for example, can be used to cut food or to harm; either way, the knife remains indifferent. In a sense, we can view technology the same way -- it too is indifferent to its application. Whether it is used to bolster democracy or to monitor the masses depends on the decisions we make today. Of course, using the term technology this broadly is akin to using the word sport to describe a tennis match. For the purpose of this article, we will focus on the following:

  • Mass surveillance made possible through artificial intelligence

  • Biotechnology and its relation to democracy

Fascist Regimes of the Past: Feigning Surveillance


George Orwell, in his speculative novel 1984, hit the nail on the head with almost all aspects of his technophobic hellscape. The only change I would offer Orwell is a new title; perhaps 2034 is more appropriate. Around 1930, Joseph Stalin established a complete dictatorship over the USSR. In doing so, the Soviet Communist Party sought to inject fear into the hearts of its citizens. Throughout his tenure, cruel tactics were used to convince the general population that they were being watched. This, of course, was not actually possible.

Let us pause for a moment and reflect on this:


At the time, it was not possible for Stalin’s regime to actually monitor its citizens.

Perhaps papers upon papers could have been collected on the behavior of various individuals -- but then, who would have looked through this collection?


We can only be left to ponder what Stalin would have done with access to present-day technology. Not only do we now possess the ability to collect vast amounts of predictive data, we also have the computing power to pore through it all. Sniffing out those who are anti-authority, brainwashing the masses, and shifting public opinion on any given topic has never been easier.


Okay, so what? Technology is dangerous... Are you suggesting we stop innovating?


For a myriad of reasons, this is neither possible nor my suggestion. For starters, to stop participating in, say, the AI race because of its implications is to assume that other nations will do the same. I'll believe it when I see it; beyond that, it would be impossible to know whether another country had stopped researching even if it publicly stated its intent to halt. For the sole reason of avoiding lopsided power, it cannot and will not happen.


Part of the problem that is often neglected is the compounding effect of data collection and its relation to government policy. Briefly imagine a world in which a country orders all of its citizens to take a DNA test. (In this instance, data privacy laws do not exist. This is not as far-fetched as one might hope.) By using AI to pore through the collected data, patterns are discovered that predict predisposition to genetic diseases.


Fantastic! Sounds like a net gain for humanity, no?


Perhaps not. Now the world knows that this database is the most accurate and comprehensive when it comes to predicting genetic disease. Why would anyone go elsewhere to learn about their ancestry? Alas, the feedback loop begins:

  • Strongest dataset leads to more accurate predictions

  • More accurate predictions lead to more people wanting to use that service

  • More people using that service leads to a stronger dataset

  • And the pattern continues, leaving no room for competition
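As a rough illustration (not taken from any real dataset), the loop above can be sketched as a toy simulation: prediction accuracy grows with dataset size, accuracy attracts new users, and new users grow the dataset. The accuracy curve and every number here are invented purely to show the compounding dynamic.

```python
def accuracy(dataset_size):
    # Invented diminishing-returns curve: more data -> better predictions.
    return dataset_size / (dataset_size + 10_000)

def simulate(years=5, dataset=50_000, population=1_000_000):
    """Toy model of the data feedback loop; all parameters are hypothetical."""
    for year in range(1, years + 1):
        acc = accuracy(dataset)
        # More accurate predictions attract a fraction of the population...
        new_users = int(population * acc * 0.10)
        # ...and every new user strengthens the dataset further.
        dataset += new_users
        print(f"Year {year}: accuracy={acc:.2%}, dataset={dataset:,}")
    return dataset

simulate()
```

Even in this crude sketch, the dataset and accuracy only ever grow, which is the point: the incumbent's lead compounds, leaving little room for competition.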

So what?


Data is power, and the race to understand the best mechanisms by which to use it is ongoing. Consolidating this power in one place is a dangerous game given that we have yet to establish global protocols protecting citizens from its misuse. If this seems far-fetched, look no further than present-day China to see data collection used as a method to control citizens:

  • A personal social credit system has been put in place with inputs affecting the algorithm ranging from internet usage patterns to societal participation

  • AI facial recognition has been put into place; fines for jaywalking or speeding are automatically charged to bank accounts

    • This has been paired with social humiliation; faces are then plastered onto billboards to further discourage such activity

  • Riding trains or booking flights is restricted to those that meet the social credit system’s threshold

  • Corporations are given a social credit score with the possibility of being blacklisted should their score not meet a threshold

The above is just a glimpse into the dangers of AI being left unchecked.


Great. What do you propose we do then?


Technology is not deterministic, meaning the decisions that we make today will influence our societal outcome, good or bad. Let's revisit the idea of a tool being agnostic to its application. With this in mind, there is an option sitting right in front of us that is often overlooked: a twofold monitoring system. There is no principled reason that any data collected on citizens cannot also be gathered on those in office. And if that were the case, I would be willing to bet that favorable laws ensuring proper data privacy would quickly be put in place.

All of this is to say that monitoring is not inherently bad. Healthcare, for one, would improve immensely. However, recognizing that it cannot be a one-way street is imperative and a good starting point. In the future, I foresee AI "companions" serving as a kind of virus protection against algorithms looking to intrude on our behavior, decisions, and data.


Covid-19: A Turning Point for Privacy


Historians have a peculiar job in that they must detach themselves from the emotion of historical events and decipher their impact based on the known facts. In doing so, the ripple effect can be better understood. For instance, imagine the disparity in viewpoint between a 19th-century industrialist and a 22nd-century historian studying the fruits of a fossil-fuel-based economy. It is difficult to see the full picture of the consequences of any given event while in the present moment.

On the topic of COVID-19, we are treading in dangerous waters. As the pandemic continues to worsen, more ideas will proliferate around using technology for contact tracing. This will, of course, be done under the guise of strict data privacy (at least in the United States). Think for a moment: eventually, biometric cues will be understood well enough that personal devices will be able to determine whether a person has contracted a disease.


Okay… Wouldn’t you want to know if you had a disease?


Of course I would. But who owns that data?

In this case, let’s stray from data privacy for a moment and apply this same logic to other biometric cues. Think of deciphering biometrics like a complex algorithm: inputs from the body are fed to a technology, and the output is the biological status of a human. Any technology that can do this will have the ability to decode other biological phenomena beyond disease: anger, sadness, sleepiness, etc.
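To make the analogy concrete, here is a purely hypothetical sketch of such an input-to-output mapping. The signal names and thresholds are invented for illustration; a real system would use learned models over far richer data streams, not hand-written rules.

```python
# Hypothetical illustration of the "biometrics as an algorithm" analogy.
# Signals and thresholds are invented; this is not a medical or real model.

def classify_state(heart_rate, skin_temp, movement):
    """Map raw body signals to a coarse biological/emotional label."""
    if skin_temp > 38.0:
        return "possible fever"
    if heart_rate > 100 and movement < 0.2:
        return "possible stress or anger"
    if heart_rate < 55 and movement < 0.1:
        return "possible drowsiness"
    return "baseline"

# Elevated heart rate while sitting still reads as stress or anger.
print(classify_state(heart_rate=110, skin_temp=36.6, movement=0.05))
```

The unsettling part is that the same pipeline that flags "possible fever" can just as easily be pointed at "possible anger" -- only the labels change.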


This is astonishing, both because these technologies are being actively researched and because people are already providing massive amounts of data to the companies doing the research (the Apple Watch, for example).


We have yet to scratch the surface of the ethical, political, and personal issues that could arise as this technology improves. For the sake of this article, picture a world in which anger was being monitored en masse during a presidential speech. It would be possible for centralized computing to process the data and produce a list of people who had a negative reaction to the speech -- all made possible by the same technology that aims to diagnose COVID-19. Disease detection is the perfect justification, easy enough for the general population to rationalize, for requiring biometric tracing. In the wrong political environment, this could be devastating.


It is my hope that history does not write a sequel to Brave New World.


 

If you enjoyed this article and want to explore various thought experiments on the topic, check out another one of my blog posts: https://www.davidaugustiniak.com/post/ethical-dilemma


