It’s hard to broach this topic without sounding like a fatalist, but here it is:
In the second half of the 21st century, authority over personal decisions will shift from humans to algorithms, aided by biotechnology.
In truth, technological advancement has been great for society at large. In no way am I calling for advancement to stop. Rather, I feel it necessary to enumerate some of the dangers that come with the good. By shining a light on some very real possibilities that could unfold as our technology grows more sophisticated, we may be better equipped to make decisions that serve humanity at large. Before getting into the current state of affairs, I’d like to present three short thought experiments to get us all thinking in parallel.
Thought Experiment One:
Imagine a T-shirt company with the goal of making child rearing easy. Simply place a specialized band around the child's wrist, and voilà -- his or her biochemistry is now being monitored. The accompanying T-shirt shifts colors according to the child's mood. If the child's cortisol spikes, the shirt turns red. If the child is happy, it turns green. Browse the various moods within the software and decide which should be displayed. It is now possible to gauge what version of the child you are dealing with at any given moment. Harmless enough, right?
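As a toy sketch of how simple the underlying logic could be -- the sensor reading, mood labels, and threshold here are all invented for illustration:

```python
# Toy sketch: map a wristband's biometric reading to a shirt color.
# The readings, units, and threshold are hypothetical.

CORTISOL_SPIKE_THRESHOLD = 15.0  # invented units; a real product would calibrate this

def shirt_color(cortisol_level: float, baseline: float) -> str:
    """Pick a display color from a (made-up) biochemical reading."""
    if cortisol_level - baseline > CORTISOL_SPIKE_THRESHOLD:
        return "red"    # stressed
    return "green"      # content

print(shirt_color(cortisol_level=32.0, baseline=10.0))  # -> red
```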
Thought Experiment Two:
A biochemistry band is distributed to the population of a fascist regime (think North Korea). During any given speech from the leader, the data from these bands is sent to a centralized database for processing. If a citizen's cortisol levels rise during the speech, the regime takes it as a sign of rebellious intent. That person is removed from the population, further cementing lineages loyal to the current leader.
Not so harmless.
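For a sense of how little engineering separates the first experiment from the second, here is a toy sketch; the data shape, citizen IDs, and threshold are all invented:

```python
# Toy sketch: flag citizens whose cortisol rose during a speech window.
# Everything here -- IDs, readings, threshold -- is hypothetical.

speech_readings = {
    # citizen_id: (baseline_cortisol, cortisol_during_speech), invented data
    "citizen_001": (10.0, 11.0),
    "citizen_002": (10.0, 28.0),
}

def flag_dissenters(readings: dict, rise_threshold: float = 10.0) -> list:
    """Return the IDs whose cortisol rose more than the threshold."""
    return [cid for cid, (base, during) in readings.items()
            if during - base > rise_threshold]

print(flag_dissenters(speech_readings))  # -> ['citizen_002']
```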
Thought Experiment Three:
Eye-tracking software is installed in our phones or glasses (think Google Glass) to predict user preferences. It is neither an advertised feature nor a worry for the general consumer. Over the years, the software homes in on consumer preferences: at the beach, it tracks the type of people this person is attracted to; at the store, it tracks which products catch this person's eye.
As more time is spent wearing or using the software, advertisements become more specialized to the consumer, unbeknownst to them. A Coca-Cola ad, for example, may feature the man or woman of your dreams holding the can of soda. The next time you are at the store, you may unconsciously purchase Coca-Cola because of this association.
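A minimal sketch of how such a preference profile might be built, assuming an invented stream of (category, seconds-of-gaze) events:

```python
# Toy sketch: accumulate gaze dwell time per product category,
# then rank categories for ad targeting. The event format is invented.

from collections import defaultdict

def build_preference_profile(gaze_events):
    """Sum seconds of gaze per category from (category, seconds) events."""
    profile = defaultdict(float)
    for category, seconds in gaze_events:
        profile[category] += seconds
    return profile

events = [("soda", 2.5), ("chips", 0.4), ("soda", 3.1)]  # invented samples
profile = build_preference_profile(events)
print(max(profile, key=profile.get))  # -> soda: the ad server knows what to show you
```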
Ethics 101 courses just got juicy.
The Infrastructure is in Place:
This is not out of a fairy tale. Start listening for the following phrases when going about your day: “I was just thinking of [product], and it appeared on my Facebook feed!”
“Instagram somehow always knows what clothes I want to buy!”
It goes deeper than advertisements, though. We already trust and depend on algorithms in our everyday lives, even without the addition of biotechnology. All too often, people dismiss these claims on the grounds that “technology could never be programmed in a way to be omniscient” -- but here’s the kicker:
Technology doesn’t have to be omniscient. It just has to be better than humans at predicting or performing tasks.
And surprisingly, this is not difficult.
We trust that our maps app will get us to our destination. If the app predicts traffic, we trust that it is accurate and take an alternate route. If the app says go right, but you think it’s left, which way do you go? Be honest.
We trust that Google provides us with a correct answer to our query within milliseconds.
We trust that Amazon will provide the package within two days. (This is more automated than you may think.)
We trust that the weather app knows the current climate.
We trust that our heart rate has spiked when our Apple Watch declares it so.
We trust the financial software to distribute funds on payday.
We trust that the robot at the hospital correctly diagnosed a cancer patient.
Wait, did that last example just suggest that a robot can diagnose a cancer patient? Not only can a robot diagnose a cancer patient, it does so more accurately than a human. (This is not sci-fi. This is information processing in 2013.) In one study, IBM Watson’s success rate for lung cancer diagnoses was 90%, compared to 50% for human doctors. Software like this will improve diagnostics, reduce overhead costs, and save lives, while also eliminating jobs. From a humanitarian and financial perspective, it appears to be a no-brainer.
The reality is this: any job that, at its core, consists of information processing or repeated tasks will be eliminated or altered by an algorithm. A human being cannot pore over millions of records, cross-reference them, and share the findings with millions of other nodes the way a machine can. Insofar as we will never be able to process this amount of data ourselves, we are forced to trust it.
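To make that scale concrete, here is a toy cross-reference over two synthetic multi-million-record sets; on ordinary hardware it finishes in seconds:

```python
# Toy sketch: cross-reference two multi-million-row record sets.
# The data is synthetic; the point is the scale, not the schema.

patients = {i: f"patient_{i}" for i in range(2_000_000)}
diagnoses = {i: "flagged" for i in range(0, 2_000_000, 500)}  # every 500th record

# A human could not do this join in a lifetime; a laptop does it in seconds.
matches = [patients[i] for i in diagnoses if i in patients]
print(len(matches))  # -> 4000
```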
In some not-so-distant dystopia, algorithms like these will “help” us decide (based on our preferences and willingness to provide copious amounts of data) whom we should marry, what car to buy, and who our friends should be -- all in the name of data processing. And who are we to say they’re wrong? Even if, by some chance, a recommendation were wrong, the error would be taken into account and the software further calibrated. Chances are, the technology of the future will know you better than you know yourself.
There are myriad sectors in which this same architecture and thinking can be applied:
Truck driving and delivery services (Often thought of as the first industry that will be disrupted.)
The automotive industry in general
Agriculture
Education
Customer Service
Software Development
Healthcare and Diagnostics
The list is expansive and growing. The question is not whether this technology will exist; it is how we will implement and utilize it as a society.
As of this writing, we are not prepared as a society, nor are we granting this topic the respect it deserves. This post merely scratches the surface of the potential implications of AI.
I’ll leave you with one last thought experiment, pondered by philosophers for decades, to which an answer will soon be needed.
The Trolley Dilemma and AI:
A classic conflict between action, intention, and consequence. Take a moment to review the Trolley Dilemma and think about how you would react. Until recently, this thought experiment had no real-world stakes. It’s easy to debate morality when no one is in actual danger.
Now, picture the same dilemma with a different frame: an autonomous vehicle is driving down a street, its owner asleep in the back, when two children run into the road. The vehicle has two options: 1) swerve off the road, killing its occupant, or 2) stay the course, killing the two children and saving the occupant.
The difference between the trolley problem and the autonomous vehicle problem is that an answer is needed for the latter -- an engineer will have to program the car to react in a certain way. While philosophers have had decades to debate this, engineers (rather, stakeholders) are far less patient. The simple answer would be to let the free market decide, in which case both the Tesla Altruist and the Tesla Egoist would hit the market. Either way, the ethical dilemma goes deeper: what happens when the proliferation of autonomous vehicles drives a dramatic decrease in vehicular deaths? We are left to contemplate who is at fault for the deaths that remain -- and with fewer deaths overall, it becomes hard to argue against the use of such vehicles.
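To see how mundane this choice becomes once it reaches an engineer’s desk, here is a minimal sketch; the policy names echo the hypothetical Tesla Altruist and Tesla Egoist above, and the decision interface is entirely invented:

```python
# Toy sketch: a collision policy an engineer would have to choose and ship.
# Policy names and the decision interface are hypothetical.

from enum import Enum

class CollisionPolicy(Enum):
    ALTRUIST = "minimize total deaths, even at the occupant's expense"
    EGOIST = "protect the occupant above all"

def choose_action(policy: CollisionPolicy, occupants: int, pedestrians: int) -> str:
    """Return the maneuver the vehicle will take. The moral choice is a branch."""
    if policy is CollisionPolicy.ALTRUIST and pedestrians > occupants:
        return "swerve off the road"   # sacrifices the occupant
    return "stay the course"           # sacrifices the pedestrians

print(choose_action(CollisionPolicy.ALTRUIST, occupants=1, pedestrians=2))
# -> swerve off the road
```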
If we are to steer the AI train in the direction that benefits society at large, these conversations must start taking place.