Between the Hows and Whys of Artificial Intelligence [feat. The Good, The Bad, and the Ugly]

Written by eduardotio | Published 2019/10/31
Tech Story Tags: ai | npl | ml-fairness | machine-learning | neural-networks | artificialintelligence | robotics | latest-tech-stories | web-monetization

Artificial intelligence can mean a lot of things. It’s been used as a catch-all for various disciplines in computer science, including robotics, natural language processing, and artificial neural networks. That’s because, generally speaking, when we talk about artificial intelligence we’re talking about the simulation of human thought by a mechanical process.
In fact, AI research began with this idea. As early as 1936, the Church-Turing thesis proposed that a computer could simulate any process of formal reasoning. “Formal” is the key word here: the assumption is that there are systems that describe the way we think, act, and relate to each other, and that these systems can be spelled out step by step for a computer. The implications of this have puzzled us for decades.
From the 1950s onward, the field enjoyed an academic flourishing before the advent of what is now called the “AI Winter.” Most projects lost their funding because no practical applications seemed to come from them, and the philosophical questions about AI didn’t seem especially pressing back then.
This has changed dramatically with recent advances in technology. There are very practical applications for this type of research now. Companies ranging from tech giants to automated house-cleaner manufacturers are building businesses on the back of AI. Others are looking to it for everything from cutting payroll costs to correlating data on consumer behavior. To say nothing of its appeal to governments and other institutions.
AI’s pragmatic shift in focus has opened many doors for different kinds of developments and implementations. Some argue that a few of those doors should have remained closed.

The Good

One of the tasks AI excels at is pattern recognition when there is a predetermined goal in mind. In this way, it allows a machine to develop its own form of “learning.” These algorithms have the capacity to sift through inordinate amounts of data and find important correlations, links, and quirks that can then lead to previously unreachable insights.
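To make this concrete, here is a minimal sketch of goal-directed pattern recognition, assuming Python and scikit-learn; the data is synthetic and every value is illustrative rather than drawn from any system mentioned in this article:

```python
# A minimal sketch of goal-directed pattern recognition: a classifier
# is shown labeled examples and "learns" whatever correlations
# predict the label. The data is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1,000 synthetic records with 20 features, only 5 of which
# actually carry signal about the outcome.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                 # sift the data for patterns

print("held-out accuracy:", model.score(X_test, y_test))
# The learned weights point at the features the model found to
# correlate with the outcome -- the "insights" buried in the data.
print("feature weights:", model.coef_.round(2))
```

The shape of the process is what matters: a predetermined goal (the label), a pile of data, and an algorithm that keeps whatever correlations help it hit that goal.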
Tools like these come in very handy. They’ve already outperformed traditional medical risk assessments such as the pooled cohort equations, bringing some peace of mind to heart disease and cancer patients.
They’re helping farmers improve harvest quality with better weather modeling and crop sensors. They might even become live translators as speech recognition efforts continue to improve.
They’ve also beaten us at our own games. The most seemingly innocuous AI applications appear in the form of search engines, recommendation algorithms, videogames, and creative experiments.
Arguably, these give people new and better tools for intellectual discoveries, leisure, or creative work. In the case of recommendations and advertising, some even toy with the idea that machines can know us better than we know ourselves.
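How might a machine come to know us? Many recommendation engines boil down to a similarity computation over past behavior. Here is a toy item-similarity sketch in plain NumPy, with entirely hypothetical users and items (real systems are far larger and more sophisticated):

```python
import numpy as np

# Toy user-item matrix: rows are users, columns are items,
# 1 means "liked". All users and items are hypothetical.
ratings = np.array([
    [1, 1, 0, 0],   # user 0
    [1, 1, 1, 0],   # user 1
    [0, 0, 1, 1],   # user 2
])

# Cosine similarity between item columns: items liked by the
# same people end up "close" to one another.
norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)

# Score unseen items for user 0 by similarity to what they liked.
user = ratings[0]
scores = similarity @ user
scores[user > 0] = -np.inf        # skip items already liked
print("recommend item:", int(np.argmax(scores)))   # -> item 2
```

User 0’s tastes overlap with user 1’s, so the item user 1 liked but user 0 hasn’t seen comes out on top. Everything the system “knows” is an echo of recorded behavior.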
Tech giants like Google, Facebook, and Amazon have integrated AI into their digital platforms, making subtle changes that now affect entire industries, from media to e-commerce. This trend has started to raise questions about whether these quality improvements create new problems of their own.

The Bad

For many, the fact that machines are performing certain tasks better than us is a cause for concern, especially when labor that used to require human judgment is handed over to automation. While this may be seen as a boon, the tradeoffs are worth evaluating: AI implementation could have consequential effects on employment, civil representation, and privacy.
Both blue-collar and white-collar workers face growing employability challenges as whole sectors of industry turn to robotic solutions and automated service providers.
The growing threat of this shift is well documented in news coverage of the subject, with even famous historians like Yuval Noah Harari claiming that “no element of the job market will be 100 percent safe from AI and automation.”
But it’s not just cashiers and middle managers who could be blindsided. Self-teaching algorithms have become increasingly pervasive in our day-to-day lives. They sit not only behind trivial online interactions but, more importantly, are being promoted to central roles in high-stakes decisions in finance, medicine, and criminal justice.
These implementations have made Machine Learning Fairness a crucial topic in the ethics of AI. One of the most important things to understand about algorithms is that they depend on representations: the way a problem or data set is presented to a machine determines everything about the results it returns.
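One crude way to see that dependence is to hand the same learning algorithm the same people under two different representations and watch its decisions change. The sketch below is a hypothetical illustration, assuming Python with scikit-learn; the “loan” scenario, feature names, and numbers are all invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic "loan" data: repayment truly depends on income, but
# neighborhood happens to correlate with income. All values invented.
neighborhood = rng.integers(0, 2, n)               # 0 or 1: a proxy feature
income = rng.normal(50 + 15 * neighborhood, 10)    # higher on average in group 1
repaid = (income + rng.normal(0, 5, n)) > 57       # ground-truth outcome

# Representation A: the model sees income directly.
model_a = LogisticRegression().fit(income.reshape(-1, 1), repaid)
# Representation B: the model sees only the neighborhood proxy.
model_b = LogisticRegression().fit(neighborhood.reshape(-1, 1), repaid)

# Two applicants with identical incomes but different neighborhoods:
print(model_a.predict([[60.0], [60.0]]))   # same representation -> same decision
print(model_b.predict([[0], [1]]))         # proxy representation -> different decisions
```

Under representation A, equal applicants get equal answers; under representation B, the model can only echo the proxy, so where an applicant lives decides the outcome. Nothing about the algorithm changed, only how the problem was presented to it.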
Because computers are necessarily restricted to replicating formal modes of thought, the one thing they cannot do is “think outside the box,” a skill most would agree is essential to navigating situations intelligently and humanely.
This becomes eerily relevant when the subject turns from “likes” and “follows” to loan-worthiness, health policies, or criminal recidivism, all of which could be skewed by unwittingly biased or poorly designed algorithms. The struggles of self-driving cars are a recent example: a University of Michigan study found that automated vehicles are more than twice as likely to be involved in an accident as conventional ones.

The Ugly

Then there are applications where AI moves past long-term questions and becomes downright questionable as a practice. Far from being scare stories, studied phenomena like the ELIZA effect have served as fair warnings for years.
Douglas Hofstadter, author of the seminal 1979 book Gödel, Escher, Bach, has described how even an automated teller machine can cause people to confuse computer processes with human behavior, an early foreshadowing of how these technologies are now used for scams and some forms of malicious advertising.
The use of AI for harmful purposes doesn’t stop there. Activist groups are already sounding the alarm about the development of autonomous weapons and their consequences.
Moreover, facial recognition software has become a key talking point in the debate over mass surveillance. China has notoriously leveraged it and other AI innovations in the development of its social credit system, a move that has sparked protests and international controversy.
Altogether, these different advances in AI lead back to the same philosophical questions that were set aside in the field’s early days. AI is still a set of mechanized rules set up by someone, somewhere, with a specific goal in mind. In that sense, it can be cautiously helpful, somewhat limited, and inadvertently harmful. It can find better ways to sort things and increase the chances of desired outcomes, just as it can become a predatory tool or a killer robot.
Now that we have these tools, perhaps the philosophical questions have become the ones that matter most. We’ve answered many hows but we’re left to answer more whys.
