
3 Things You MUST Know About AI Technology

by Kadan Stadelmann, December 8th, 2023



"I don't see that human intelligence is something that humans can never understand."


~ John McCarthy, March 1989


The fact that early AI programs could play checkers, prove theorems, and so on led to a lot of optimism about the future of AI. But AI also faced challenges early on. Computer researchers tell a story about an early computerized language translation system that was tested first from English into Russian and then back into English.


The system ultimately translated “The spirit is willing, but the flesh is weak” into “The vodka is good, but the meat is rotten.” Hurdles like this hampered AI’s early growth. Moreover, computing power back then was simply not what it is today. Below are a few things you ought to know about AI.

1. A Brief History Of AI

In 1943, the neuroscientist Warren McCulloch and the logician Walter Pitts proposed the first computational model of a neuron. The pair formulated a theory of artificial neural networks: they looked at neurons and logic and demonstrated a mathematical connection between the two.


Much of the early work in this area approached artificial neural networks from a mathematical perspective. At the time, computers didn’t exist, so researchers could not train models.


McCulloch and Pitts were the first to use Alan Turing’s notion of computation to understand neural, and thus cognitive, activity.


John McCarthy is commonly regarded as a co-founder of the field of AI. He coined the term “AI” in 1955 at Dartmouth College, where he worked alongside early big names in AI, including Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell, and Herbert A. Simon.


In 1959, the perceptron was introduced, demonstrating that a machine could be taught to perform certain tasks using examples. In 1969, Marvin Minsky and Seymour Papert published Perceptrons, an analysis of the computational capabilities of perceptrons for specific tasks.


Perceptrons marked a historic turn in artificial intelligence, revisiting the idea that intelligence might emerge from networks of neuron-like entities. The book featured extensive mathematical analysis to support its findings.
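
To make the idea of learning from examples concrete, here is a minimal sketch of the classic perceptron update rule in Python. The tiny dataset (the logical AND function) and the learning rate are illustrative assumptions, not details from the history above.

```python
# Minimal perceptron sketch: learn a linear decision rule from labeled examples.
# The tiny AND dataset and the learning rate are illustrative choices.

def train_perceptron(examples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input feature
    b = 0.0         # bias term

    for _ in range(epochs):
        for (x1, x2), label in examples:
            # Predict: 1 if the weighted sum crosses the threshold, else 0.
            prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = label - prediction
            # Perceptron update: nudge the weights toward correcting the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Examples of the logical AND function: output is 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
print(weights, bias)  # a separating rule learned purely from examples
```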


In the 1980s, researchers rediscovered the backpropagation algorithm, which remains an essential step in the standard method used to train neural network models today.
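
As a rough, assumed illustration of what backpropagation does (not a description of any specific 1980s system), the sketch below trains a tiny one-hidden-layer network with NumPy; the XOR dataset, layer sizes, and learning rate are arbitrary choices.

```python
# Toy backpropagation sketch: a one-hidden-layer network trained by gradient descent.
# The XOR dataset, layer sizes, and learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error gradient from the output back toward the input.
    d_out = (out - y) * out * (1 - out)   # gradient of squared error at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient pushed back to the hidden layer

    # Gradient-descent updates on every weight and bias.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should move toward [0, 1, 1, 0] as training proceeds
```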

2. What Is Machine Learning?

If you want to build AI for self-driving cars or for diagnosing diseases, starting at the terminal might not be the best approach. AI tries to solve complicated problems by creating software or hardware that acts in the world, and AI researchers often build a model before they begin to build the system itself.


A model takes the real world and reduces it to a mathematically precise simplification so you can experiment with it on a computer.


The basics of AI begin with machine learning, an important building block for building AI models. The central tenet of machine learning is that you feed data into a model and the model learns from that data. It has driven many of AI’s successes. Machine learning does, however, require some trust.


One can build out the mechanics and train a model, but at its core, the model must generalize beyond the data it was trained on. The field has formalized this requirement with probability theory and statistics.
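
One common way that trust is earned in practice, sketched below with scikit-learn on synthetic data (the dataset and the logistic-regression model are illustrative assumptions), is to measure the trained model’s accuracy on held-out examples it never saw during training.

```python
# Sketch: estimating generalization by holding out data the model never trains on.
# The synthetic dataset and the logistic-regression model are illustrative choices.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))
print("held-out accuracy:", model.score(X_test, y_test))  # the number we actually trust
```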


A basic class of machine learning models is the reflex model, which performs a fixed set of computations on its input. Reflex models include linear classifiers, deep neural networks, and most models used in machine learning.
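
As an assumed, minimal illustration, a linear classifier is a reflex model: whatever the input, it runs the same fixed computation, a weighted sum followed by a threshold.

```python
# A linear classifier as a reflex model: one fixed computation per input,
# regardless of what the input is. The weights here are illustrative placeholders.

def linear_classifier(features, weights, bias):
    # Fixed set of computations: weighted sum, then a threshold.
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Hypothetical weights, e.g. as produced by some training procedure.
weights, bias = [0.8, -0.3], -0.1
print(linear_classifier([1.0, 0.5], weights, bias))  # -> 1
print(linear_classifier([0.0, 1.0], weights, bias))  # -> 0
```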

3. What Is the Goal Of AI?

Generally speaking, the goal of AI is to create software that can reason over inputs and explain itself through its outputs. AI could make human-like software interactions possible. Diving a little deeper, one might view AI as agents or as tools.


One view asks how we can recreate intelligence. The other asks how we can use the technology to benefit society. Although the two overlap, the latter ought to be the focus of modern-day AI researchers, and it is.


That’s why the scope of AI is much narrower today: researchers focus on solving specific problems, such as diagnosing diseases.