The Problem With The Human Element and AI — TechVirtuosity

Brandon Santangelo
5 min read · Jul 6, 2020

People are Indecisive

The human element and AI don’t always mix well. It’s not necessarily a bad thing that we sometimes don’t know what we want, but AI can’t work that way. We are continually striving to create something that thinks like us, only better. There’s a problem with that approach, though!

A mere moment of indecisiveness can create a gap in the way we want an AI to think. As scientists and companies strive to create realistic artificial intelligence, they are also incorporating some of our own flaws.

Here’s my opinion on why it’s a bad idea to combine the human element and AI!

We Might Need an AI Smart Enough to Create Another

People are inherently imperfect. We don’t always know what’s best for us, and even when we do, we don’t always listen. The same humanity that makes us special becomes a major fault when we build it into an AI.

We often contemplate what it means to think, but AI doesn’t think the way that we do. We create AI to simplify our lives or to solve complex problems. That has produced real progress and innovation, but it still leaves us needing something more, something new.

For this reason, we may need an AI that is capable of creating another intelligence. The way we think tends not to be compatible with the way machines need to process things. They are great at tasks like driving or other functions we program them for, but they fail in several other areas.

Artificial intelligence is not the same as organic intelligence…

Striving to Create Organic Intelligence

When we create AI, we try to anticipate most of the scenarios it will face. That has been the approach with self-driving cars and other automated processes, but we can only program so many scenarios into a system. When we make a self-driving car recognize people, it applies specific criteria, and those criteria fail on people who don’t fit them. Children, or anyone else who varies from the norm, may not be detected at all.
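As a deliberately oversimplified sketch of that failure mode (the threshold and function here are hypothetical, nothing like a real perception stack), imagine a detector that flags a pedestrian only when a silhouette meets a height cutoff tuned on adults:

```python
# Hypothetical, oversimplified pedestrian rule: flag a detection only if
# the silhouette's estimated height clears a threshold tuned on adults.
MIN_HEIGHT_M = 1.4  # assumed cutoff, chosen for illustration only

def looks_like_pedestrian(silhouette_height_m: float) -> bool:
    """Fixed criteria: works for the cases it was designed around."""
    return silhouette_height_m >= MIN_HEIGHT_M

print(looks_like_pedestrian(1.75))  # typical adult: True
print(looks_like_pedestrian(1.05))  # small child: False, never detected
```

The rule isn’t wrong for the scenarios its designers imagined; it simply has no way to handle the ones they didn’t.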

Organic intelligence would allow a machine to actually think and solve this problem. However, it’s not as simple as that. The moment a machine becomes capable of thinking like us, it may no longer be the better option. We shouldn’t necessarily be striving to create machines with organic intelligence.

What we need is an AI that fits somewhere between those two types of intelligence.


Artificial Intelligence Can Refine Itself

Right now we are in the early stages of artificial intelligence. Many may argue otherwise, but what we call “AI” is often just parameters built into a program, with feedback to determine success. Yes, it’s more complex than that, but in the end it’s not too far off.

Currently we have programs that gradually improve themselves. Depending on the programming, the process can be either somewhat directed or completely random. Think of it like a maze: the AI could try promising general directions, or it could try every possible direction. It then determines the best route, but it isn’t “thinking”; it’s doing what we made it do.
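The maze analogy can be made concrete with a tiny breadth-first search (one standard way to mechanically try every direction; the grid here is made up for illustration). The program finds the best route without anything resembling thought:

```python
from collections import deque

# A toy maze: 0 = open, 1 = wall. Start top-left, goal bottom-right.
MAZE = [
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
]

def shortest_route(maze):
    """Breadth-first search: exhaust options in order until the goal
    appears. No insight, just the procedure it was given."""
    rows, cols = len(maze), len(maze[0])
    start, goal = (0, 0), (rows - 1, cols - 1)
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

print(shortest_route(MAZE))
# -> [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
```

The output looks like a solved problem, but every step was a blind, exhaustive check, which is exactly the distinction the paragraph above is drawing.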

Now imagine if we could truly combine the human element and AI: intelligence capable of refining itself without outside interference from us. There are several advantages to this, and at least one major downfall.

Is Self-Taught Truly Better?

AI can be great as it is now, but should an AI teach or create another AI? Is such a thing even possible? Some believe AI should learn the way children do, and that could be one way of looking at it!

If we assume we can build an AI that “ages”, with increasing abilities and recognition skills, could that AI then teach another one? Or is this approach just another failed attempt to create something smarter than us?

At what point does an AI become similar to us, teaching future generations and inheriting our poor qualities and faults? That is why the human element and AI are not always a good idea. Slow, gradual improvement can take an AI thousands of random guesses to achieve.

It took us thousands of years to reach where we are now. Even though a machine is faster, we could still be looking at a great deal of time spent making something learn the way we do.

The Complicated Problem of Becoming Intelligent

In the end, we aren’t really trying to create something that has our faults. But at the same time, we only know of one way to try to create something as smart as we are. That is why we so often strive to make the human element coincide with machines.

What we need to do is look beyond this. We need to create something that doesn’t think like us but solves most of our problems. Our brains simply don’t think in a way that is ideal for a machine we want to fix our problems. After all, if the machine thought the way we do, wouldn’t it be plagued by our same faults?

Taking Little Steps

One approach to this problem is simply to avoid it for now. Just because we can make somewhat smart artificial intelligence doesn’t mean we can create one that fulfills all of our current needs.

Many companies on the list “America’s Most Promising Artificial Intelligence Companies” focus on smaller aspects rather than a single complete solution for everything. These companies are striving to perfect each area they focus on. Some help the medical field, while others create visual AI solutions.

By breaking down larger problems we may be able to get as close as we can to machines creating other machines.


Ending Thoughts on The Human Element and AI

AI often does its best when we aren’t trying to implement our way of thinking entirely. In some cases we do need to, such as AI companions or social solutions, but in general it doesn’t help much.

But again, these are all opinions, as well as problems facing many developers who are creating long-term AI solutions. Depending on the complexity of the problem, some AI solutions will never need to truly think.

Overall, it might be easier to coin a whole new term for AI that is progressive and futuristic, as opposed to the AI we use now, which isn’t actually thinking but just doing what it’s programmed to do.

What do you think about AI and implementing the human element? How should we create a smarter, more future-proof AI?

Originally published at https://techvirtuosity.com on July 6, 2020.
