Does AI Need to be Equal to Humans?

by Chris McGinty

This is a question I’ve heard addressed a few times. If we never create AI that has the same kind of intuition and intelligence that humans have, or the same kind of memory and recall, will the entire AI experiment have failed?

I’m wondering because what people fear with AI is that it will become too humanlike and, out of a survival instinct, end up killing humankind. That wouldn’t matter as much if AI could never get to the point where it fears for its survival.

People are trying to get AI to meet or beat human ability in most cases. They want AI to be able to write entertaining novels. They want AI to be able to write listenable music. They want AI to practice law. They want AI to do everything for humans, without needing that pesky human oversight to make sure that everything is going okay.

To me, it seems that you would want something more like the difference between scribes and an inkjet printer. The scribes weren’t supposed to be writing the books; they were supposed to be copying them. If your printer, rather than printing your car insurance card, decided to give you an impressionistic painting of your insurance card, you would probably be kind of pissed when the cop looked at you like, “What is this?”

I can see how AI trained to think and reason creatively in fields that depend on progress could help us advance at a higher rate, but it feels like anywhere there’s creativity and progress, you also want oversight. Maybe for that reason it doesn’t matter if we ever create AI that can do it all. Maybe there’s still a part of the process that we want humans to do better.
