A lot of recent research on artificial intelligence has involved building neural networks that resemble the structure of the human brain.  The question that keeps coming up is: “How do we think?”  Perhaps to create true artificial intelligence, the question should be: “How do we learn?”

Automated systems branded as “artificial intelligence” are often large databases of information and rules that have been carefully written and inserted by programmers.  They have been built to think and communicate like a human adult, rather than as machines that have learned through their own curiosity.

In order for a machine to think, communicate, and even feel as a human does, I believe its knowledge and behaviors must have been learned, not inserted.  The first true form of artificial intelligence will likely be very different from what has been depicted in science fiction movies, or even the systems that are branded as artificial intelligence today.  It will likely resemble a human infant or toddler.

I believe that a computer program will need to have three basic characteristics before it can be defined as true artificial intelligence:

1. A sense of curiosity that draws it to unfamiliar objects in its environment.

2. The ability to receive and identify positive and negative feedback from external sources.

3. The desire and capability to modify its behavior to increase positive feedback signals, and decrease negative feedback signals.
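The three characteristics above can be sketched as a tiny feedback-driven agent.  This is a minimal illustration, not a definitive design; the class name, the action labels, and the +1/−1 signal convention are my own assumptions:

```python
import random

class CuriousAgent:
    """A minimal sketch of the three characteristics:
    1. Curiosity: an action it has never tried always wins.
    2. Feedback: it accepts +1 / -1 signals from an external source.
    3. Adaptation: future choices shift toward reinforced actions."""

    def __init__(self, actions):
        self.actions = list(actions)
        self.visits = {a: 0 for a in self.actions}   # curiosity bookkeeping
        self.score = {a: 0.0 for a in self.actions}  # accumulated feedback

    def choose(self):
        # Curiosity: prefer anything untried; otherwise pick the action
        # with the best feedback-per-attempt so far.
        untried = [a for a in self.actions if self.visits[a] == 0]
        if untried:
            return random.choice(untried)
        return max(self.actions, key=lambda a: self.score[a] / self.visits[a])

    def feedback(self, action, signal):
        # Adaptation: record the external +1 / -1 signal for this action.
        self.visits[action] += 1
        self.score[action] += signal
```

An external “teacher” that rewards one action and punishes the other will, after a few rounds, leave the agent consistently choosing the rewarded action.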

One type of true artificial intelligence could be a program that teaches itself to play a game, without any prior knowledge of the game’s rules.  An example would be a program that can learn to navigate a maze without knowing what a maze is, or what the goal is.  The program could initially be programmed to simply move forward.  When it runs into a wall of the maze, it receives a negative feedback signal that causes it to back away from the wall and insert a new rule into its software, one that stops the movement before it hits that wall again.
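The maze walker described above can be sketched in a few lines.  This is only an illustration under my own assumptions: the grid layout, the “learned rule” representation as (position, direction) pairs, and the order in which alternative directions are tried are all choices I made, not part of any real system:

```python
MAZE = [
    "#####",
    "#S..#",
    "###.#",
    "#..E#",
    "#####",
]  # '#' = wall, 'S' = start, 'E' = exit

DIRS = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def walk(maze, max_steps=100):
    cells = [list(row) for row in maze]
    pos = next((r, c) for r, row in enumerate(cells)
               for c, ch in enumerate(row) if ch == "S")
    heading = "E"    # initial behavior: simply move forward
    learned = set()  # rules inserted after negative feedback
    for _ in range(max_steps):
        if cells[pos[0]][pos[1]] == "E":
            return pos                       # exit reached
        dr, dc = DIRS[heading]
        nxt = (pos[0] + dr, pos[1] + dc)
        if cells[nxt[0]][nxt[1]] == "#" or (pos, heading) in learned:
            # Negative feedback: insert a rule ("don't go that way from
            # here") and turn toward an open, un-restricted direction.
            learned.add((pos, heading))
            options = [d for d in DIRS
                       if (pos, d) not in learned and
                       cells[pos[0] + DIRS[d][0]][pos[1] + DIRS[d][1]] != "#"]
            if not options:
                return None                  # boxed in, give up
            heading = options[0]
        else:
            pos = nxt
    return None
```

The program knows nothing about mazes; every collision simply adds one rule, and the accumulated rules are what eventually steer it to the exit.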

Positive and negative reinforcement can come from a virtual process that is external to the artificial intelligence process, or from a human acting as the computer’s user.  Manually providing reinforcement to an artificial intelligence is a technique that I’ve named “Parental Programming”.  In this situation, the software is a curious and innocent young mind, while the human is a parent who teaches the software right from wrong.  The software can either restrict or reinforce each of its actions based on the type of feedback the parent provides for that action.  Eventually, that same piece of software may be capable of taking the role of a parent to a child process, allowing computers to teach true artificial intelligence to other computers.
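One way the parent-and-child arrangement could be sketched: the parent is any callable that approves (+1) or rejects (−1) a proposed action, so it can be a human prompt or, later, another trained program.  The class and function names here are my own illustration, not an established API:

```python
class Child:
    """Sketch of "Parental Programming": a parent approves or rejects
    each proposed action; rejected actions are restricted, approved
    ones reinforced."""

    def __init__(self, actions):
        self.allowed = list(actions)          # actions not yet restricted
        self.weights = {a: 1 for a in actions}

    def propose(self):
        # Prefer the most-reinforced action that is still permitted.
        return max(self.allowed, key=lambda a: self.weights[a])

    def learn(self, action, parent):
        # `parent` is any callable returning +1 (approve) or -1 (reject).
        if parent(action) > 0:
            self.weights[action] += 1         # reinforce the behavior
        elif action in self.allowed:
            self.allowed.remove(action)       # restrict the behavior

def teach(child, parent, rounds=5):
    for _ in range(rounds):
        action = child.propose()
        child.learn(action, parent)
    return child.propose()
```

Because the parent is just a callable, a child that has finished its lessons can itself be passed in as the parent for a fresh `Child`, which is the chained parent-to-child teaching the paragraph above imagines.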