Okay so, I've had this theory, and this ongoing battle with the artificial intelligence community: they cannot get an AI chat bot to survive a one-on-one phone interview for a job at a corporation, or to convince the coordinator of a think tank to let the bot join. If you look at the chat bots we have today, it appears the grad students programming the software are getting quite good at what they do. In fact, they can often fool the person on the other side of an Internet forum or e-mail exchange for a good number of back-and-forths.
However, it soon becomes readily apparent whether the thing on the other side is a human or an AI chat bot, because the chat bot simply doesn't have the ability to think; it can only recombine information and recycle the sort of rhetorical debate points a seventh grader might use in debate class or debate club. In other words, these bots haven't passed the "Winslow AI test," which I named after myself, of course.
Yes, Alan Turing's Turing Test is often considered the standard. However, there are now artificially intelligent software programs working at help desks that can solve the caller's problem over 36% of the time, and transfer the call to a human when they cannot. Not long ago, there was an article in the New York Times discussing speech recognition software and this important point.
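To make that "solve it or hand it off" behavior concrete, here is a minimal sketch in Python of how such an escalation policy might look. It is purely illustrative: the resolve_with_bot function and the 0.6 confidence threshold are my own hypothetical stand-ins, not anything from the article or from a real help-desk product.

    # A hypothetical sketch of the resolve-or-escalate pattern described above.
    from dataclasses import dataclass

    @dataclass
    class BotReply:
        answer: str
        confidence: float  # the bot's own estimate that its answer resolves the issue

    def resolve_with_bot(question: str) -> BotReply:
        # Placeholder for a real help-desk AI; returns a canned, low-confidence reply.
        return BotReply(answer="Have you tried restarting the device?", confidence=0.35)

    def handle_call(question: str, threshold: float = 0.6) -> str:
        # Answer the caller if the bot is confident enough, otherwise escalate to a human.
        reply = resolve_with_bot(question)
        if reply.confidence >= threshold:
            return reply.answer
        return "Let me transfer you to a human agent."

    print(handle_call("My router keeps dropping the connection."))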
One of the questions and takeaways was this: does the human on the other end of the line have the right to know that they are not talking to another human, but to an artificially intelligent program? That is a very good point. However, if the human on the other side is fooled, then that program has, for all practical purposes, passed the Turing Test. But what happens when the conversation goes in six or eight different directions?
The reason I ask is that, typically - and I speak as a coordinator for a think tank which happens to operate online - we engage individual applicants in an ongoing dialogue to see if they can come up with original thoughts. If after several exchanges back and forth they cannot do that, then obviously it makes no sense to have them as a member of the think tank, at least not ours.
Another question might be this: if you were a human resources director, would you hire a chat bot to do the work of a human at, say, a call center? If the chat bot could fool you, then - given the correct voice inflection, accent, and knowledge of the topics people discuss when they call in - you just might seriously consider it.
Still, even if the chat bot is programmed not to argue but to offer constructive suggestions, will it be able to handle a wide variety of off-topic conversations, which may or may not be related to the question posed at the beginning of the phone call? I liken this to creating a video game that starts out as a virtual football game, and then one of the players is asked to get in their car, drive to the airport, and fly away in an airplane - something more like an aviation simulator.
It's not that this is impossible to do - it's just that it gets a lot more complicated, and it might take artificial intelligence and knowledge on the scale of Watson to make it work, and to guarantee the Matrix is ready for that contingency. Furthermore, the artificially intelligent system would have to learn to cope with whatever environment it was placed in. Humans can do that - the human mind is quite good at it - but right now artificial intelligence isn't. When it will be is anyone's guess, and quite frankly my estimate is that it might be soon. Indeed, I hope you will please consider all this and think on it.