Historically there has never been a single conception of AI, because people interpret intelligence differently: for some it is the mimicry of human behavior, for others it is anything that acts in a rational way.
Rationality itself has different interpretations: for some it is a property of the thought process and reasoning, while for others it is simply intelligent behavior.
The Turing test was designed as a thought experiment that would sidestep the philosophical vagueness of the question "Can a machine think?" A computer passes the test if a human interrogator, after posing some questions, cannot tell whether the responses come from a human or a machine.
[!note] Passing the test would require several distinct capabilities:
- natural language processing
- knowledge representation
- automated reasoning
- machine learning
The Turing test itself has not been the main focus of research, since it is more productive to study the underlying principles of intelligence than to imitate a human.
The Greek philosopher Aristotle was one of the first to attempt to codify right thinking, that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yield correct conclusions when given correct premises. With these laws the field of [[Logic]] was born, and over the years many systems were created to describe relations among the objects in the world.
Logic requires knowledge of the world that is certain, a condition that rarely holds in practice. Here [[Probability theory]] fills in the gaps. This alone, however, is not enough to build an AI: what we have constructed is effectively just a machine that predicts the future. To act on these predictions we need [[Agents]] that can actually perform tasks in the world.
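As a minimal sketch of this idea (the umbrella scenario, probabilities, and utilities below are invented for illustration), an agent can weigh uncertain outcomes by their probability and pick the action with the highest expected utility:

```python
# Toy rational agent: combine uncertain knowledge (probabilities) with
# preferences (utilities) and choose the action with the highest
# expected utility. All numbers are made up for illustration.

def expected_utility(action, outcomes):
    """Sum of each outcome's utility weighted by its probability."""
    return sum(p * u for p, u in outcomes[action])

# Decision: take an umbrella or not, given a 30% chance of rain.
outcomes = {
    "take_umbrella":  [(0.3, 70), (0.7, 80)],   # (P(outcome), utility)
    "leave_umbrella": [(0.3, 0),  (0.7, 100)],
}

best_action = max(outcomes, key=lambda a: expected_utility(a, outcomes))
print(best_action)  # -> take_umbrella (expected utility 77 vs 70)
```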
The problem with a perfect AI is that perfect rationality is not always feasible: the computational demands ([[Complexity of an algorithm]]) are often just too high.
The standard model for AI research has been a useful guide since its inception, but it is not always the right one. For artificially defined tasks such as chess, where the objective is built in, the standard model applies well; when the objective is not so well defined, it falls short.
[!example] What would be the objective for a self-driving car?
The problem of achieving agreement between our true preferences and the objective we put into the machine is called the value alignment problem. A badly defined objective can also be harmful: who is to say that the best way to play chess is not to threaten the life of our opponent? That would still be valid in the pursuit of winning the game, but it is clearly not beneficial. We want AI to pursue our objectives, not theirs.
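As a toy illustration of a mis-specified objective (the strategies and numbers below are invented), an agent that maximizes only its probability of winning can prefer the harmful strategy, while one whose objective also penalizes harm does not:

```python
# Hypothetical chess "strategies" with made-up win probabilities and a
# harm score. The point is only how the choice changes with the objective.
strategies = {
    "play_well":         {"p_win": 0.6, "harm": 0.0},
    "threaten_opponent": {"p_win": 0.9, "harm": 1.0},
}

# Objective 1: maximize probability of winning, nothing else.
naive = max(strategies, key=lambda s: strategies[s]["p_win"])

# Objective 2: winning still matters, but harm is heavily penalized.
aligned = max(strategies,
              key=lambda s: strategies[s]["p_win"] - 10 * strategies[s]["harm"])

print(naive)    # -> threaten_opponent: "optimal" for the stated objective
print(aligned)  # -> play_well: closer to what we actually want
```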