DeTombe, D.J. (1994) Defining complex interdisciplinary societal problems. A theoretical study for constructing a co-operative problem analyzing method: the method COMPRAM. Amsterdam: Thesis publishers Amsterdam (thesis), 439 pp. ISBN 90 5170 302-3
Dorien J. DeTombe, Ph.D.
Chair Operational Research Euro Working Group
Complex Societal Problems
http://www.geocities.com/doriendetombe
chapter 4/9
4 THE COMPUTER AS A TOOL FOR THE ANALYSIS AND DEFINITION OF COMPLEX INTERDISCIPLINARY SOCIETAL PROBLEMS
4.0 Introduction
In the previous chapters we discussed some of the issues surrounding problems and human problem handling. We concluded that handling complex interdisciplinary societal problems is not easy. One of the reasons for this is the special nature of these problems. Since handling complex interdisciplinary societal problems is difficult, and society urgently needs to guide these problems, there is a need to support the problem handling process. Of all the means that might facilitate the problem handling process, we shall select particular problem handling tools and problem handling methods.
In this chapter we focus on tools that support the problem handling process. In chapters seven and eight we discuss a method that supports the problem handling process.
There are many tools that support the process of problem handling. Which tool is required depends on the kind of problem, the domain(s), the person(s), the complexity of the problem and the moment in the problem handling process[1]. Examples of such tools are paper and pencil, white boards, flip charts, overhead projectors, data recorders, video, the telephone and the computer[2]. There are tools for individual support and tools for group support. Of all the tools that can assist the problem handling process, we discuss the computer in more detail.
Looking at the contemporary performance of the computer in assisting the problem handling process, we see two extremes along the line from assisting the human being to replacing the human being. At one end there are computer programs that solve problems 'on their own', while at the other end there are programs that assist the human being in problem handling. These two extremes have been developed on the basis of different theoretical concepts: the first is based on the paradigm[3] of Artificial Intelligence, the second on conventional programming ideas[4]. Generating intelligence by machines is the subject of research of scientists in the field of Artificial Intelligence.
Problem solving is one of the areas of attention of Artificial Intelligence. In this field the computer itself is used as a problem handler. For the purpose of our discussion it is interesting to know for what kind of problems the computer can replace the human being in problem handling and how these problems are related to complex interdisciplinary problems. The expectation of how a computer can be used in supporting the process of handling complex interdisciplinary societal problems is expressed in expectation three:
the computer can be a useful tool in assisting the human being in the process of problem handling, but it cannot replace the human being
Research questions based on this expectation are research question 3a:
3a why can computer tools not replace the human being in the process of handling complex interdisciplinary societal problems?
In this chapter we discuss problem handling in Artificial Intelligence, where the computer is regarded as a problem handler in itself. The paradigm of Artificial Intelligence is based on the idea that it is possible to create artificial intelligence by machines, by means of 'intelligent' computer programs. These artificial intelligence programs may, in the end, exceed human performance in some areas and even replace[5] the human being in performing certain tasks[6] or solving certain types of problems.
We can operationalize a part of research question 3a with the research questions:
3a-1 what kind of problems are handled by programs built according to the paradigm of Artificial Intelligence?
3a-2 how are the problems that Artificial Intelligence programs focus on related to complex interdisciplinary societal problems?
In answering these questions we select the Artificial Intelligence research areas on 'general problem solving' programs and on 'knowledge based systems'.
4.1 The computer as a tool for problem handling
Mankind has always developed tools to assist in problem handling, for instance tools for transport, for building houses and for preparing food. Improvements in transport ranged from the invention of the wheel to the space-shuttle. Facilities for building houses have been improved by tools ranging from simple buckets for carrying sand and water to high-technology tools like draglines and computerized cranes. Since the invention of the computer, this instrument has been used as a tool to facilitate a diversity of tasks. The computer, starting as a computing device, has developed from an instrument for scientific purposes to an indispensable component of our modern, technology based society. In the last fifty years the computer has developed from a slow and very expensive mathematical tool of very limited power used by scientists, to a tool used by many people in almost every discipline for a wide range of purposes. The computer is now a common instrument for supporting the human being in the process of handling all kinds of problems in all areas of society. The speed, capacity, price and use of the computer have changed rapidly, but the main principle of processing data via the stored-program principle, the Von Neumann principle[7], remains the same[8].
4.2 Introduction to Artificial Intelligence
4.2.1 The history of Artificial Intelligence
Artificial Intelligence is an interdisciplinary field of research based on the paradigms of cognitive psychology and computer science. Artificial Intelligence belongs to cognitive science. Cognitive science is a generic term for the sciences that study the relation between mind and brain (Steels, 1992, p. 54). This interdisciplinary science contains elements of computer science, cognitive psychology, philosophy, linguistics, neuro-physiology, mathematics and the science of teaching.
The research field of Artificial Intelligence was founded at a conference in 1956, at the Dartmouth College in the USA (Newell & Simon, 1972, p. 884). At this conference a small group of scientists discussed the construction of intelligent computer programs, which they called 'artificial intelligence' programs in contrast to 'natural' intelligence. Among them were McCarthy, Minsky, Shannon, Newell & Simon, well-known names in the field of Artificial Intelligence today. The idea of building intelligent programs had been suggested earlier by others, for instance Turing and Shannon. Loevinger (1949) suggested getting computers to administer justice, Shannon (1950) suggested constructing computers that could play chess and Turing suggested, in an article published in 1950, that it should be possible to build machines that could think. The idea of building machines that could think is an old dream of mankind.
The goals of Artificial Intelligence were very ambitious. The researchers at the Dartmouth College thought that within ten to twenty years programs could be constructed that could think and talk like human beings and solve all kinds of problems; that it should be possible to build systems that would solve real-world problems by using intelligence; and that it should be possible to develop machines that could do the kind of things that are considered intelligent (Minsky[9], 1961; Charniak & McDermott, 1985, p. 9-11).
Of the goal of Artificial Intelligence Charniak & McDermott write (1985, p. 6, 7):
"(The) goal is to duplicate mental faculties of 'ordinary' people, such as vision and natural language........ The ultimate goal of Artificial intelligence research ......is to build a person or, more humbly, an animal."
Although not all researchers in Artificial Intelligence would go that far, it does indicate the direction of Artificial Intelligence. According to Steels (1992, p. 102), however, this is no longer the goal of Artificial Intelligence. Research has shown that there are technological[10], knowledge acquiring[11] and epistemological constraints[12], limits concerning formalizing[13] and limited rationality[14] (Simon, 1969), which make it clear that the goal of Artificial Intelligence should not be seen as the synthetic (re)construction of the human mind, but rather as gaining enough insight into intelligence to develop knowledge systems.
Luger & Stubblefield (1989) distinguished the following areas of attention in Artificial Intelligence research (p. 14-23): game playing, automated reasoning and theorem proving, expert systems, natural language understanding and semantic modelling, modelling human performance, planning and robotics, programming languages and software development environments, and machine learning.
We concentrate the discussion on general problem solvers and knowledge based systems.
General problem solvers belong, according to the division of Luger & Stubblefield (1989), to the area of automated reasoning and theorem proving, and knowledge based systems to the area of expert systems.
Other researchers distinguish five main areas (Charniak & McDermott, 1985) of attention in the field of Artificial Intelligence: the area of vision, speech and touch, of natural language, of problem solving, of machine learning and of manipulating objects. In this division both general problem solvers and knowledge based systems are part of the area of problem solving.
We concentrate the discussion here on problem solving based on tasks. To see problem solving as performing a task is an idea of Newell & Simon (1972). In discussing these programs we focus mainly on scientific and practical problems[15] and concentrate on the knowledge level[16].
4.2.2 Intelligence
Artificial Intelligence is only one of the sciences that study intelligence. On the one hand intelligence is studied in the field of Artificial Intelligence in order to be able to create artificial intelligence, to generate intelligence by machines, while on the other hand artificial intelligence is developed in order to study intelligence.
Charniak & McDermott (1985, p. 6) define Artificial Intelligence as follows:
"Artificial Intelligence is the study of mental faculties through the use of computational models."
In defining Artificial Intelligence, Luger & Stubblefield (1989, p. 593) emphasize search techniques. In their definition,
"Artificial Intelligence is the study of symbol systems for the purpose of understanding intelligent search."
Steels (1992, p. 4 ) defines the goal of Artificial Intelligence as:
"....the science that occupies itself with getting insight into the phenomenon of intelligence in order to create artificial intelligence."
What is intelligence? Intelligence is a concept that is still not completely defined[17]. The description Newell (1989, p. 399-440) gives is:
"1 Flexible behavior as an environment function
2 Showing adaptive behavior (rational and goal directed)
3 Acting in here and now
4 Acting in more or less complex and special environments
4.1 Perception of an enormous amount of changing details
4.2 Applying an enormous amount of knowledge
4.3 Control of the motor system, the voluntary movements
5 Using symbols and abstraction
6 Applying language, natural as well as artificial language
7 Learning from the environment
8 Acquisition of skills by development
9 Living an autonomous life in a social environment
10 Giving evidence of a consciousness and self consciousness"
All these points, except point nine, refer to cognitive abilities. In point nine Newell refers to a social ability, implying that, in his view, intelligence should not only comprise cognitive abilities but also social ability.
Steels (1992, p. 5) distinguishes two kinds of intelligence: behavioral intelligence and cognitive intelligence[18]. Of the two kinds of intelligence, Artificial Intelligence is mainly concerned with cognitive intelligence.
"Cognitive intelligence focuses on the concept of knowledge. Having knowledge implies the possibility of developing models, to extend these with data and to explain why certain solutions are best; to make new descriptions and learn new steps of abstraction and to teach others the knowledge that is necessary for the solution of the problem[19]." (Steels, 1992, p. 5)
Artificial Intelligence programs do not have to operate in the same way as human beings do, but the result must be equal or even better. However, some researchers claim that the brain works in much the same way as the computer does.
"....there is an unspoken premise ... that the way brains work is something like the way computers work." (Charniak & McDermott, 1985, p. 6)[20]
Penrose (1989) discussed a philosophical question: "Do computers have a mind?". He concluded that computers do not have a mind and are therefore unable to think. He then asked whether a computer is capable of intelligent behavior. He defines intelligent behavior as being self-conscious[21] and shifts the question to: "Does a computer have self-consciousness?". He denies this and so concludes that computers are not capable of intelligent behavior. Put this way we agree with him: computers do not possess self-consciousness, but we do not want to restrict intelligence to self-consciousness[22]. Coming back to the original question, "can a computer perform intelligent behavior?", our answer is that the contemporary computer can perform things which, when performed by humans, would be called intelligent, like drawing and performing statistical analysis. In this sense the computer is capable of intelligent behavior. Meanwhile, the same computer cannot perform things which, when performed by human beings, would be considered rather easy, for example understanding a message which contains small but noticeable mistakes, although some progress has been made here, for instance in word recognition involving small mistakes. Another example is being able to avoid obstacles[23]. Whether a computer can perform intelligent behavior thus depends on the way in which intelligent behavior is defined and on the tasks to be performed[24].
4.3 Artificial Intelligence principles
The principles that are used in Artificial Intelligence for building computer programs are:
1 Physical symbol hypothesis
The guiding principle of the representational Artificial Intelligence methodology is the physical symbol system hypothesis. This hypothesis means that concepts can be physically implemented: a physical symbol system has the necessary and sufficient means for intelligence (Newell & Simon, 1972).
The physical symbol hypothesis implies that symbols can be translated into a code that is understandable for the computer[25]. For describing these symbols, Artificial Intelligence often uses programming languages which have been specially developed for writing Artificial Intelligence programs like Prolog or LISP[26] (Steels, 1992, p. 59-104).
2 The state-space-search hypothesis
The state-space-search hypothesis states that problem solving is a matter of search in a problem space[27] (Newell & Simon, 1972). The problem space is defined by Newell & Simon as the space in which the solution of the problem can be found.
"Search is a problem solving technique that systematically explores a space of problem states, i.e., successive and alternative stages in the problem-solving process."
(Luger & Stubblefield, 1989, p. 14).
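The state-space-search hypothesis can be illustrated with a minimal sketch of breadth-first search in Python. The example is not from the thesis: the start state, goal state and successor function (a toy numeric puzzle) are invented here purely for illustration.

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Systematically explore the space of problem states, level by
    level, returning the sequence of states from start to goal."""
    frontier = deque([[start]])   # paths whose last state is still to be extended
    visited = {start}             # states already generated
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for successor in successors(state):
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])
    return None                   # the goal state is not reachable

# Toy state space: reach 11 from 2 with the operators 'double' and 'add one'.
successors = lambda n: [n * 2, n + 1]
print(breadth_first_search(2, 11, successors))   # [2, 4, 5, 10, 11]
```

Each list in the frontier is one of the "successive and alternative stages in the problem-solving process" of the quotation above; the search returns the first (shortest) sequence of operator applications that reaches the goal.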
3 The power versus knowledge hypothesis
The power versus knowledge hypothesis leads us back to the discussion on problem solving described in chapter three (Perkins, 1989). The question there was: "What is most needed for problem solving: general problem solving techniques, such as the several search techniques[28], or domain knowledge?". With regard to the power versus knowledge hypothesis the question is whether problem solving is more a matter of very powerful search techniques or more a matter of domain knowledge. The idea that a powerful search technique is a fruitful mechanism for solving a problem comes from early research on chess, which was regarded as search in an 'artificial world'. Although this research concluded that a powerful search technique is more important than domain knowledge, later research corrected this view[29]. The power approach regards intelligence as:
"...having a superior general referential mechanism, a superior memory or a superior speed." (Steels, 1992, p. 95).
The search hypothesis is applied to theorem proving and to general problem solvers.
The knowledge hypothesis states that domain knowledge is necessary for problem solving. A (human) problem solver needs situation- or application-specific methods to avoid unnecessary search. This can be done by taking a shorter route through the search space or by skipping specific parts of the search space that are extremely unrealistic (Steels, 1992, p. 94-96). Support for this hypothesis also comes from chess research (Perkins, 1989; De Groot, 1978), although this time from later research in the chess field. According to the knowledge paradigm a chess master does not use exhaustive searches of the problem space to examine all the possibilities, but uses large amounts of domain knowledge or schemata[30] that offer possible solutions in that particular situation (Steels, 1992, p. 95-96).
"The knowledge paradigm regards intelligence as possessing a large amount of practical domain knowledge." (Steels, 1992, p. 95)
The knowledge hypothesis is used in expert systems. Choosing between the search or the knowledge hypothesis is not an either-or matter, but more a matter of emphasis. Artificial Intelligence uses both search and knowledge for solving problems. Luger & Stubblefield (1989, p. 14) say that:
" ...the two most fundamental concerns of Artificial Intelligence researchers are knowledge representation and search."
4.4 Problem solvers based on the paradigm of Artificial Intelligence: general problem solvers and expert systems
4.4.1 The first general problem solvers
Researchers in the field of Artificial Intelligence who focused on general problem solving were inspired by two sources: the discussion in the field of psychology and learning theory on using problem solving techniques, and the development of programs in the field of theorem proving (Newell & Simon, 1972, p. 884, 885).
In the fifties and sixties there was a discussion in the field of psychology and learning theory on how to spend the teaching time as efficiently as possible when teaching problem solving techniques. Should problem solving be taught by teaching domain knowledge or by teaching general heuristics? Early research in the domain of chess suggested that specific domain knowledge is needed but is less important than general heuristics; certain basic rules would suffice. This inspired researchers in the field of Artificial Intelligence to build programs based on these principles (Perkins, 1989)[31].
The development of programs for automated reasoning and theorem proving, the oldest branches of Artificial Intelligence, was influenced by Whitehead and Russell, who wanted to treat mathematics as the purely formal derivation of theorems from basic axioms (Luger & Stubblefield, 1989, p. 15). Theorem-proving research was responsible for much of the early work in formalizing search algorithms and developing formal representation languages. This research led to the development of Newell & Simon's Logic Theorist[32], and to programs that are called 'general problem solvers'. The 'General Problem Solver' (GPS) of Newell & Simon (1961, 1963b) is a very well-known example of a general problem solver. The 'General Problem Solver' is based on the problem space hypothesis of Newell & Simon (1972), in which a problem is regarded as the difference between the current situation and the desired situation (Winston, 1984, 146-147)[33]. The current state is where we are now, the desired situation is the goal[34]. In the 'General Problem Solver', procedures are selected according to their ability to reduce the observed difference between the current state and the goal state. This approach is known as means-end-analysis (Newell, Shaw & Simon, 1957; Ernst & Newell, 1969). The 'General Problem Solver' uses heuristics to guide the search of the problem space in the most promising direction (Boden, 1988, p. 152).
"GPS is a metaphor denoting a particular control strategy built on top of the means-end-analysis idea." (Winston, 1984, p. 147).
The 'General Problem Solver' is still the foundation of most Artificial Intelligence programs today.
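The core of means-end-analysis can be sketched in a few lines of Python. The operators below (everyday household actions, each with preconditions and effects) are invented for illustration and are not taken from GPS itself; only the control strategy, reducing the observed difference and recursing on unmet preconditions as subgoals, follows the idea described above.

```python
# Each operator: the preconditions it needs and the facts it adds.
# The operators themselves are invented, illustrative examples.
OPERATORS = {
    "drive to shop": {"needs": set(),         "adds": {"at shop"}},
    "buy food":      {"needs": {"at shop"},   "adds": {"have food"}},
    "cook dinner":   {"needs": {"have food"}, "adds": {"dinner ready"}},
}

def means_end_analysis(state, goal):
    """Repeatedly pick an operator that reduces the observed difference
    between the current state and the goal state; when its preconditions
    are unmet, satisfy them first as a subgoal (recursively)."""
    plan = []
    while not goal <= state:
        difference = goal - state
        # choose an operator whose effects reduce the difference
        name, op = next((n, o) for n, o in OPERATORS.items()
                        if o["adds"] & difference)
        if not op["needs"] <= state:
            plan += means_end_analysis(state, op["needs"])  # subgoal first
            state = state | op["needs"]
        plan.append(name)
        state = state | op["adds"]
    return plan

print(means_end_analysis(set(), {"dinner ready"}))
# ['drive to shop', 'buy food', 'cook dinner']
```

Starting from an empty state, the difference with the goal selects "cook dinner"; its unmet precondition becomes a subgoal, which in turn selects "buy food", and so on, until a complete plan is found.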
Boden (1988, p. 153) states:
"As a simulation of human thinking, the General Problem Solver was very limited. It could solve only problems having a simple logical structure. It could tackle the small-scale problems only: complex problems of the same general type were insoluble, partly because of the demands made on its memory and goals stack.........The most radical flaw of the General Problem Solver approach was the assumption implicit in the term 'general problem solver', that all problems can be represented by a state space and that all solutions consist of search in a state space."
Boden (1988, p. 154) continues:
"Nevertheless, our still incomplete understanding of the origin and importance of the different types of problem representation owes much to the questions raised by the general problem solver and by Newell & Simon's more recent work."
4.4.2 Recent general problem solvers: SOAR and ACT*
Newell continued working in the field of general problem solvers. He wanted to construct a general theory of cognition and combined this with research on intelligent machines that could be used as research instruments. He tried to develop a general problem solver to prove his general theory of cognition. Newell claimed that a single set of mechanisms was capable of producing all human cognitive behavior. The program based on this concept is called SOAR[35] (Waldrop, 1988; Newell, 1989). The program SOAR is based on the problem space hypothesis of Newell & Simon (1972), and is a combination of specific knowledge and general heuristics. SOAR is a psychological model of natural intelligence combined with an expert system. SOAR is a developing system capable of intelligent behavior, that can learn and can handle many small problems such as, for instance, the eight-puzzle[36].
Another researcher working in this field is Anderson, a cognitive psychologist. Anderson looked for a unified theory of cognition, which he called the ACT* theory. He developed a framework named ACT[37] based on these ideas (Anderson, 1983).
ACT is a computer program intended as a general problem solver for skill acquisition. The ACT* theory is a model of cognition in which stored task-knowledge can be procedurally compiled. It represents knowledge as production systems[38]. The framework ACT can handle small Artificial Intelligence problems such as the water bucket problem[39] or the so-called 'cannibals and missionaries' puzzle[40]. The program ACT claims to be a skill-independent problem solving instrument, based on a general psychological theory of skill-learning, and claims to be able to predict the observed behavior. The ACT* theory is not based on serial functioning of information processing systems, but rather on parallel processing (Boonman & Kok, 1986). The ACT* theory was criticized by, among others, Marr (1982)[41]. Anderson developed another theory based on this criticism, which he named the rational analysis theory[42], described in his book 'The adaptive character of thought'. This theory can be considered, according to Anderson, as:
"...a new architecture theory within the ACT framework." (Anderson, 1990, p. 3)
On this basis he tried to prove that the ACT* theory was wrong, with a new production system (a computer program) called PUPS, but Anderson says:
"I failed to shake the old theory." (Anderson, 1990, p. 2, 3)
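To make the notion of a production system more concrete, the following minimal Python sketch shows the general mechanism only: condition-action rules that fire on the contents of a working memory until nothing new can be added. The rules and facts below are invented for illustration; they are not Anderson's actual productions.

```python
# Invented condition -> conclusion productions (not from ACT* itself).
PRODUCTIONS = [
    ({"has fur"},                       "is a mammal"),
    ({"is a mammal", "eats meat"},      "is a carnivore"),
    ({"is a carnivore", "has stripes"}, "is a tiger"),
]

def forward_chain(memory):
    """Fire every production whose conditions are all present in working
    memory, adding its conclusion, until no production adds anything new."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in PRODUCTIONS:
            if conditions <= memory and conclusion not in memory:
                memory.add(conclusion)
                changed = True
    return memory

facts = forward_chain({"has fur", "eats meat", "has stripes"})
print("is a tiger" in facts)   # the chained conclusions include 'is a tiger'
```

Each pass over the rules corresponds to one recognize-act cycle; learning in systems like SOAR and ACT* amounts to adding, compiling or strengthening such productions rather than changing the cycle itself.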
Comment
Both programs SOAR and ACT* try to imitate the brain functions of human beings in the problem solving task. The programs are based on the paradigm of cognitive psychology in which the human being is regarded as an information processing system. It is very interesting that both programs can learn. Rules are reinforced or strengthened as long as they are not found to lead to errors. When learning occurs, the program can jump to conclusions more rapidly. This is a form of machine learning.
The programs are developed as research instruments in the search for a unified theory of cognition. Based on these computational theories, both programs claim to be general problem solvers that can solve all kinds of problems. But until now they have only been able to solve some well-defined problems in a well-defined problem space, and they can only operate on very small and specific problems, such as geometry proofs, some textbook-like translations from English into French (Boden, 1988) and some small Artificial Intelligence problems. It is too early to tell whether SOAR and ACT* are general heuristic problem solving instruments. Both programs are still in the developmental phase.
Whether it is possible to find a unified theory of cognition is uncertain. A unified theory means integrating and explaining all the different small-scale cognitive theories. Until now we only have different small-scale cognitive theories to explain the human cognitive functions. The theories of Newell and Anderson have not (yet) reached that level.
Luger & Stubblefield state about the limits of Artificial Intelligence programs (1989, p. 600):
"Although the use of AI[43] techniques to solve practical problems seems well on its way, the use of these techniques to found a general science of intelligence is much more problematic. A number of researchers (Winograd & Flores, 1986; Weizenbaum, 1976) claim that the most important aspects of intelligence are not and, in principle, cannot be modelled, particularly not with a symbolic representation. These areas include learning and understanding and producing speech acts. Winograd & Flores' criticism is based on issues raised from the phenomenological viewpoint (Gadamer, 1976; Heidegger, 1962).
...Heidegger's and Gadamer's writings, however, represent an alternative approach to understanding intelligence. They attempt to show how the fundamentally unrepeatable nature of everyday life and human existence gives reality a significance that cannot be understood in terms of representation.
This is also the opinion of Searle (1980), Dreyfus (1985), and others.
...This context (the context of the everyday world) and human functioning within it, is not something explained by propositions and understood by theorems. ....We are inherently unable to place our knowledge and most of our intelligence behaviour into language, either formal or natural.
...There are many activities outside the realms of the scientific method that play an essential role in responsible human interaction; these cannot be reproduced by or abrogated to machines."
Boden (1988, p. 170) also asks:
"..........can there be a theory of problem solving where problem solving includes not only logically closed puzzles such as crypt arithmetic but scientific and everyday problems too.......?"
Fodor (1983), as a modularity theorist, denies that there can be a unified theory of cognition because problem solving also involves cognitively penetrable phenomena. This means that we not only solve problems rationally but also non-rationally (Tversky & Kahneman, 1974; Cohen, 1981)[44]. These latter studies suggest that we very rarely employ the laws of logic in the reasoning used for problem solving. Boden concludes that strict proofs of the computational adequacy of a given model for solving a problem can be provided only for relatively well-understood problems.
4.5 Expert Systems
Once it became clear in the mid-sixties that the results of developing general problem solvers were somewhat disappointing, at least for the near future, many researchers in this field sought a new challenge. They found inspiration in the ongoing discussion about problem solving in the field of psychology and learning theory, where new research pointed out that chess masters not only used general heuristics but also much domain specific knowledge, in this case schemata[45] (Chase & Simon, 1973; De Groot, 1978). This means that in solving a problem one not only uses general problem solving techniques but also domain specific knowledge[46].
Artificial Intelligence researchers concluded that considerable domain knowledge is needed for solving a problem and that domain knowledge is more important than general problem solving techniques. Many researchers in the field of Artificial Intelligence therefore switched from developing general problem solvers to developing expert systems[47]. In the beginning, many researchers were convinced that they could build programs that could equal or even replace the human expert in solving domain specific problems (Perkins, 1989).
Knowledge technology works mostly with the domain knowledge hypothesis, which is known as the knowledge paradigm. By 'knowledge' is meant facts and the relations between facts, represented in facts and rules[48]. This knowledge is placed in the knowledge base. Several techniques are used for the representation of knowledge: Artificial Intelligence either uses knowledge representation techniques such as production rules, semantic nets and frames with slots, or represents the knowledge by using predicate logic[49].
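One of these representation techniques, frames with slots, can be sketched very compactly. In the Python sketch below, frames are ordinary dictionaries and the frame names, slots and values are invented for illustration; the point is only the mechanism of slots with default values inherited along an 'is_a' link.

```python
# Frames sketched as Python dicts; slots and values are illustrative.
bird = {"can_fly": True, "covering": "feathers"}   # generic frame with defaults
penguin = {"is_a": bird, "can_fly": False}         # specific frame overrides a slot

def get_slot(frame, slot):
    """Look up a slot value, inheriting from the 'is_a' parent frame
    when the slot is absent in the frame itself."""
    while frame is not None:
        if slot in frame:
            return frame[slot]
        frame = frame.get("is_a")
    return None

print(get_slot(penguin, "can_fly"))    # the specific slot overrides: False
print(get_slot(penguin, "covering"))   # inherited from the bird frame: feathers
```

The same default-with-exception behavior is what makes frames convenient for the typical, rather than strictly logical, knowledge that human experts use.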
The knowledge level is an abstraction level for studying knowledge and expertise, and concerns knowledge and knowledge applications. Steels distinguishes three components of a knowledge system on the knowledge level: tasks, models and methods (Steels, 1992, p. 111). The tasks are the activities the problem solver must perform. Steels states that there is mostly one main task, for instance the planning of a development process in a factory. This task can be decomposed again and again into smaller sub-tasks, down to the level of implementation. According to this hypothesis, problem solving is a modeling activity. The methods are responsible for the organization of the model construction process.
Artificial Intelligence programs in this area follow two lines of development: one, more scientific, deals with how to represent knowledge as symbols and how to elicit knowledge from experts and other sources, while the other is the application side of Artificial Intelligence programs (fundamental versus applied research). Chapter five gives some examples of the application side of Artificial Intelligence.
In addition to being a science, Artificial Intelligence is also a technology. These two fields were originally closely connected, which resulted in a fruitful mutual influence; regrettably, the application side now seems to have little or no connection with the scientific side (Steels, 1992)[50]. The technology is used for practical goals. One of these practical goals is to distribute, or make more available, the scarce knowledge that only a few experts possess. In developing expert systems for practical use, the technological approach does not always follow the scientific prescriptions. A so-called expert system in practice is often only an upgraded database[51].
Luger & Stubblefield (1989, p. 16) define an expert system as follows:
"Expert systems are constructed by obtaining (this) knowledge from a human expert and coding it into a form that a computer may apply to similar problems. Expert knowledge is a combination of a theoretical understanding of the problem and a collection of heuristic problem-solving rules that experience has shown to be effective in the domain."
Some examples of expert systems[52]
One of the earliest systems to exploit domain-specific knowledge in problem solving was DENDRAL (Luger & Stubblefield, 1989, p. 16). DENDRAL was developed at Stanford University (Lindsay, Buchanan, Feigenbaum & Lederberg, 1980). It was a successful program that could, with only a few trials, find the correct structure of organic molecules out of millions of possibilities.
A very well known expert system is MYCIN, a system for medical diagnosis and therapy of bacterial infections of the blood and spinal meningitis. MYCIN was developed between 1970 and 1980, also at Stanford University (Shortliffe, 1976). The goal of constructing MYCIN was twofold: a scientific goal and a practical goal (Steels, 1992, p. 18, 19). The motive for building an expert system for medical diagnosis was that research on prescribing medicines for infection indicated that only 13% of the prescriptions by the doctors were based on rational arguments, whereas 66% were based on non-rational arguments and the rest was disputable[53]. MYCIN works as follows: the program asks the doctor questions about the condition of the patient. The doctor gives answers based on his or her observations of the patient. In an interactive dialogue with the doctor the program analyzes the symptoms and provides a diagnosis. Only a medical expert can use the program. MYCIN was never used in real life. The development of the program stopped at the prototype level. Researchers still refer to MYCIN because MYCIN established the methodology of contemporary expert systems development (Buchanan & Shortliffe, 1984). It was one of the first programs that addressed problems of reasoning with uncertain or incomplete information.
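MYCIN's way of reasoning with uncertain information can be illustrated with its certainty factor mechanism. The sketch below is not MYCIN itself, which was a far larger Lisp program; the rules and findings are invented for illustration, and only the standard combination rule for two positive certainty factors is shown.

```python
# Illustrative sketch of MYCIN-style certainty factors (CFs). Assumes the
# standard combination rule for two positive CFs supporting the same
# hypothesis: cf_combined = cf1 + cf2 * (1 - cf1). Rule contents are invented.

def combine_cf(cf1, cf2):
    """Combine two positive certainty factors for the same hypothesis."""
    return cf1 + cf2 * (1 - cf1)

# Hypothetical rules: (required findings, hypothesis, CF attached to the rule)
RULES = [
    ({"gram_negative", "rod_shaped"}, "e_coli", 0.7),
    ({"grows_aerobically"}, "e_coli", 0.4),
]

def diagnose(findings):
    """Accumulate evidence for each hypothesis from the observed findings."""
    belief = {}
    for required, hypothesis, cf in RULES:
        if required <= findings:                  # rule premises satisfied
            prior = belief.get(hypothesis, 0.0)
            belief[hypothesis] = combine_cf(prior, cf)
    return belief

print(diagnose({"gram_negative", "rod_shaped", "grows_aerobically"}))
# both rules fire: 0.7 + 0.4 * (1 - 0.7), approximately 0.82
```

Two weak pieces of evidence thus reinforce each other without the combined belief ever exceeding 1, which is the behaviour the certainty factor calculus was designed to give.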
Another interesting expert system is the Dipmeter Advisor, an expert system for interpreting geophysical measurements (Smit, 1984b). The program gives advice for analyzing a potential oil or gas reservoir based on a graphic display. The data are visually represented. Analyzing the data offered by the computer is difficult, since only a few experts in the world can interpret the graphic material produced by the Dipmeter. The Dipmeter Advisor is capable of reasoning and interacting with the expert to discover possible interpretations of the data (Steels, 1992, p. 26-32). The Dipmeter Advisor was built by Schlumberger, starting in 1977, and is still in use. It is an active notebook that allows the expert to browse through the data resulting from measurements. Not only can the expert zoom in on certain zones, but the program can also bring interesting zones to his or her attention.
4.5.1 Knowledge based systems
a From expert systems to knowledge based systems
Development is in progress in the field of expert systems. In scientific research on expert systems the focus is partly on testing methods and techniques and partly on constructing cognitive models. The technological part is developing applications, knowledge based systems for practice[54].
It became clear that the goals set out in Artificial Intelligence in the late fifties were too ambitious, at least for the near future. The goal of building an expert system was to build programs that could solve problems as well as or even better than an expert. Only in a few cases has this standard been met. The development of the programs proceeded much more slowly than expected. Developing programs that can act intelligently seems to be far more difficult and complicated than it appeared in the beginning[55]. The programs that have been built cannot beat the expert and can seldom replace human expertise (Steels, 1989)[56].
"An expert system is a computer program originally designed to assist the human expert in a limited but difficult real-world domain." (Steels, 1987, p. 9,10)
That is one of the reasons why the name 'expert system' was changed to 'knowledge based system'. Another reason was that the knowledge in the knowledge base is derived not only from experts, but also from other sources, such as books and brochures. A third reason is that a knowledge based system does not always focus on expert knowledge; sometimes it focuses on middle level or even low level knowledge[57], everyday knowledge[58]. Although the terms 'expert system' and 'knowledge based system' are both in use, often to indicate the same thing, we prefer the term 'knowledge based system', because it describes the kind of tool better than the term 'expert system' does.
The idea of a knowledge based system is to have knowledge available combined with an inference engine[59], that is, the reasoning rules with which it is possible to analyze a problem. The starting point for developing a system is not the problem, but the knowledge needed for handling the problem. The reasoning of a knowledge based system is modeled after the reasoning of a human expert.
"The reasoning of an expert system is modelled after the reasoning of a human expert. An expert system is an active notebook based on a model of the human expert." (Steels, 1989, p. 10)
The knowledge based system can be approached by asking questions. The knowledge based system keeps a record of the reasoning and the reasoning process. By using a user-friendly interface and/or a natural language interface the knowledge based system can be used directly by people working in the field.
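The behaviour described above, applying rules from a knowledge base while keeping a record of the reasoning, can be sketched in a few lines. This is a minimal forward-chaining illustration, not a real knowledge based system; the medical-style rules are invented examples.

```python
# Minimal sketch of a forward-chaining inference engine that keeps a record
# of its reasoning, as the knowledge based systems described above do.
# The rules are invented; real systems use far richer representations.

RULES = [
    # (premises, conclusion)
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_lumbar_puncture"),
]

def infer(facts):
    """Apply if..then rules until no new facts can be derived."""
    facts = set(facts)
    trace = []                                   # the record of the reasoning
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(premises)} -> {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer({"fever", "stiff_neck"})
for step in trace:
    print(step)
```

The returned trace is what allows such a system to answer the user's question "why?": each derived fact can be justified by pointing at the rule that produced it.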
A knowledge based system can be used to raise the level of expertise of an individual or to ensure that the expert does not overlook things. The system does not make a person a super expert, but can give a person the status of a semi-expert. A knowledge based system can help the new expert to enhance his or her knowledge of the domain (Steels, 1989).
A knowledge base consists of facts and rules in a well-defined part of a domain in a well-defined problem space. In knowledge based systems, problem solving is regarded as a search through a space of possible solutions. Problem solving is, in this case, trying to find the most probable path(s) within the problem space.
A knowledge based system often gives an answer with a probability added to it, such as the probability that there is oil in the ground[60].
In 1982 Newell pointed out that too little attention was paid to the knowledge level in the field of building knowledge based systems. All the attention was directed to the representation and the implementation of the knowledge. There is a need for a methodology of structured knowledge acquisition: knowledge technology and knowledge engineering.
For a knowledge based system a model of knowledge must be developed: a model of domain theories and strategies, meta-knowledge and general problem solving strategies. Making a model of the knowledge must be seen as separate from implementing the knowledge[61]. A method that combines some of the research techniques of social science for the elicitation of knowledge for knowledge based systems is 'Kads'.
The Kads methodology, developed by Wielinga & Breuker, is an attempt to base knowledge based systems on more theoretical concepts. Kads, or CommonKads as it is now called, originally stood for Knowledge Acquisition and Data Structuring, although the acronym no longer has any meaning (Wielinga & Breuker, 1986; Schreiber, 1993). It is a structured methodology for the development of knowledge based systems, together with a set of tools to support that structured development. Knowledge is structured by way of conceptualization and formalization. The Kads methodology provides a catalog of general interpretation models used to approach classes of domains. The goal of Kads is to specify a conceptual model. To acquire knowledge of a domain, and to model the expertise, several different knowledge elicitation techniques are used: selecting and interviewing experts, sorting cards, making questionnaires, and using thinking-aloud protocols and observation protocols. It provides a collection of software tools, data structures for representing conceptual structures, and problem solving techniques. Kads is a kind of general heuristic for analyzing problems. The Kads methodology assumes that many applications have the same conceptual structure and the same problem solving strategy. The constructors of the Kads methodology assume that certain problem solving tasks share stable elements, such as inference techniques, strategies and task structures, and that only the domain knowledge changes.
There is, for instance, an analogy in problem analyzing in physiotherapy and in analyzing faults in audio-systems/appliances (Breuker & Wielinga, 1988; Winkels, 1992; Bredeweg, 1993).
Comment:
A knowledge based system cannot perform tasks or solve problems that human beings do not know how to tackle, although it can make inferences which are very difficult for the human experts because of the many data involved (Den Haan, 1992).
A knowledge based system can provide some specific knowledge on a very small part of the domain, but cannot add new knowledge automatically to the domain.
b How does a knowledge based system work?
The problem solving proceeds according to the problem space hypothesis of Newell & Simon (1972). The program searches for a path from the initial state to a goal state. The initial state and the goal state are given; the program must find a path between them. The system uses the special search techniques of Artificial Intelligence. The search through the knowledge base is done with a search system that excludes alternatives: first the system checks the if..then rules in the knowledge base. In the search of a knowledge based system there is no reflection on previous decisions; when knowledge is not useful at that moment, it is no longer used.
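The search for a path from the initial state to a goal state can be sketched as follows. The state space here is a toy graph given explicitly; in a real knowledge based system the successor states would be generated by the if..then rules.

```python
# Sketch of the problem space hypothesis: search for a path from a given
# initial state to a goal state. The state space is a toy example.
from collections import deque

def find_path(successors, initial, goal):
    """Breadth-first search for a path from the initial state to the goal."""
    frontier = deque([[initial]])
    visited = {initial}                 # exclude already-explored alternatives
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                         # the goal is not in the problem space

# Hypothetical state space: each state lists its successor states
SPACE = {"start": ["a", "b"], "a": ["c"], "b": ["goal"], "c": ["goal"]}
print(find_path(SPACE, "start", "goal"))   # ['start', 'b', 'goal']
```

Note that when the goal lies outside the problem space the search simply fails, which is exactly the limitation discussed later in this chapter: the program cannot find a solution that is not already contained in its problem space.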
4.6 Constructing the conceptual model in a knowledge based system
a Knowledge acquisition
In knowledge acquisition, two aspects can be distinguished: knowledge elicitation, and knowledge analysis and structuring. According to Lenat (1983), knowledge acquisition is trying to make the underlying structure of a rule explicit. Knowledge acquisition takes place by extracting knowledge from several sources, like human experts and literature. By 'literature' we mean articles, books, papers, manuals etc. Knowledge acquisition is a difficult part of the construction of a knowledge based system: eliciting the knowledge from the expert and translating it into a formal knowledge base turns out to be very hard. In order to elicit the knowledge one can use all kinds of social science research techniques for data collection, such as interview techniques, observation techniques, the thinking-aloud method[62] and the Delphi method (van Dijk, 1992; Vennix, Gubbels, Post & Poppen, 1989).
The reasoning of a knowledge based system is modeled after the reasoning of the human expert, although a knowledge based system does not exactly copy the way that humans solve a problem. The conceptual model is influenced by the mental idea of the knowledge engineer(s)[63]. This mental idea is the starting point on which the knowledge engineer will base his or her idea(s), and from which (s)he will ask questions of the experts or analyze literature. The mental idea of the knowledge engineer will change moderately during the knowledge elicitation period.
b Knowledge based systems and changing knowledge
Knowledge is not a static object. An example of this concerns the well-known expert system XCON from Digital Equipment Corporation (DEC). XCON[64] has been one of the largest knowledge engineering projects. Initiated in the late seventies, the system has been in use since 1981 and development is continuing. XCON is an expert system that designs the computer configuration scheme[65] of the VAX. It contains over 5000 rules with 20,000 components. About 50% of the knowledge[66] in the knowledge based system changes every year (Soloway et al., 1985). It is not easy to change the knowledge, in this case the production rules, in a knowledge based system, because of the complexity and interconnections of the rules.
c Concerning problem solving techniques
We discussed before what kind of problem solving techniques experts use to solve problems. An expert system focuses on specific domain knowledge and specific problems in the domain. But what can be done with atypical problems in the domain? Perkins (1989) says that research indicated that experts confronted with atypical problems in their field not only used domain knowledge but had to switch to general heuristics closely related to the domain to be able to solve the problem.
It seems therefore that domain knowledge and general heuristics related to the domain form a good combination for handling problems. Some support for this statement comes from the field of reading comprehension. Palinscar (1986) and Baker & Brown (1984) enhance the reading comprehension ability of poor readers with their reading method called 'Reciprocal Teaching' (Palinscar, 1986). The reason for the success of the program is that it teaches not only reading comprehension but also meta-cognitive skills closely related to the domain. These meta-cognitive skills can be regarded as general domain related heuristics. The conclusion Palinscar (1986) and Baker & Brown (1984) draw is that most experts use domain knowledge and problem solving techniques and, when needed, general heuristics closely related to the domain. In Artificial Intelligence, by contrast, the inference engine uses general meta-heuristics[67] such as the problem space hypothesis of Newell & Simon (1972). It also uses some general, domain independent search techniques, like the generate and test algorithm[68] and the breadth-first and depth-first search techniques[69]. This kind of search rapidly becomes too exhaustive; the generate and test search is not feasible for anything other than the smallest search spaces. For larger search spaces one can change to a heuristic search, in which domain knowledge is used to guide the search through the state-space. A solution to the problem is then not guaranteed, but there is a reasonable chance of finding one. Hill-climbing[70] is a common heuristic search technique: out of all possibilities the best one is selected. The difficulty, however, is to know which is the best.
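The contrast between a blind search and a heuristic search such as hill-climbing can be illustrated with a small sketch. The scoring function plays the role of the domain knowledge; the toy maximization task is invented for illustration.

```python
# Sketch of hill-climbing: always move to the best-scoring neighbour.
# The score function stands in for the domain knowledge that guides the
# search; the toy task (maximizing f over the integers) is invented.

def hill_climb(start, score, neighbours):
    """Repeatedly move to the best neighbour; stop at a local optimum."""
    current = start
    while True:
        best = max(neighbours(current), key=score)
        if score(best) <= score(current):
            return current              # no neighbour is better: local optimum
        current = best

# Toy problem: find the x that maximizes -(x - 7)**2, stepping by 1.
score = lambda x: -(x - 7) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(0, score, neighbours))   # 7
```

The sketch also shows the weakness named in the text: hill-climbing only ever looks one step ahead, so on a score landscape with several peaks it may stop at a local optimum, never knowing that a better possibility exists elsewhere.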
d Problem solving with knowledge based systems in relation to reality
A knowledge based system is a tool for problem solving. It supports a person in handling a problem. We will discuss two assumptions that underlie knowledge based systems. One assumption is that by building a knowledge based system the knowledge, the argumentation and the solutions of an expert can be built into a system. This assumption raises some questions. What kind of knowledge does an expert use to handle the problem, and can this knowledge be put into a knowledge base? Is it possible to elicit the knowledge, even when it is elicited with combined methods, as is done in structured methodologies such as Kads? Is the expert willing and able to verbalize all his or her knowledge[71]? Is the expert willing to give away all his or her secrets? Can the knowledge be put into facts, rules and relations that will have the same impact as the expert meant it to have? Often the expert reports that he or she uses intuition[72] to handle a problem (Crombag, 1984; Bree, 1989). Can one formalize intuition into if...then rules, or into a different kind of formalization of Artificial Intelligence? Knowledge based systems use a closed search system in which there is no place for intuition, although part of intuitive thinking can be put into the knowledge base in advance in the form of a heuristic search. Does an expert always solve the same problems in the same way? Experts often have different opinions on how to solve the same problem. What can be done if some knowledge contradicts other knowledge, or when someone is not quite sure about the knowledge? A knowledge based system uses strict rules where a human being would use vague rules, especially when it comes to qualitative terms such as a 'large' person, a product of 'good quality', this goes 'very fast' etc. (Negoita, 1985).
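The difference between a strict rule and a vague rule for a qualitative term such as a 'large' person can be illustrated with a fuzzy membership function in the sense of Negoita (1985). The thresholds below are invented for illustration.

```python
# Strict versus vague rules for the qualitative term 'large' person.
# The heights (170, 185, 195 cm) are invented thresholds for illustration.

def is_large_strict(height_cm):
    """Strict rule: a person is 'large' exactly when taller than 185 cm."""
    return height_cm > 185

def is_large_fuzzy(height_cm):
    """Vague rule: degree of membership in 'large', rising from 170 to 195 cm."""
    if height_cm <= 170:
        return 0.0
    if height_cm >= 195:
        return 1.0
    return (height_cm - 170) / 25

# The strict rule flips abruptly at the threshold; the fuzzy rule grades.
print(is_large_strict(184), is_large_fuzzy(184))   # False 0.56
print(is_large_strict(186), is_large_fuzzy(186))   # True 0.64
```

Under the strict rule a person of 184 cm is simply not 'large', while one of 186 cm is; the fuzzy rule assigns both nearly the same degree of membership, which is closer to how a human being uses the term.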
A knowledge based system supports rational decision making. Knowledge based systems assume that derivations based on rational arguments are better than derivations based on other arguments. As we have seen in chapter three[73], many of the decisions in medical diagnosis were based on non-rational arguments[74] (Crombag, 1984; Snoek, 1989). In reality, many of the decisions made are based on rules of thumb (Bree, 1989). Implementing decisions based on rules of thumb in a knowledge based system gives more robustness, more absoluteness, to the decision rules. The consistency of the decisions supported by a knowledge based system is considered one of its advantages. But the construction of a knowledge based system makes the often arbitrary inference mechanism absolute in the rather arbitrarily constructed knowledge based system.
A second assumption of knowledge based systems is that the knowledge and the environment in which the knowledge operates will remain more or less stable, even though perhaps some components change[75].
There is the possibility that users of a knowledge based system will use the program as if there is universal, time independent, context free knowledge[76]. Many knowledge based systems are used as static instruments. Another point of consideration is that a knowledge based system is created in connection with a particular situation at a particular time. The knowledge base implies that other situations to which the system will be applied, will react in more or less the same way. But other situations can be different in such a way that the advice of the knowledge based system is not applicable there. Even the original situation can change so much that the advice of the knowledge based system can no longer be applied.
These questions make it doubtful whether all the knowledge of an expert can be put into a knowledge based system. What can be done is to make sophisticated check lists, putting part of the knowledge into a knowledge based system. Some questions are answered; many remain open, waiting to be answered. However, where high level knowledge, such as that of a doctor or a lawyer, is concerned, it is not yet possible to put all the knowledge of an expert into a knowledge based system[77].
Weizenbaum (1976) questions the right to use a knowledge based system by questioning the morality of letting machines make decisions in such ethically vulnerable domains as law, politics and business. In the Netherlands the law strives for justice. This is different from merely applying the rules of law. Often, merely applying the rules leads to great injustice[78].
Knowledge engineer[79]
Not all the knowledge of a domain can be put into a knowledge base. One has to omit certain aspects of the domain. This selection is made by the knowledge engineer, based on the conceptual model of the problem. In order to do this the knowledge engineer must be able to decide what is relevant and what is not. However, it remains difficult to cut off a part of the domain in such a way that it adequately covers the problem and at the same time does not go too far beyond the comprehension of the designers.
4.7 First and second generation knowledge based systems
Although the first expert systems were rather successful, it was later realized that the knowledge that was implemented was rather shallow. The production rules were implemented directly, without considering the theory on which the model was based.
The first knowledge based systems, later called the first generation knowledge based systems by Steels (1987), work with first level knowledge, facts and rules from a small and specific part of some domain. Evaluations of the first generation knowledge based systems show that facts and rules alone are not always enough to provide good answers to questions asked of the system, and that deeper knowledge is also needed in addition to commonsense knowledge[80]. This knowledge can be very difficult to analyze. The improved systems are called second generation systems (Steels, 1989).
The second generation of knowledge based systems includes domain knowledge and domain related problem solving techniques, and sometimes general problem solving techniques and domain theories.
There is a distinction between support knowledge, procedural knowledge and strategic knowledge. The production rules, however, hide these different kinds of knowledge. These rules are limited in their transparency: they give no further explanation and cannot be used outside the domain. They are not built on a theory.
4.8 Theories about knowledge based systems
Theory formation on expert systems has stagnated since 1975.
"The knowledge of expert systems is seldom studied from a scientific point of view." (Steels, 1989)
Recent developments in the field of building knowledge based systems show a tendency to trivialize the problems involved in building a knowledge based system. Because building a knowledge based system is very difficult, there is a tendency to skip a part of the hard labor that is needed for knowledge acquisition and to focus not on reasoning but on a
"...set of decision steps which is completely determined in advance." (Steels, 1989)[81]
This can lead to false expectations about the use of the system. It can also lead to false expectations concerning the development and programming of the system, and results in a weak knowledge based system. Steels (1989) says that some Artificial Intelligence researchers see knowledge engineering as just a fancier word for programming.
A recent trend in building knowledge based systems is the connection of several knowledge based systems. Until now, a system exists only for one specific kind of problem. An idea for the future is to link several systems in order to assist, for instance, the whole production plan of a factory, from marketing to selling and repairing (Steels, 1992, p. 46). Another idea is to build knowledge based systems in which the expert can change the knowledge base him or herself (Steels, 1992, p. 48).
4.9 Summary and conclusions
Programs developed with the Artificial Intelligence paradigm in relation to handling complex interdisciplinary societal problems
The research questions discussed in this chapter are:
3a why can computer tools not replace the human being in the process of handling complex interdisciplinary societal problems?
3a-1 what kind of problems are handled by programs built according to the idea of Artificial Intelligence?
3a-2 how are the problems that Artificial Intelligence programs focus on related to complex interdisciplinary societal problems?
In answering these questions we have limited ourselves to the kind of Artificial Intelligence programs that are specially developed for problem handling. General problem solvers perform remarkably well and can learn from experience; they solve different problems and are general in the sense that they can solve problems unknown to the system. However, until now they have been able to handle only small, very artificial, well-defined problems in a well-specified domain, problems that have already been solved. One should be very careful in using research evidence, collected from a search in a limited artificial situation such as chess, as evidence on how to handle problems in situations that are far more complicated and where the problem space is much wider or more diffuse. The idea of a search in a space can only be based on the state-space-search hypothesis, in which, in principle, the solution is included in the problem space. One must be aware that the problem space in chess, although large, is very limited. One cannot simply use this situation as an argument for searching in the far more complicated situation of empirical problems. Luger & Stubblefield (1989, p. 14):
"Much of the early research in state-space-search was done using common board games such as checkers, chess and the 16-puzzle[82]."
"Most games are played using a well-defined set of rules; this makes it easy to generate the search space and frees the researcher from many of the ambiguities and complexities inherent to less structured problems. The board configurations used in playing these games are easily represented on a computer, requiring none of the complex formalisms needed to capture the semantic subtleties of the real-world problem domains."
Artificial Intelligence and chess problems differ enormously from the complex interdisciplinary problems we are focusing on in our research[83]. These programs are not capable of supporting, let alone of replacing the human being in its problem handling capacity in relation to complex interdisciplinary societal problems.
Knowledge based systems focus on domain related, small problems, within the boundaries of a domain, and mostly only on a very small part of that domain[84] (Jackson, 1986; Steels, 1989).
Luger & Stubblefield (1989, p. 17) say that:
"... most expert systems have been written for relatively specialized, expert level domains. These domains are generally well studied and have clearly defined problem-solving strategies. Problems that depend on a more loosely defined notion of 'common sense' are much more difficult to solve by these means."
Knowledge based systems work with the idea that one can compare similar situations. Little attention is given to the context of the knowledge, or to the idea of living in changing situations in a changing world.
Building a knowledge based system can take a long time[85], during which it is likely that the problem and/or the environment will change as part of the development process. The result can be that the system cannot be used for the goals it was supposed to serve[86], since the knowledge based system is not fit to work in a totally different situation[87].
How do these problems relate to complex interdisciplinary societal problems? Knowledge based systems cannot work with problems outside their problem space, which is often the case with new and unexpected problems such as complex interdisciplinary societal problems. For policy problems they are therefore not suitable: these problems are too broad, too complex and too ill-defined. At first sight, the knowledge of several experts put into a knowledge base for handling a problem could be compared with several experts discussing how to handle a problem. On closer inspection, however, several major differences become apparent. First, knowledge based systems only focus on problems in a special part of a domain. Second, the experts are not confronted with each other's knowledge in talking about the problem; the knowledge is structured by the mental idea of the person(s) who elicited the knowledge from the experts. Third, not all the knowledge of the expert concerning the problem can be put into the knowledge based system[88]. When something unusual happens for which the program is not prepared, the program cannot give an answer. Fourth, a knowledge based system loses flexibility after implementation.
For handling complex interdisciplinary societal problems a knowledge based system can, at most, give some help with a part of the problem (DeTombe, 1989).
Anderson (1990) comments critically on the kind of problems Artificial Intelligence, including his own work, focuses on. Anderson realizes that Artificial Intelligence is very much preoccupied with small artificial and academic problems, problems which differ considerably from the everyday problems a person is confronted with. Anderson says (1990, p. 192):
"Problem solving is a major field of research in artificial intelligence and a substantial field in cognitive psychology. From an adaptionist perspective, both areas have chosen a strange set of tasks to focus on: There are the puzzles and games - such as chess, Tower of Hanoi, Rubik's cube, and the eight puzzle - and there are the academic activities, like mathematical and scientific problem solving (to which I have devoted much of my research). Such problem solving has little adaptive value, and one can question whether our problem-solving machinery has evolved to be adapted to such tasks.
This is not to argue that research on such domains is without value. There is applied value in understanding academic problem solving (even if academic problem solving has no relationship to adaptation)."
After acknowledging this he remarks that real life, everyday problems (Anderson, 1990, p. 193):
".....provide a startling contrast with what is usually studied in research on problem solving. They involve problems in domains where we typically induce causal rules rather than are explicitly told them. ....They also differ from research problems in that the rules tend to be probabilistic and the problem solving involves fairly explicit considerations of varying costs and benefits."
Research on Artificial Intelligence programs is still in the developmental phase. It might be interesting to speculate whether future Artificial Intelligence programs will be able to solve some complex interdisciplinary societal problems. If so, what conditions should these programs fulfil?
In looking at the phases of handling complex interdisciplinary societal problems, as described in chapter three, we see that one should first be aware of a problem. With a general problem solver or a knowledge based system it is always the human being who notices that there is a problem and then turns to the computer for help. There are some situations in which the computer itself notices the problem: in a strictly defined world, like the artificial world of controlling a machine, the computer can notice and recognize that something is going wrong (monitoring systems). In this case the computer can be the first to register that there is a problem. However, this can only be the case in a strictly defined artificial world. In the real world, things are not so strictly defined as in the artificial world of controlling a machine. For awareness of a problem one needs a human being. One cannot construct programs that draw attention to problems that cannot be anticipated. Awareness is the first phase of the first sub-cycle of problem handling. The other phases of the first sub-cycle work toward defining the problem. Artificial Intelligence programs start with already defined and solved problems. The second sub-cycle is about finding interventions that can change the problem, whereas a computer program cannot find a solution unless the answer is already in the program. For complex interdisciplinary societal problems the answers and the solutions are not known in advance. Anderson says that one can question whether our problem-solving machinery has evolved to be adapted to such tasks.
We do not believe that the Artificial Intelligence approach to problem handling will evolve smoothly into handling complex interdisciplinary societal problems. In our view, handling complex interdisciplinary societal problems needs a different approach.
We may conclude that the programs we have discussed based on the paradigm of the Artificial Intelligence are not able to replace the human being in the problem handling process of complex interdisciplinary societal problems. At most these programs can assist the human being in some parts of the process of handling complex interdisciplinary societal problems, as will be seen in chapter five.
Artificial Intelligence research does give some interesting insight into the phenomena of intelligence and problem solving and can handle small artificial problems but the question of whether those programs can replace the human being in handling complex interdisciplinary societal problems must be answered negatively, for the following reasons:
- Artificial Intelligence programs solve domain-specific and, until now, small problems;
- Artificial Intelligence programs solve problems that have already been solved before;
- Artificial Intelligence programs imply that the world is static, that the problem and the environment will not change over a certain period of time.
[1] See chapters two and three.
[2] By 'computer' we refer to a programmable automaton, a machine that performs according to a program. The machine cannot operate without a program.
[3] A paradigm according to Kuhn (1970) is the standard framework that is the basis for theory development in a certain field.
[4] Conventional programs are also called standard programs by Steels (1989/1992).
[5] One of the main questions in Artificial Intelligence is whether it is possible to generate intelligence by machines, by a programmable automaton such as a computer. The original goal of Artificial Intelligence was to replace the human intelligence needed for performing some tasks by intelligently performing machines. Although contemporary researchers in this field do not directly aim at replacing the human being by machines, the original goals of Artificial Intelligence and some applications in the field of vision (a research area within Artificial Intelligence) indicate that, should Artificial Intelligence succeed in creating intelligence by machines, it is not unthinkable that that part of human activity will be replaced by machines.
In contemporary Artificial Intelligence research one can distinguish two streams of researchers: the 'hard' stream and the 'soft' stream, a distinction based on the idea of the extent to which the intelligence of a human being can be replaced. The 'hard' stream assumes that it is possible to imitate the human brain by a kind of artificial brain.
The 'soft' stream is of the opinion that it does not matter whether we can reproduce human intelligence identically or not; it studies the possibility of modeling intelligence. Artificial Intelligence systems may work quite differently from the human brain and still be successful (Luger & Stubblefield, 1989, p. 3-25).
[6] The computer already exceeds and replaces the human being in many tasks, for instance in numerical tasks and searching databases.
[7] The Von Neumann principle is the idea that a program can be coded and stored in the main memory in the same way as the data (Neumann, 1963). When the control unit is designed to extract the program from memory, decode the instructions and execute them, a computer's program can be changed by changing the contents of the computer's memory instead of by rewiring the control unit, as was done in the first computers, the Mark I in 1944 and the Eniac in 1946. The data and instructions are processed sequentially (Brookshear, 1991).
[8] Although there are interesting attempts at simulating parallel processing based on the ideas of connectionism (McClelland, St.John & Taraban, 1989; Rumelhart, McClelland & The PDP Research group, 1986).
[9] "Marvin Minsky's well-known essay, Steps toward Artificial Intelligence, ...reflects very well the general body of knowledge in artificial intelligence that was pooled at the Dartmouth conference...." (Newell & Simon, 1972, p. 884).
[10] Technological constraints refer to the constraints of a computer.
[11] The constraints of knowledge acquisition refer to the 'learning' of the knowledge system itself concerning new knowledge (Steels, 1992, p. 101).
[12] Knowledge theoretical (epistemological) constraints refer to the possibilities of a physical system (human or machine) to use knowledge that operates in this world (Steels, 1992, p. 86).
[13] Constraints of formalization refer to the limitations on putting something into a formal language, as Prigogine (1987) and Gödel showed (Hofstadter, 1970).
[14] Limited rationality concerns not only human beings but also mechanical systems, which are limited in relation to time and space, limited in their capacity to develop true theories, and limited in perception.
[15] In developing Artificial Intelligence programs one encounters, according to Steels (1992, p. 53-56), different fundamental problems: problems in the field of philosophy, scientific problems, and technical and organizational problems.
- A philosophical question is: 'Is it possible to generate artificial intelligence?'
- Scientific questions: 'What is intelligence; what is knowledge; what is problem solving; how is knowledge structured; how is knowledge acquired'.
- Technical questions: 'How can we make computers that translate programs in a higher programming language into machine language?'
- Organizational questions: 'How can we make organizations use knowledge based systems'.
[16] Like many other sciences, Artificial Intelligence distinguishes three levels of describing intelligence and intelligent systems: the physical level, the symbol level and the knowledge level.
The physical level is the level of machine language (the level of bits) or neurones; the symbol level is the level of software. The knowledge level is the level of models of knowledge, methods and tasks. This level is independent of implementation. In knowledge based systems this level concerns knowledge of the domain, problem solving methods and problem solving tasks.
[17] In the field of learning theory there is much discussion over what intelligence is and what constitutes intelligent behavior.
[18] Cognitive intelligence focuses on knowledge. Cognitive intelligence implies, according to Steels, being able to develop models, to explain these on the basis of data, to explain why a certain solution is the best, to learn new descriptions and new steps of derivation, and to teach others the knowledge necessary for problem solving. Behavioral intelligence implies only very weak models of reality, models that can only be applied in one way, as in an ant colony or some automated action. This aspect of intelligence is mainly studied in the fields of Cybernetics, pattern recognition and connectionism.
Behavioral intelligence will lead to solving problems according to simple behaviorist pattern functions. There is often a simple connection between sensors and actuators. At this moment it is not always clear which human tasks are behavioral and which tasks are cognitive. Behavioral intelligence occurs mainly in sensorimotor activities when there is no time or attention to think; cognitive intelligence occurs mainly in problem solving activities like writing a text, making a medical diagnosis or legal decision making. Many tasks, like driving a car and rowing, demand behavioral as well as cognitive intelligence (Steels, 1992, p. 5 and further).
[19] Translation by the author.
[20] We do not subscribe to the idea that the computer works the way the human brain works (section 2.9.3.1, Rumelhart, 1989).
[21] See also point ten of Newell in this chapter. Of all the aspects Newell mentions concerning the description of intelligence, Penrose selects only this point. However, if he were to include some of the other aspects Newell mentioned, he might conclude, regarding those points, that a computer is capable of intelligent behavior.
[22] We subscribe to Newell's more extended definition of intelligence in section 4.2.2.
[23] In robotics some progress has also been made. However, although it can avoid obstacles, a computer-directed robot is not (yet) able to vacuum clean our house.
[24] See also Steels, 1992, p. 87.
[25] For the computer, an understandable code consists of bytes: a string containing a combination of zeros and ones.
[26] Prolog and LISP are third generation languages (see Brookshear, 1991).
[27] See section 2.9.3.2.
[28] See section 4.6.c.
[29] See section 4.6.c for results of early and later research in the field of chess (Perkins, 1989).
[30] See section 3.3.3.
[31] See discussion power versus knowledge hypotheses in section 4.3, item 3 and section 4.5.
[32] The Logic Theorist proved many of the theorems that were described in Whitehead & Russell's Principia Mathematica (Whitehead & Russell, 1925/27).
[33] See also section 2.1.
[34] See also the definition of problem solving in section 2.1.
[35] SOAR stands for State, Operator And Result.
[36] See section 2.9.3.2.
[37] ACT stands for Adaptive Control of Thought.
[38] Production systems consist of a set of production rules, a working memory and an inference mechanism. Production rules are if - then rules (condition - action pairs).
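Such a production system can be sketched in a few lines. The rules, facts and function names below are invented for illustration; the loop shows forward chaining, in which every rule whose conditions are satisfied by the working memory fires, until no rule adds a new fact.

```python
# A minimal production system sketch: a working memory of facts,
# if-then production rules (condition - action pairs), and a simple
# forward-chaining inference mechanism.
RULES = [
    ({"has_fur", "gives_milk"}, "mammal"),   # hypothetical rule
    ({"mammal", "meows"}, "cat"),            # hypothetical rule
]

def forward_chain(working_memory):
    """Repeatedly fire every rule whose conditions hold until quiescence."""
    changed = True
    while changed:
        changed = False
        for conditions, action in RULES:
            if conditions <= working_memory and action not in working_memory:
                working_memory.add(action)   # the rule 'fires'
                changed = True
    return working_memory

facts = forward_chain({"has_fur", "gives_milk", "meows"})
print(sorted(facts))  # 'mammal' and then 'cat' have been derived
```

Real inference engines add conflict-resolution strategies for choosing among simultaneously applicable rules; this sketch simply fires them all.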
[39] These problems are two well-known problems used to train students in the Artificial Intelligence way of problem solving.
The water-bucket problem is about how to measure out exactly four liters of water. There are only two buckets: one can hold three liters of water and one five liters. There is a tap and a sink.
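The water-bucket problem can be solved mechanically by searching the space of bucket states. The sketch below (our own illustration, not from the text) uses breadth-first search, so the first solution found uses the fewest moves.

```python
from collections import deque

# Breadth-first search over bucket states (a, b): a is the amount in the
# three-liter bucket, b the amount in the five-liter one. The moves are:
# fill a bucket at the tap, empty it into the sink, or pour one into the other.
def solve_buckets(target=4, caps=(3, 5)):
    start = (0, 0)
    parents = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if target in state:
            path = []                # walk back to the start to get the path
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        a, b = state
        ca, cb = caps
        ab = min(a, cb - b)          # amount that fits when pouring a into b
        ba = min(b, ca - a)          # amount that fits when pouring b into a
        for nxt in [(ca, b), (a, cb),                     # fill a, fill b
                    (0, b), (a, 0),                       # empty a, empty b
                    (a - ab, b + ab), (a + ba, b - ba)]:  # pour
            if nxt not in parents:
                parents[nxt] = state
                queue.append(nxt)
    return None

print(solve_buckets())
# [(0, 0), (0, 5), (3, 2), (0, 2), (2, 0), (2, 5), (3, 4)]
```

The path printed reads: fill the five-liter bucket, pour it into the three-liter one, empty the small bucket, pour the remaining two liters across, refill the large bucket, and top up the small one, leaving four liters.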
[40] Another example of the cannibals and missionaries type of problem is the problem of the goat, the cabbage and the wolf. The description of the problem is as follows: a goat, a cabbage and a wolf have to cross a river in a boat that can only contain two elements. The goat, when left alone with the cabbage, will eat the cabbage; the wolf, when left alone with the goat, will eat the goat. The problem is to get all three across the water without any being eaten.
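This puzzle too reduces to a state-space search with constraints. The sketch below assumes the classic formulation with a ferryman who rows at most one passenger per crossing (an assumption of ours; the footnote leaves the rower implicit), and rejects any state in which an unsafe pair is left unattended.

```python
from collections import deque

ITEMS = ("goat", "cabbage", "wolf")
UNSAFE = [{"goat", "cabbage"}, {"wolf", "goat"}]  # pairs that may not be left alone

def safe(bank):
    """A bank is safe if it contains no unsafe pair without the ferryman."""
    return not any(pair <= bank for pair in UNSAFE)

def solve():
    # state: (frozenset of items on the left bank, side the ferryman is on)
    start = (frozenset(ITEMS), "left")
    parents = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        left, side = state
        if not left:                       # everything has crossed
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        here = left if side == "left" else frozenset(ITEMS) - left
        for passenger in [None] + list(here):   # cross alone or with one item
            new_left = set(left)
            if passenger:
                (new_left.discard if side == "left" else new_left.add)(passenger)
            nxt = (frozenset(new_left), "right" if side == "left" else "left")
            # the bank opposite the ferryman is now unattended
            unattended = nxt[0] if nxt[1] == "right" else frozenset(ITEMS) - nxt[0]
            if safe(unattended) and nxt not in parents:
                parents[nxt] = state
                queue.append(nxt)
    return None

print(len(solve()) - 1)  # minimum number of crossings: 7
```

Breadth-first search guarantees the seven-crossing solution is minimal; the key move is ferrying the goat back once so that the wolf and the cabbage are never left with it.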
[41] See for this discussion also Boden, 1988, p. 145 and further.
[42] In his book "The adaptive character of thought" Anderson (1990) emphasizes the rational analysis of human cognition. His thoughts were inspired by the highly critical words of Marr (1982) concerning production systems architectures like ACT*. Anderson's former ideas and new ideas as stated in the books from 1983 and 1990 respectively are incompatible. Anderson says:
"....(the) old book on cognitive architecture and the new on rational analysis did not mix,...(1990, p. X)....The problem is that such a rational analysis is potentially a challenge to the cognitive architecture approach that I have largely followed... There is danger in following these ideas, that they might lead to a renunciation of what I have spent most of my professional life building...Although I have not yet come to a resolution of the architecture-versus-rational analysis juxtaposition, I doubt that it will be resolved as a decision between the two: One should begin with a rational analysis to figure out what the system should be doing and then worry about how the prescriptions of a rational analysis are achieved in the mechanisms of an architecture. I think the rational analysis in this book provides a prescription for developing a new architectural theory within the ACT framework and one that will not be much different from ACT*."(1990, p. XI).
[43] AI= Artificial Intelligence.
[44] See also section 3.6.
[45] See section 3.3.3.
[46] See discussion in section 4.3 item 3 and in section 4.4.1.
[47] Later also called knowledge based systems.
[48] The knowledge is defined here as first order knowledge. See section 3.8.2.
[49] An example of a semantic net is: a specification of an animal might be a mammal. A specification of a mammal may be a cat. An instantiation of a cat can be Rosa, and Rosa is the name of a specific cat. All the derived concepts inherit all the general aspects of concepts on higher levels. So Rosa has all the qualities of a cat, of a mammal and of an animal in general. A semantic net can be represented as: animal--> mammal--> cat--> Rosa.
A semantic net should be clearly distinguished from a semantic network as it is defined in cognitive psychology. See chapter three.
By frame is meant entities (objects) with attributes and values. Frames have slots. The frame 'cat' can have several slots: leg, eye, tail. The values of these slots are respectively four, two and one. Attributes can be hunger, health, kittens.
Predicate logic works as follows. When the fact is that Anna is the mother of Nina and Anna is the daughter of Tara, it can then be derived that Nina is the granddaughter of Tara.
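The semantic net and the frame slots above can be rendered in a few lines of code. In this sketch (our own illustration) each concept points to the concept it specializes, so that an instance inherits the attributes of every concept above it; the attribute values for 'animal' and 'mammal' are invented in the spirit of the example.

```python
# The net animal --> mammal --> cat --> Rosa, with inheritance of attributes.
IS_A = {"mammal": "animal", "cat": "mammal", "Rosa": "cat"}
ATTRIBUTES = {
    "animal": {"alive": True},                  # invented general attribute
    "mammal": {"gives_milk": True},             # invented general attribute
    "cat": {"legs": 4, "eyes": 2, "tails": 1},  # the slot values from the footnote
}

def inherited(concept):
    """Collect attributes from the concept and all its ancestors;
    more specific concepts override inherited values."""
    attrs = {}
    while concept is not None:
        attrs = {**ATTRIBUTES.get(concept, {}), **attrs}
        concept = IS_A.get(concept)
    return attrs

rosa = inherited("Rosa")
print(rosa["legs"], rosa["gives_milk"], rosa["alive"])  # 4 True True
```

Rosa thus has all the qualities of a cat, of a mammal and of an animal in general, exactly as the footnote describes.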
[50] See also chapter five.
[51] See also chapter five.
[52] In giving some examples of expert systems we focus on those expert systems that were a source of inspiration for other researchers in this field.
[53] See also Crombag (1984) in section 3.3.4.
[54] See chapter five.
[55] Many components of intelligent behavior are based on actions that are very difficult to define within the constraints of a computer program. One of the difficulties of a translation program, for instance, is that words in a natural language are not strictly defined. Likewise, in a knowledge based system it is very difficult to handle common sense knowledge.
[56] Comparing a knowledge based system with the performance of an expert, one sometimes finds that the knowledge based system performs better than the expert (Den Haan, 1992). But as soon as knowledge or skills are required that go a little beyond the strictly defined part of the domain, an expert performs better than a knowledge based system.
[57] See chapter five, the 'Saving System'.
[58] By 'everyday knowledge' we mean knowledge in a certain field that is used everyday. This must not be confused with common sense knowledge.
[59] The inference engine is a program, consisting of problem solving strategies and rules, that is able to make inferences on the basis of the available knowledge.
[60] See section 5.2, the Oil Company.
[61] Although already in the way in which the knowledge is structured, the final implementation is taken into account.
[62] See section 2.9.2.
[63] See also section 3.3.
[64] See also section 5.2.
[65] This is the computer and the peripherals. Peripherals are, for instance, a printer or a modem.
[66] Here knowledge is used more as data.
[67] An example of a meta-heuristic is: "Test procedure first.". A meta-heuristic is a (higher level) heuristic on using heuristics.
[68] Generate and test consists of the following steps: generate a possible solution; test to see whether this is actually a solution by comparing the chosen point, or the end point of the chosen path, to the set of acceptable goals. If a solution has been found, quit; otherwise return to generating a possible solution (Rich, 1991).
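These steps can be sketched directly. The generator, the goal test and the toy problem below are invented for illustration; the control loop follows the footnote: generate, test, quit on success, otherwise continue.

```python
import itertools

# Generate and test: try candidate solutions one by one against a goal test.
def generate_and_test(generate, is_goal):
    for candidate in generate():
        if is_goal(candidate):
            return candidate       # a solution has been found: quit
    return None                    # generator exhausted, no solution

# Toy illustration: find three digits that sum to 15 and have a product of 80.
def candidates():
    return itertools.product(range(10), repeat=3)

solution = generate_and_test(
    candidates,
    lambda c: sum(c) == 15 and c[0] * c[1] * c[2] == 80)
print(solution)  # (2, 5, 8)
```

The weakness the footnote implies is visible here: without any heuristic, the generator may enumerate a very large number of candidates before hitting a solution.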
[69] Breadth-first search looks for the destination in a search tree among all nodes at a given level before proceeding to the branches descending from those nodes. Depth-first search explores one path of the search tree in depth first.
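The two orders can be contrasted on the same small tree. The tree below is invented for illustration; the only difference between the two functions is the data structure holding the frontier, a queue versus a stack.

```python
from collections import deque

# A small search tree: each node maps to its children, left to right.
TREE = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
        "D": [], "E": [], "F": []}

def breadth_first(root):
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()      # FIFO: finish a level before descending
        order.append(node)
        queue.extend(TREE[node])
    return order

def depth_first(root):
    order, stack = [], [root]
    while stack:
        node = stack.pop()          # LIFO: follow one branch all the way down
        order.append(node)
        stack.extend(reversed(TREE[node]))  # keep left-to-right order
    return order

print(breadth_first("A"))  # ['A', 'B', 'C', 'D', 'E', 'F']
print(depth_first("A"))    # ['A', 'B', 'D', 'E', 'C', 'F']
```

Breadth-first visits B and C before any of their children; depth-first exhausts the branch under B before ever reaching C.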
[70] Hill-climbing is a variant of generate-and-test, but the test function is augmented with a heuristic function that provides an estimate of how close a given state is to a goal state.
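A minimal hill-climbing sketch, with an invented one-peak toy function: at each step the heuristic score of every neighbour is computed, the best neighbour is taken if it improves on the current state, and the search stops otherwise, which on rugged landscapes may be only a local optimum.

```python
# Hill-climbing: generate-and-test augmented with a heuristic score that
# estimates how close a state is to the goal.
def hill_climb(start, neighbours, score):
    current = start
    while True:
        best = max(neighbours(current), key=score, default=current)
        if score(best) <= score(current):
            return current          # no improving neighbour: stop
        current = best

# Toy illustration: climb an integer function with a single peak at x = 7.
f = lambda x: -(x - 7) ** 2
peak = hill_climb(0, lambda x: [x - 1, x + 1], f)
print(peak)  # 7
```

With a single peak the climb always succeeds; with several peaks the same loop can get stuck on whichever one is nearest the start.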
[71] See Snoek, 1989, section 2.9.1.
[72] There is an on-going discussion about the definition of intuition. Is it experience combined with knowledge and deep insight? Is it heuristics, or automated problem solving techniques (see De Groot, 1965)?
[73] See section 3.6.
[74] This was one of the reasons for building the medical expert system MYCIN.
[75] See XCON section 4.6.b.
[76] See section 3.8.1.2.
[77] See section 5.5, the Chemical Company.
[78] An example was given by Overbeke (1992). He describes a case in which a relatively poor woman had been given too high a salary for some months. When the firm recognized the mistake they demanded the money back. Strictly, according to the application of the law, the firm was right. However, because the woman used the money for daily support of her family there was not much money left. Demanding the money back would create more injustice than letting the woman keep the money. The case was settled in favor of the woman who only had to pay back a symbolic amount of money.
[79] The knowledge engineer elicits the knowledge and analyzes and structures the knowledge.
[80] An example of commonsense knowledge is, for instance, knowing that the cup of coffee is on the table and the table is not on the cup of coffee; or that in a restaurant, the restaurant script, ordering a table does not mean that one wants to buy a table. See also section 3.8.1.1.
[81] See also chapter five.
[82] The 15-puzzle is in principle the same as the 8-puzzle, see section 2.9.3.2.
[83] See chapter one.
[84] See also chapter five.
[85] See chapter five.
[86] An exception can be formed by so-called time- and environment-independent problems, problems that will not change, such as chess.
[87] See also chapter five.
[88] See also chapter five.
© Dorien J. DeTombe, All rights reserved, update September 2004