## Interval-type and affine arithmetic-type techniques for handling uncertainty in expert systems (English) Zbl 1106.68101

Summary: Expert knowledge consists of statements $$S_j$$ (facts and rules). These facts and rules are often true only with some probability. For example, if we are interested in oil, we should look at seismic data. If in 90% of past cases the seismic data were indeed helpful in locating oil, then we can say that if we are interested in oil, then with probability 90% it is helpful to look at the seismic data. In more formal terms, the implication “if oil then seismic” holds with probability 90%. Another example: a bank $$A$$ trusts a client $$B$$, so if we trust bank $$A$$, we should trust $$B$$ too; if statistically this trust was justified in 99% of the cases, we can conclude that the corresponding implication holds with probability 99%.
If a query $$Q$$ is deducible from facts and rules, what is the resulting probability $$p(Q)$$ that $$Q$$ holds? We can describe the truth of $$Q$$ as a propositional formula $$F$$ in terms of the $$S_j$$, i.e., as a combination of statements $$S_j$$ linked by operators like &, $$\vee$$, and $$\neg$$; computing $$p(Q)$$ exactly is NP-hard, so heuristics are needed.
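To see why exact computation is costly, here is a minimal sketch (our own illustration, not from the paper): even under the strong simplifying assumption that the statements $$S_j$$ are independent, evaluating $$p(Q)$$ exactly means enumerating all $$2^n$$ truth assignments; without the independence assumption, the problem is NP-hard.

```python
# Hypothetical helper for illustration only: exact p(Q) by brute-force
# enumeration of all 2^n truth assignments, assuming the S_j independent.
from itertools import product

def exact_prob(formula, probs):
    """formula: function mapping a tuple of n booleans to True/False.
    probs[j] = P(S_j); statements assumed mutually independent."""
    total = 0.0
    for world in product([False, True], repeat=len(probs)):
        if formula(world):
            weight = 1.0
            for truth, p in zip(world, probs):
                weight *= p if truth else 1.0 - p
            total += weight
    return total

# Example query Q = (S_0 & S_1) v not S_2 with P(S_j) = 0.9, 0.8, 0.7;
# by inclusion-exclusion: 0.72 + 0.3 - 0.72 * 0.3 = 0.804.
q = lambda s: (s[0] and s[1]) or not s[2]
print(exact_prob(q, [0.9, 0.8, 0.7]))
```

The loop over `product(...)` is the exponential bottleneck: for $$n$$ statements it visits $$2^n$$ worlds, which is exactly why feasible expert systems fall back on the step-by-step heuristics described next.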
Traditionally, expert systems use a technique similar to straightforward interval computations: we parse $$F$$ and replace each computation step with the corresponding operation on probabilities. Problem: at each step, we ignore the dependence between the intermediate results $$F_j$$; hence the resulting intervals are too wide. Example: the estimate for $$P(A\vee \neg A)$$ is an interval, not the exact value 1. Solution: similarly to affine arithmetic, besides $$P(F_j)$$, we also compute $$P(F_j\,\&\, F_i)$$ (or, more generally, $$P(F_{j_1}\&\cdots\& F_{j_d})$$), and at each step, use all combinations of $$l$$ such probabilities to get new estimates. Result: e.g., $$P(A\vee\neg A)$$ is now estimated as exactly 1.
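The contrast between the two approaches can be sketched as follows (a minimal illustration under our own naming, not the paper's actual algorithm): the naive step-by-step propagation uses only the Fréchet bounds at each connective, while the affine-style refinement additionally exploits a known pairwise probability $$P(F_i\,\&\,F_j)$$.

```python
# Interval-style propagation of probability bounds (Frechet bounds at
# each step) versus an affine-style refinement that also tracks a
# pairwise probability P(Fi & Fj).  Illustrative sketch only.

def neg(p):
    lo, hi = p
    return (1.0 - hi, 1.0 - lo)

def conj(p, q):
    # Frechet bounds: max(0, a + b - 1) <= P(A & B) <= min(a, b)
    return (max(0.0, p[0] + q[0] - 1.0), min(p[1], q[1]))

def disj(p, q):
    # Frechet bounds: max(a, b) <= P(A v B) <= min(1, a + b)
    return (max(p[0], q[0]), min(1.0, p[1] + q[1]))

p_a = (0.9, 0.9)            # P(A) known exactly

# Naive interval propagation for A v ~A ignores the dependence
# between A and ~A, so the result is a too-wide interval:
naive = disj(p_a, neg(p_a))

# Affine-style refinement: we additionally know P(A & ~A) = 0, so
# P(A v ~A) = P(A) + P(~A) - P(A & ~A), which pins the value down.
p_and = (0.0, 0.0)
refined = (p_a[0] + (1.0 - p_a[1]) - p_and[1],
           p_a[1] + (1.0 - p_a[0]) - p_and[0])

print("naive:", naive)      # wide interval [max(p, 1-p), 1]
print("refined:", refined)  # degenerate interval: exactly 1
```

The same idea extends to tracking $$d$$-wise conjunctions $$P(F_{j_1}\&\cdots\& F_{j_d})$$: the more joint probabilities are carried along, the tighter the enclosures become, at the cost of more bookkeeping per step.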

### MSC:

68T35 Theory of languages and software systems (knowledge-based systems, expert systems, etc.) for artificial intelligence

68T37 Reasoning under uncertainty in the context of artificial intelligence

65G30 Interval and finite arithmetic

### Keywords:

interval computations; fuzzy logic; geoinformatics
