## Algorithms for approximate linear regression design with application to a first order model with heteroscedasticity (Zbl 1471.62069)

Summary: The basic structure of algorithms for the numerical computation of optimal approximate linear regression designs is briefly summarized, and first order methods are contrasted with second order methods. A first order method, also called a vertex direction method, uses a local linear approximation of the optimality criterion at the current point, whereas a second order method is a Newton or quasi-Newton method employing a local quadratic approximation. A specific application is given to a multiple first order regression model on a cube with heteroscedasticity caused by random coefficients with known dispersion matrix. For a general (positive definite) dispersion matrix the algorithms work for moderate dimension of the cube; if the dispersion matrix is diagonal, a restriction to invariant designs is legitimate by equivariance of the model, and the algorithms then also work for large dimension.
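To illustrate what a first order (vertex direction) iteration looks like in practice, the following sketch implements a Fedorov–Wynn type algorithm for the classical D-criterion on a finite candidate set. This is a generic textbook scheme, not the paper's own method: the function name, the stopping tolerance, and the use of Fedorov's closed-form step length for the D-criterion are illustrative choices, and the paper's heteroscedastic criterion would require a different information matrix.

```python
import numpy as np

def wynn_fedorov_d_optimal(X, n_iter=500, tol=1e-6):
    """First order (vertex direction) iteration for a D-optimal
    approximate design on a finite candidate set.

    X : (n, p) array whose rows are the regression vectors f(x_i).
    Returns a weight vector w (an approximate design) on the candidates.
    """
    n, p = X.shape
    w = np.full(n, 1.0 / n)                       # start from the uniform design
    for _ in range(n_iter):
        M = X.T @ (w[:, None] * X)                # information matrix M(w)
        Minv = np.linalg.inv(M)
        d = np.einsum('ij,jk,ik->i', X, Minv, X)  # variance function d(x_i, w)
        j = int(np.argmax(d))                     # steepest vertex direction
        if d[j] <= p + tol:                       # equivalence theorem: max d = p at the optimum
            break
        # Fedorov's optimal step length for the D-criterion
        alpha = (d[j] - p) / (p * (d[j] - 1.0))
        w = (1.0 - alpha) * w                     # move toward the vertex e_j
        w[j] += alpha
    return w
```

For a simple line-fit model with f(x) = (1, x) on the candidates {-1, 0, 1}, the iteration drives the weights toward the well-known D-optimal design with mass 1/2 at each endpoint; the linear convergence near the optimum is exactly the slowness that motivates the second order methods contrasted in the summary.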

### MSC:

- 62-08 Computational methods for problems pertaining to statistics
- 62K05 Optimal statistical designs
- 62J05 Linear regression; mixed models
