Summary: Many classical dimension reduction methods, especially those based on inverse conditional moments, require the predictors to have elliptical distributions, or at least to satisfy a linearity condition. Such conditions, however, are too strong for some applications. {\it B. Li} and {\it Y. Dong} [Ann. Stat. 37, No. 3, 1272--1298 (2009; Zbl 1160.62050)] introduced the notion of the central solution space and used it to modify first-order methods, such as sliced inverse regression, so that they no longer rely on these conditions. We generalize this idea to second-order methods, such as sliced average variance estimation and directional regression. In doing so we demonstrate that the central solution space is a versatile framework: we can use it to modify essentially all inverse conditional moment-based methods to relax the distributional assumptions on the predictors. Simulation studies and an application show a substantial improvement of the modified methods over their classical counterparts.
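For orientation, the classical estimators being modified can be sketched as follows. This is a minimal NumPy illustration of standard sliced inverse regression (a first-order method, using within-slice means of the standardized predictors) and sliced average variance estimation (a second-order method, using within-slice covariances); it implements only the classical versions that assume the linearity/elliptical conditions, not the central-solution-space modifications discussed in the paper, and the function names are illustrative.

```python
import numpy as np

def _standardize(X):
    """Return Z = (X - mean) @ Sigma^{-1/2} and the inverse square root of Sigma."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(cov)
    inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    return (X - mu) @ inv_sqrt, inv_sqrt

def _slices(y, n_slices):
    """Partition observation indices into slices by the sorted response."""
    return np.array_split(np.argsort(y), n_slices)

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Classical SIR: eigen-decompose the covariance of slice means of Z."""
    Z, inv_sqrt = _standardize(X)
    n, p = Z.shape
    M = np.zeros((p, p))
    for idx in _slices(y, n_slices):
        m = Z[idx].mean(axis=0)          # inverse conditional mean E[Z | slice]
        M += (len(idx) / n) * np.outer(m, m)
    _, V = np.linalg.eigh(M)             # eigenvalues in ascending order
    return inv_sqrt @ V[:, ::-1][:, :n_dirs]   # back to the original X scale

def save_directions(X, y, n_slices=10, n_dirs=1):
    """Classical SAVE: eigen-decompose E[(I - Var(Z | slice))^2]."""
    Z, inv_sqrt = _standardize(X)
    n, p = Z.shape
    M = np.zeros((p, p))
    for idx in _slices(y, n_slices):
        D = np.eye(p) - np.cov(Z[idx], rowvar=False)   # second-moment deviation
        M += (len(idx) / n) * (D @ D)
    _, V = np.linalg.eigh(M)
    return inv_sqrt @ V[:, ::-1][:, :n_dirs]
```

Both estimators recover directions spanning (part of) the central subspace when the predictors satisfy the linearity (and, for SAVE, constant-variance) conditions; the point of the central-solution-space modifications is to dispense with exactly those requirements.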