Fixed point and Bregman iterative methods for matrix rank minimization. (English) Zbl 1221.65146
The authors propose a fixed point continuation algorithm and a Bregman iterative algorithm for solving the linearly constrained nuclear norm minimization problem. Convergence of the fixed point iterative algorithm is established. A Monte Carlo approximate singular value decomposition procedure is incorporated into the fixed point continuation algorithm to improve its speed and its ability to recover low-rank matrices. Numerical results are presented to show the effectiveness of the proposed algorithms.
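The core of the fixed point iteration described above alternates a gradient step on the data-fit term with singular value shrinkage (the proximal operator of the nuclear norm). A minimal sketch for the matrix completion special case is given below; the function names, the penalty parameter `mu`, and the fixed step size are illustrative assumptions, and the paper's continuation strategy and Monte Carlo approximate SVD are omitted for brevity.

```python
import numpy as np

def svd_shrink(Y, tau):
    """Singular value shrinkage: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def fpc_complete(M_obs, mask, mu=0.01, step=1.0, iters=1000):
    """Fixed point iteration for
       min_X  mu * ||X||_*  +  0.5 * ||P_Omega(X - M)||_F^2,
    where P_Omega keeps only the observed entries (mask == 1).
    Each step: gradient descent on the quadratic term, then shrinkage."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        grad = mask * (X - M_obs)          # gradient of the data-fit term
        X = svd_shrink(X - step * grad, step * mu)
    return X

# Demo: recover a rank-2 matrix from roughly 60% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = (rng.random((30, 30)) < 0.6).astype(float)
X = fpc_complete(mask * M, mask)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

The fixed step `step = 1.0` is admissible here because the gradient of the data-fit term is 1-Lipschitz; in practice the algorithm is run with a decreasing sequence of `mu` values (continuation) rather than a single fixed penalty.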

MSC:
65K05 Mathematical programming (numerical methods)
90C25 Convex programming
90C06 Large-scale problems (mathematical programming)
93C41 Control problems with incomplete information
68Q32 Computational learning theory
65C05 Monte Carlo methods
Software:
SDPT3; SeDuMi; SDPLR