
tick: a Python library for statistical learning, with an emphasis on Hawkes processes and time-dependent models. (English) Zbl 06982970
Summary: This paper introduces tick, a statistical learning library for Python 3 with a particular emphasis on time-dependent models such as point processes, together with tools for generalized linear models and survival analysis. The core of the library provides model computational classes, solvers and proximal operators for regularization. It relies on a C++ implementation and state-of-the-art optimization algorithms to provide very fast computations in a single-node multi-core setting. Source code and documentation can be downloaded from https://github.com/X-DataInitiative/tick.
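The following is a minimal usage sketch (not taken from the paper) illustrating the two workflows the summary describes: simulating and re-estimating a Hawkes process with exponential kernels, and composing a model class, a proximal operator and a stochastic solver for a penalized generalized linear model. It assumes tick's documented high-level classes (SimuHawkesExpKernels, HawkesExpKern, SimuLogReg, ModelLogReg, ProxL1, SVRG); module paths changed between releases, so the imports may need adjusting for older versions.

# Minimal sketch, assuming the high-level API of recent tick releases.
import numpy as np

# --- Hawkes processes: simulate with exponential kernels, then re-estimate ---
from tick.hawkes import SimuHawkesExpKernels, HawkesExpKern

baseline = np.array([0.2, 0.3])                 # exogenous (background) intensities
adjacency = np.array([[0.3, 0.0], [0.4, 0.2]])  # kernel L1 norms (branching matrix)
decays = 2.0 * np.ones((2, 2))                  # exponential decay rates

simu = SimuHawkesExpKernels(adjacency=adjacency, decays=decays,
                            baseline=baseline, end_time=10000,
                            verbose=False, seed=42)
simu.simulate()
events = simu.timestamps  # list of per-node arrays of event times

learner = HawkesExpKern(decays=2.0, penalty='l2', C=1e3)
learner.fit(events)
print(learner.baseline)   # should be close to the simulated baseline
print(learner.adjacency)  # should be close to the simulated adjacency

# --- Generalized linear models: the model / prox / solver building blocks ---
from tick.linear_model import ModelLogReg, SimuLogReg
from tick.prox import ProxL1
from tick.solver import SVRG

weights = np.array([0.5, -1.0, 0.0, 2.0])
X, y = SimuLogReg(weights, n_samples=5000, seed=42, verbose=False).simulate()

model = ModelLogReg(fit_intercept=False).fit(X, y)  # goodness-of-fit term
prox = ProxL1(strength=1e-3)                        # sparsity-inducing penalty
svrg = SVRG(max_iter=100, step=1e-2, verbose=False) # variance-reduced SGD solver
svrg.set_model(model).set_prox(prox)
coeffs = svrg.solve(np.zeros(model.n_coeffs))       # estimated model weights

The model/prox/solver split mirrors the modular design described in the summary: any compatible model, penalty and optimizer can be combined, with the heavy computations delegated to the C++ backend.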

MSC:
62-04 Software, source code, etc. for problems pertaining to statistics
62J12 Generalized linear models (logistic models)
62N05 Reliability and life testing
60G55 Point processes (e.g., Poisson, Cox, Hawkes processes)
68T05 Learning and adaptive systems in artificial intelligence
90-04 Software, source code, etc. for problems pertaining to operations research and mathematical programming
Software:
Tick; GitHub; Python; Scikit
References:
[1] M. Achab, E. Bacry, S. Gaïffas, I. Mastromatteo, and J.-F. Muzy. Uncovering causality from multivariate Hawkes integrated cumulants. In International Conference on Machine Learning, pages 1–10, 2017.
[2] P. K. Andersen, O. Borgan, R. D. Gill, and N. Keiding. Statistical Models based on Counting Processes. Springer Science, 2012. · Zbl 0824.60003
[3] E. Bacry and J.-F. Muzy. Second order statistics characterization of Hawkes processes and non-parametric estimation. arXiv preprint arXiv:1401.0903, 2014. · Zbl 1359.62160
[4] E. Bacry, I. Mastromatteo, and J.-F. Muzy. Hawkes processes in finance. Market Microstructure and Liquidity, 1(01):1550005, 2015.
[5] E. Bacry, T. Jaisson, and J.-F. Muzy. Estimation of slowly decreasing Hawkes kernels: application to high-frequency order book dynamics. Quantitative Finance, 16(8):1179–1201, 2016. · Zbl 1400.62232
[6] L. Buitinck, G. Louppe, M. Blondel, F. Pedregosa, A. Mueller, O. Grisel, V. Niculae, P. Prettenhofer, A. Gramfort, J. Grobler, R. Layton, J. VanderPlas, A. Joly, B. Holt, and G. Varoquaux. API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, pages 108–122, 2013.
[7] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
[8] E. Lewis and G. Mohler. A nonparametric EM algorithm for multiscale Hawkes processes. Preprint, pages 1–16, 2011.
[9] Y. Ogata. Statistical models for earthquake occurrences and residual analysis for point processes. Journal of the American Statistical Association, 83(401):9–27, 1988.
[10] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011. · Zbl 1280.68189
[11] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14(Feb):567–599, 2013. · Zbl 1307.68073
[12] H. Xu, M. Farajtabar, and H. Zha. Learning Granger causality for Hawkes processes. In Proceedings of the International Conference on Machine Learning, pages 1717–1726, 2016.
[13] K. Zhou, H. Zha, and L. Song. Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. In AISTATS, volume 31, pages 641–649, 2013.