Signal Processing for Machine Learning
Program
Contents
Part 1: Signal Processing Methods and Applications
Definition of signals, signal properties, discrete representations, Fourier transforms, filtering, sampling theory, applications to audio signals and images.
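The filtering and Fourier-transform topics above can be illustrated with a minimal sketch (assuming NumPy; the sampling rate, tone frequencies, and cutoff are toy choices, not course material) of ideal low-pass filtering via the DFT:

```python
import numpy as np

fs = 1000                      # sampling rate in Hz (assumed toy value)
t = np.arange(0, 1, 1 / fs)    # one second of samples
# signal: 5 Hz tone plus 200 Hz interference
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)

X = np.fft.rfft(x)                         # spectrum of the real-valued signal
freqs = np.fft.rfftfreq(len(x), d=1 / fs)  # frequency axis in Hz
X[freqs > 50] = 0                          # ideal low-pass: zero bins above 50 Hz
x_lp = np.fft.irfft(X, n=len(x))           # back to the time domain
# the 200 Hz component is removed; only the 5 Hz tone remains
```

Since both tones fall exactly on DFT bins over this window, the brick-wall filter removes the interference up to numerical precision.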
Basics of convex optimization: Convex sets, convex functions, convex optimization problems.
Sparse representations, compressive sensing, application to image recovery.
Matrix completion, application to recommendation systems.
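A simple iteration in the spirit of the nuclear-norm approach of [4] is singular-value shrinkage alternated with data consistency (soft-impute style); the sketch below is a toy illustration, with the matrix size, rank, sampling rate, and shrinkage level all assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 2                        # matrix size and true rank (toy values)
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low-rank "ratings"
mask = rng.random((n, n)) < 0.5     # roughly half of the entries observed

X = np.where(mask, M, 0.0)          # initialize unobserved entries at zero
tau = 0.5                           # singular-value shrinkage level (assumed)
for _ in range(500):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
    X[mask] = M[mask]                                # enforce observed entries

err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
```

The relative error on the *unobserved* entries is the quantity of interest: the observed ones are clamped to their true values at every step.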
Sparse plus low-rank models, application to traffic prediction over networks.
References: [1], [2], [3], [4], [5]
Part 2: Graph Signal Processing and Learning
Algebraic graph theory: Graph properties, connectivity, degree centrality, eigenvector centrality, PageRank, betweenness, modularity, spectral clustering.
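As a concrete instance of the centrality topics above, PageRank can be computed by damped power iteration; the sketch below uses an assumed toy four-node directed graph:

```python
import numpy as np

# adjacency: entry (i, j) = 1 means page i links to page j (toy example)
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

alpha = 0.85                          # damping factor
P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
n = A.shape[0]

r = np.full(n, 1.0 / n)               # start from the uniform distribution
for _ in range(100):
    r = alpha * r @ P + (1 - alpha) / n   # damped power iteration
```

The ranks sum to one, and node 2 (the most linked-to page in this toy graph) receives the highest score.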
Independence graphs: Markov networks, Bayesian networks, Gaussian Markov random fields, inference of graph topology from data. Application to brain functional connectivity inference.
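For Gaussian Markov random fields, conditional independence corresponds to zeros of the precision (inverse covariance) matrix, which is the basis of topology inference from data. A minimal sketch (the three-variable chain and sample size are assumed toy choices):

```python
import numpy as np

rng = np.random.default_rng(6)
# chain x1 - x2 - x3: x1 and x3 are conditionally independent given x2,
# so the (1, 3) entry of the precision matrix is zero
K = np.array([[ 2.0, -0.8,  0.0],
              [-0.8,  2.0, -0.8],
              [ 0.0, -0.8,  2.0]])   # true precision matrix
Sigma = np.linalg.inv(K)
X = rng.multivariate_normal(np.zeros(3), Sigma, size=20000)

K_hat = np.linalg.inv(np.cov(X, rowvar=False))   # estimated precision
# near-zero entries of K_hat <=> missing edges in the Markov graph
```

In practice, sparse estimators (e.g., the graphical lasso) replace the plain matrix inverse when samples are scarce; the inverse sample covariance is used here only because the toy sample size is large.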
Graph signal processing: Graph Fourier transform, graph filtering, sampling and interpolation of graph signals. Diffusion processes, graph convolutional filters. Application to distributed filtering and signal interpolation.
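The graph Fourier transform and spectral graph filtering above can be sketched on a small path graph (an assumed toy example), using the eigenbasis of the combinatorial Laplacian:

```python
import numpy as np

n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0   # undirected path 0-1-...-7
L = np.diag(A.sum(axis=1)) - A        # Laplacian L = D - A

lam, U = np.linalg.eigh(L)            # eigenvalues act as graph frequencies
x = U[:, 1] + 0.2 * U[:, -1]          # smooth part + high-frequency part
x_hat = U.T @ x                       # graph Fourier transform

# low-pass graph filter: keep only the lowest half of the graph frequencies
h = (lam <= lam[n // 2 - 1]).astype(float)
y = U @ (h * x_hat)                   # filter spectrally, transform back
```

The filtered signal retains only the smooth component, since the high-frequency eigenvector is zeroed in the spectral domain.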
Deep Learning on graphs: Design of graph neural architectures, pooling, attention mechanisms. Applications to graph and node classification problems.
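A single graph-convolutional layer of the kind used in the architectures above can be sketched in a few lines; this follows the common GCN-style propagation rule H' = ReLU(S H W) with a symmetrically normalized adjacency (graph, feature sizes, and random weights are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy 4-node path graph
A_hat = A + np.eye(4)                       # add self-loops
d = A_hat.sum(axis=1)
S = A_hat / np.sqrt(np.outer(d, d))         # D^{-1/2} (A + I) D^{-1/2}

H = rng.standard_normal((4, 3))             # node features: 4 nodes, 3 features
W = rng.standard_normal((3, 2))             # layer weights (learned in practice)
H_out = np.maximum(S @ H @ W, 0.0)          # aggregate neighbors, mix, ReLU
```

Stacking such layers (with pooling or attention, as listed above) yields architectures for node- and graph-level classification.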
References: [1], [6], [7], [8], [9], [10]
Part 3: Distributed Optimization and Learning
Distributed consensus optimization: Consensus over networks, distributed gradient descent, convergence analysis, communication architectures, adding structure (e.g., constraints, sparsity) via proximal operators. Application to distributed target localization in wireless networks.
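Decentralized gradient descent over a network can be sketched as follows: each agent alternates a mixing step with its neighbors (through a doubly stochastic matrix) and a local gradient step. The four-agent ring, data sizes, and step size below are assumed toy choices:

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = np.array([1.0, -2.0])
agents = []                          # each agent holds private least-squares data
for _ in range(4):
    Ai = rng.standard_normal((10, 2))
    agents.append((Ai, Ai @ theta_true))

# doubly stochastic mixing matrix for a 4-node ring
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

X = np.zeros((4, 2))                 # one row per agent's local estimate
mu = 0.02                            # step size (assumed)
for _ in range(500):
    G = np.array([Ai.T @ (Ai @ x - bi) for (Ai, bi), x in zip(agents, X)])
    X = W @ X - mu * G               # mix with neighbors, then step locally
```

Because the local problems here share a common minimizer, the agents reach consensus on it; with inconsistent data, a constant step size only reaches a neighborhood of the optimum.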
Federated learning: Federated averaging, examples of federated learning problems, application to federated support vector machines. Challenges in federated learning: expensive communication, system and statistical heterogeneity, privacy.
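Federated averaging can be sketched for a shared linear model: each client runs a few local gradient steps on its own data, and the server averages the resulting models. Client count, data sizes, step size, and round counts below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
w_true = np.array([2.0, -1.0, 0.5])
clients = []                          # each client holds private data
for _ in range(5):
    Xi = rng.standard_normal((20, 3))
    clients.append((Xi, Xi @ w_true))

w = np.zeros(3)                       # global model
for _ in range(100):                  # communication rounds
    local_models = []
    for Xi, yi in clients:
        wi = w.copy()                 # start from the current global model
        for _ in range(5):            # local gradient steps
            wi -= 0.05 * Xi.T @ (Xi @ wi - yi) / len(yi)
        local_models.append(wi)
    w = np.mean(local_models, axis=0) # server aggregates by plain averaging
```

With statistically heterogeneous clients (unlike this consistent toy setup), local steps pull the models apart, which is one of the challenges listed above.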
Federated convex learning: Basics of duality theory, primal-dual optimization methods (dual ascent, method of multipliers, ADMM). Distributed ADMM with splitting across examples and/or features. Application to federated convex learning problems.
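The ADMM splitting above can be sketched on the lasso, following the x/z/u updates popularized in [13] (problem sizes, penalty, and ADMM parameter rho are assumed toy choices):

```python
import numpy as np

# ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x = z
rng = np.random.default_rng(5)
m, n = 50, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]
b = A @ x_true

lam, rho = 0.1, 1.0
x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u: scaled dual variable
Q = np.linalg.inv(A.T @ A + rho * np.eye(n))        # cached for the x-update
for _ in range(200):
    x = Q @ (A.T @ b + rho * (z - u))               # x-update: ridge solve
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # z: shrink
    u = u + x - z                                   # dual update on x = z
```

The same three-step pattern, with the data split across agents instead of variables, gives the distributed ADMM variants listed above.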
References: [1], [5], [11], [12], [13]
Textbooks and resources:
[2] Vetterli, Martin, Jelena Kovačević, and Vivek K. Goyal. Foundations of signal processing. Cambridge University Press, 2014.
[3] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Basel: Birkhäuser, 2013.
[4] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Foundations of Computational Mathematics, 9(6):717-772, 2009.
[5] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[6] M. E. J. Newman, Networks: An Introduction, Oxford, UK: Oxford University Press, 2010.
[7] Ortega, A., Frossard, P., Kovačević, J., Moura, J. M., & Vandergheynst, P. (2018). Graph signal processing: Overview, challenges, and applications. Proceedings of the IEEE, 106(5), 808-828.
[8] P. Di Lorenzo, S. Barbarossa, and P. Banelli, Sampling and Recovery of Graph Signals, Cooperative and Graph Signal Processing, P. Djuric and C. Richard Eds., Elsevier, 2018.
[9] Isufi, E., Gama, F., Shuman, D. I., & Segarra, S. (2022). Graph filters for signal processing and machine learning on graphs. arXiv preprint arXiv:2211.08854.
[10] Ruiz, L., Gama, F., & Ribeiro, A. (2021). Graph neural networks: Architectures, stability, and transferability. Proceedings of the IEEE, 109(5), 660-682.
[11] Olfati-Saber, R., Fax, J. A., & Murray, R. M. (2007). Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1), 215-233.
[12] Li, T., Sahu, A. K., Talwalkar, A., & Smith, V. (2020). Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine, 37(3), 50-60.
[13] S. Boyd et al., Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers, Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
CVX software for convex optimization.