Tensor network methods for machine learning: tensorization, privacy, and beyond

Abstract:

Neural networks (NNs) excel across a wide range of machine learning tasks due to their flexibility and scalability, but they also pose challenges in privacy, interpretability, robustness, and efficiency—limitations that can be especially critical for large models trained on sensitive data. To tackle these issues, we propose the use of tensor network (TN) models.

In this talk, I will primarily focus on white-box privacy, showing that gradient-based training can leave identifiable patterns from the training data in NN parameters, and that TN models can eliminate such patterns through reparameterization. I will also present Tensor Train via Recursive Sketching from Samples (TT-RSS), an algorithm that transforms pretrained models into TNs using only sample-based access, thereby avoiding retraining. This tensorization acts as an obfuscation mechanism, mitigating the aforementioned privacy risks while also improving interpretability, providing initialization strategies for TN training, and enabling efficient model compression.
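To give a flavor of what tensorization means in the tensor-train (TT) format, the sketch below implements the standard TT-SVD decomposition, which factorizes a multi-way array into a chain of small 3-way cores. This is only an illustrative baseline, not the TT-RSS algorithm discussed in the talk (TT-RSS works from sample-based access to a pretrained model, whereas TT-SVD requires the full tensor); the function names `tt_svd`, `tt_reconstruct`, and the `max_rank` parameter are illustrative choices.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way array into tensor-train cores via sequential truncated SVDs.

    Each core has shape (r_prev, mode_dim, r_next); with no truncation the
    decomposition is exact, and truncating the ranks gives compression.
    """
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    mat = tensor
    for k in range(d - 1):
        # Separate the current mode from the remaining modes and factorize.
        mat = mat.reshape(rank * shape[k], -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, s.size)
        cores.append(u[:, :r_new].reshape(rank, shape[k], r_new))
        mat = np.diag(s[:r_new]) @ vt[:r_new]  # carry the residual to the right
        rank = r_new
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract a chain of TT cores back into the full tensor."""
    full = cores[0].reshape(cores[0].shape[1], -1)
    for c in cores[1:]:
        r, n, r2 = c.shape
        full = (full @ c.reshape(r, n * r2)).reshape(-1, r2)
    return full.reshape(tuple(c.shape[1] for c in cores))
```

Applied to a reshaped weight matrix of a pretrained layer, the chain of small cores stores far fewer parameters than the original tensor once the ranks are truncated, which is the sense in which tensorization enables compression.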

Finally, I will briefly discuss recent advances in tensorization within the tensor ring format, black-box privacy amplification, and the robustness and interpretability of NNs.