Jacob Springer

About

Hello! I am a PhD student in the Machine Learning department at Carnegie Mellon University where I am fortunate to be advised by Aditi Raghunathan.

I'm excited about solving mysteries in machine learning. I have recently been thinking about how to train models that are adaptable, i.e., easily and robustly fine-tuned to perform new tasks, and especially how optimization can influence this. I am broadly excited about understanding the structure of what is learned by neural networks. In the past, I have also spent a lot of time thinking about (adversarial) robustness in neural networks and how we can take insights from neuroscience to improve machine learning. Previously, I was an undergrad at Swarthmore College, and I have spent time at Cold Spring Harbor Laboratory, MIT, and Los Alamos National Laboratory, where I worked with many lovely people. Please reach out if you want to chat about anything (I do love talking about research)!

Selected Publications

  1. 2024 – “Repetition improves language model embeddings”
    Springer, Jacob Mitchell; Kotha, Suhas; Fried, Daniel; Neubig, Graham; Raghunathan, Aditi
    Preprint.
  2. 2024 – “Sharpness-Aware Minimization Enhances Feature Quality via Balanced Learning”
    Springer, Jacob Mitchell; Nagarajan, Vaishnavh; Raghunathan, Aditi
    International Conference on Learning Representations (2024)
  3. 2024 – “Understanding catastrophic forgetting in language models via implicit inference”
    Kotha, Suhas; Springer, Jacob Mitchell; Raghunathan, Aditi
    International Conference on Learning Representations (2024)
  4. 2022 – “If you’ve trained one you’ve trained them all: inter-architecture similarity increases with robustness” (Oral)
    Jones, Haydn T; Springer, Jacob M; Kenyon, Garrett T; Moore, Juston S
    Uncertainty in Artificial Intelligence (2022)
  5. 2021 – “It's hard for neural networks to learn the game of life”
    Springer, Jacob M; Kenyon, Garrett T
    International Joint Conference on Neural Networks (2021)
  6. 2021 – “Adversarial perturbations are not so weird: Entanglement of robust and non-robust features in neural network classifiers”
    Springer, Jacob M; Mitchell, Melanie; Kenyon, Garrett T
    Preprint.
  7. 2021 – “A little robustness goes a long way: Leveraging robust features for targeted transfer attacks”
    Springer, Jacob; Mitchell, Melanie; Kenyon, Garrett
    Advances in Neural Information Processing Systems (2021)