Jacob Springer

About

Hello! I am a PhD student in the Machine Learning Department at Carnegie Mellon University where I am fortunate to be advised by Aditi Raghunathan.

I'm excited about solving mysteries in machine learning. I'm broadly interested in the science surrounding foundation models, with a current focus on optimization, robustness, and inference-time methods. Most recently, I have been thinking about how to train models that can be easily and robustly fine-tuned for new tasks, and especially about how optimization influences this. More generally, I am excited about understanding the structure of what neural networks learn. In the past, I have also spent a lot of time thinking about (adversarial) robustness in neural networks and how insights from neuroscience can improve machine learning.

Previously, I was an undergrad at Swarthmore College, and I have spent time at Cold Spring Harbor Laboratory, MIT, and Los Alamos National Laboratory, where I worked with many lovely people. Please reach out if you want to chat about anything (I do love talking about research)!

Selected Publications

  1. “Repetition improves language model embeddings”
     Jacob Springer; Suhas Kotha; Daniel Fried; Graham Neubig; Aditi Raghunathan
     International Conference on Learning Representations (2025)
  2. “Sharpness-Aware Minimization Enhances Feature Quality via Balanced Learning”
     Jacob Springer; Vaishnavh Nagarajan; Aditi Raghunathan
     International Conference on Learning Representations (2024)
  3. “Understanding catastrophic forgetting in language models via implicit inference”
     Suhas Kotha; Jacob Springer; Aditi Raghunathan
     International Conference on Learning Representations (2024)
  4. “If you’ve trained one you’ve trained them all: inter-architecture similarity increases with robustness” (Oral)
     Haydn Jones; Jacob Springer; Garrett Kenyon; Juston Moore
     Uncertainty in Artificial Intelligence (2022)
  5. “It's hard for neural networks to learn the game of life”
     Jacob Springer; Garrett Kenyon
     International Joint Conference on Neural Networks (2021)
  6. “A little robustness goes a long way: Leveraging robust features for targeted transfer attacks”
     Jacob Springer; Melanie Mitchell; Garrett Kenyon
     Advances in Neural Information Processing Systems (2021)
  7. “Adversarial perturbations are not so weird: Entanglement of robust and non-robust features in neural network classifiers”
     Jacob Springer; Melanie Mitchell; Garrett Kenyon
     Preprint (2021)