Jacob Springer


NeuroAI @ Cold Spring Harbor Laboratory


I recently graduated from Swarthmore College. I am interested in understanding how and what neural networks learn; my most recent research uses robustness as a lens for understanding what features neural networks rely on. I am also interested in neuroscience and its intersection with machine learning.

Publications and Manuscripts

  1. A Little Robustness Goes a Long Way: Leveraging Robust Features for Targeted Transfer Attacks. Springer, Jacob M; Mitchell, Melanie; Kenyon, Garrett T. NeurIPS 2021.
  2. Adversarial Perturbations Are Not So Weird: Entanglement of Robust and Non-Robust Features in Neural Network Classifiers. Springer, Jacob M; Mitchell, Melanie; Kenyon, Garrett T. 2021.
  3. STRATA: Building Robustness with a Simple Method for Generating Black-box Adversarial Attacks for Models of Code. Springer, Jacob M; Reinstadler, Bryn Marie; O’Reilly, Una-May. 3rd Workshop on Adversarial Learning Methods for Machine Learning and Data Mining @ KDD. 2021.
  4. It’s Hard for Neural Networks To Learn the Game of Life. Springer, Jacob M; Kenyon, Garrett T. International Joint Conference on Neural Networks (IJCNN). 2021.
  5. Sparse MP4. Wang, Daniel A; Strauss, Charles MS; Springer, Jacob M; Thresher, Austin; Pritchard, Howard; Kenyon, Garrett T. IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI). 2020.
  6. Classifiers Based on Deep Sparse Coding Architectures Are Robust to Deep Learning Transferable Examples. Springer, Jacob M; Strauss, Charles S; Thresher, Austin M; Kim, Edward; Kenyon, Garrett T. 2018.
  7. Teaching with angr: A Symbolic Execution Curriculum and CTF. Springer, Jacob M; Feng, Wu-chang. USENIX Workshop on Advances in Security Education (ASE 18). 2018.