Anselm Levskaya is a Staff Research Engineer at Google in San Francisco with 14 years of experience building and scaling numerical and ML infrastructure. He works on JAX and Gemini infrastructure and has driven improvements to Flax, T5X, and the tooling behind PaLM, including performance-focused work on the scalable_shampoo distributed optimizer. His open-source contributions to core JAX and TensorFlow Probability — adding JVP rules for eigh, fixing non-deterministic einsum behavior, and hardening distributed collectives — reveal deep expertise in numerical linear algebra and autodiff. Trained as a PhD biophysicist, he brings experimental rigor and cross-disciplinary problem solving from academia and industry roles ranging from postdoc to chief scientist. He also contributes hands-on backend and DevOps work (Certbot automation, a deobfuscated JSLinux), underscoring a penchant for low-level systems and reproducible research infrastructure.
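The JVP work mentioned above makes eigendecompositions differentiable in JAX. As an illustrative sketch (a toy example, not Levskaya's actual patch), gradients flow through `jnp.linalg.eigh` like any other primitive:

```python
# Minimal sketch: differentiating through a symmetric eigendecomposition.
# The function and matrix below are illustrative, not from the source.
import jax
import jax.numpy as jnp

def smallest_eigenvalue(a):
    # Symmetrize the input so eigh's assumptions hold.
    sym = (a + a.T) / 2.0
    eigvals, _ = jnp.linalg.eigh(sym)  # eigenvalues in ascending order
    return eigvals[0]

a = jnp.array([[2.0, 1.0], [1.0, 3.0]])
grad = jax.grad(smallest_eigenvalue)(a)
print(grad.shape)  # → (2, 2)
```

For a symmetric matrix, the gradient of the smallest eigenvalue is the outer product of its unit eigenvector with itself, so the gradient's trace is 1.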
15 years of coding experience
9 years of employment as a software developer
Bachelor of Science (B.S.), Physics at Cornell University
Doctor of Philosophy (PhD), Biophysics at UCSF
An old version of Mr. Bellard's JSLinux rewritten to be human-readable: hand-deobfuscated and annotated.
Role in this project:
Back-end Developer
Contributions: 68 commits, 2 PRs, 4 pushes in 7 years 2 months
Contributions summary: Anselm refactored and deobfuscated code within the JSLinux emulator, focusing on the core CPU emulation routines. He rewrote parts of the PCEmulator and CPU_X86 code, adding comments and renaming symbols, with significant changes to fundamental components for memory access, interrupt handling, and the emulator's overall execution flow. He also added the original CPU code for reference.
Flax is a neural network library for JAX that is designed for flexibility.
Role in this project:
Back-end Developer & DevOps Engineer
Contributions: 2 releases, 368 reviews, 213 commits in 2 years 11 months
Contributions summary: Anselm improved the input pipeline for the lm1b dataset, adding dynamic batching to make language-model training more efficient. He also made layout improvements and fixes across various example files, contributed general bug fixes, integrated checkpointing functionality, and merged changes and updates from the prerelease branch.
deep-learning, neural-networks, neural-network, flax, jax