Summary
Alexander Wan is an undergraduate researcher at UC Berkeley focused on machine learning, NLP, and AI policy. He pairs hands-on experimentation with rigorous evaluation, co-authoring work at ICML 2023 on data poisoning and the adversarial robustness of instruction-tuned LLMs, and presenting first-author work at ACL 2024 on retrieval-augmented models. Affiliated with the Stanford Institute for Human-Centered AI and Berkeley Artificial Intelligence Research, he has led studies on LLM detection robustness and gradient-based adversarial attacks, as well as scaling experiments on Google Cloud TPUs and multi-GPU clusters. His earlier work includes contributions at EleutherAI on prefix-tuning and robust training, and at Michigan State University on fine-grained entity typing with large ontologies. Based in Berkeley, California, he is pursuing ML for science, model benchmarking, and AI policy and regulation, with an emphasis on evaluating AI systems under real-world constraints and bridging research with practical deployment.
12 years of coding experience
University of California, Berkeley
English, Spanish, spoken Cantonese