Summary
Mina Parham is a Machine Learning Engineer with seven years of experience, including a four-year focus on LLMs, reinforcement learning, and scalable deep learning systems. She bridges research and production: during her time at Keatext she implemented PPO-based RLHF, applied LoRA/QLoRA and double quantization to cut GPU memory usage, and added vLLM inference support that yielded a 2.2× speedup. A Université de Montréal-trained researcher and former DataCamp instructor, she excels at translating advanced alignment and prompting techniques into deployable workflows. Based in Toronto and now at Transformer Lab, Mina thrives on ambiguity, learns fast, and prefers falling in love with the problem rather than the first solution.
7 years of coding experience
6 years of employment as a software developer
Amirkabir University of Technology
Master's degree, AI - Polytechnique Montréal at Université de Montréal
English