Emily Webber is a Principal Machine Learning Specialist Solutions Architect at AWS, combining hands-on ML engineering with cloud-scale architecture to solve complex customer problems and deploy production ML systems. With a decade of experience, she guides data-driven innovation across big data pipelines, ML models, and policy-informed analytics, and holds an MS in Computational Analysis and Public Policy from The University of Chicago. She contributes actively to open-source ML workflows, including SageMaker Studio Lab notebooks for machine translation with T5 and GPT-2 fine-tuning in AWS samples, and has built tooling such as Lambda functions for managing SageMaker resources and data pipelines backed by S3. Emily also mentors and teaches: she has served as an Adjunct Professor at IIT and as a Machine Learning Mentor at 1871, helping entrepreneurs monetize data and craft data-driven strategies. Her profile pairs engineering leadership with a human-centered ethos; a meditator and humanist, she frames AI applications around serving people and institutions such as the Federal Reserve Bank of Chicago. Based in Washington, DC, she speaks and writes frequently in the AI community, translating complex requirements into practical, auditable ML solutions.
Example notebooks for working with SageMaker Studio Lab. Sign up for an account at the link below!
Role in this project:
ML Engineer
Contributions: 105 commits, 8 PRs, 21 pushes in 4 months
Contributions summary: Emily's commits focus on adding and refining a Jupyter notebook for machine translation. The notebook uses the T5 model from Hugging Face to translate English-language COVID-19 health service announcements into Spanish, suggesting she was exploring and fine-tuning machine translation models.
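A minimal sketch of how such a notebook might invoke T5 for translation with Hugging Face `transformers`. The checkpoint name and prompt text are illustrative assumptions, not the notebook's actual code; pretrained T5 ships with English-to-French/German/Romanian tasks, so English-to-Spanish would typically load the fine-tuned weights the notebook produces.

```python
def build_prompt(text: str, target_lang: str = "Spanish") -> str:
    """T5 frames every task as text-to-text via a task prefix."""
    return f"translate English to {target_lang}: {text}"


def translate(text: str, model_name: str = "t5-small") -> str:
    """Run one translation. model_name is a placeholder; a notebook
    fine-tuned for English->Spanish would point at its own checkpoint."""
    # Imported lazily so the prompt helper stays usable without transformers.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)
    inputs = tokenizer(build_prompt(text), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

In a notebook, `translate("Wash your hands frequently.")` would then return the Spanish rendering from whichever checkpoint is loaded.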
Materials for a 2-day instructor led course on applying machine learning
Role in this project:
ML Engineer
Contributions: 126 commits, 5 PRs, 84 pushes in 1 year 6 months
Contributions summary: Emily primarily worked on fine-tuning a GPT-2 model within the Amazon SageMaker examples repository. Her contributions included a Lambda function for cleaning up SageMaker resources, as well as code for processing data from the repository and using it to fine-tune GPT-2. She also defined training scripts and requirements, and used S3 for data storage and model output.
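A hedged sketch of what a SageMaker resource-cleanup Lambda like the one described might look like. The function names, the `dev-` prefix convention, and the endpoint-deletion policy are all assumptions for illustration; the summary does not specify which resources the real function cleans or how it selects them.

```python
def clean_idle_endpoints(sm_client=None, name_prefix: str = "dev-") -> list:
    """Delete SageMaker endpoints whose names start with name_prefix.

    sm_client is injectable for testing; when omitted, a real boto3
    SageMaker client is created.
    """
    if sm_client is None:
        import boto3  # imported lazily so tests can inject a fake client
        sm_client = boto3.client("sagemaker")

    deleted = []
    paginator = sm_client.get_paginator("list_endpoints")
    for page in paginator.paginate():
        for endpoint in page["Endpoints"]:
            name = endpoint["EndpointName"]
            if name.startswith(name_prefix):
                sm_client.delete_endpoint(EndpointName=name)
                deleted.append(name)
    return deleted


def lambda_handler(event, context):
    """Entry point when deployed as an AWS Lambda function."""
    prefix = event.get("prefix", "dev-")
    return {"deleted": clean_idle_endpoints(name_prefix=prefix)}
```

Such a function is typically wired to an EventBridge schedule so stray development endpoints stop accruing charges.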
sagemaker, data-science, led, day, machine-learning