Sharon Zhou


Summary

Sharon Zhou is Vice President of Artificial Intelligence at AMD and the former CEO and co‑founder of Lamini, a generative AI startup whose team and tech transitioned into AMD after raising over $25M at a $136M valuation. She teaches generative AI to nearly a million learners online and at Stanford, builds courses, and regularly speaks to audiences from top researchers to Fortune 500 leaders. A Stanford CS PhD advised by Andrew Ng, she led a 50+ student research group, published award‑winning work (top 1% NeurIPS), and contributed calibration and Brier‑score improvements to the influential BIG-bench benchmark. Equally comfortable in product, research, and startup execution, she focuses on algorithms and agentic data methods that reduce LLM hallucinations in production. Trained in Classics at Harvard (summa cum laude) and fascinated by ideas that last millennia, she blends technical rigor with a storyteller’s instinct — as likely to tweak benchmarking math as to quote Plato.
11 years of coding experience

Github Skills (10)

workbench: 10
machine-learning: 10
testbench: 10
calibration: 10
python: 10
natural-language-processing: 9
nlp: 9
pytest: 8
tensorflow: 5
tensorflow2: 5

Programming languages (9)

Julia, TypeScript, Shell, CSS, SCSS, JavaScript, HTML, Jupyter Notebook

Github contributions (5)

google/BIG-bench

Jun 2021 - Jul 2021

Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models
Role in this project: Data Scientist
Contributions: 3 reviews, 13 commits, 2 PRs in 1 month
Contributions summary: Sharon focused on improving metrics and calibration within the `big-bench` project, a benchmark for language models. Her contributions primarily involved modifications to the `task_metrics.py` file, implementing and refining Brier score calculations for evaluating model performance. She added tests, fixed bugs, and improved the documentation related to the scoring functions, and also ordered and normalized target scores to improve the accuracy of the calibration metrics.
bert, machine-learning, benchmark, measuring, benchmarks
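For context, the Brier score mentioned in the contributions summary is the mean squared difference between predicted probabilities and actual outcomes; lower is better, and a well-calibrated model scores low. A minimal sketch of the idea (an illustrative implementation, not the BIG-bench `task_metrics.py` code):

```python
def brier_score(probs, targets):
    """Mean squared error between predicted probabilities and binary targets.

    probs   -- predicted probabilities in [0, 1]
    targets -- ground-truth outcomes (0.0 or 1.0), same length as probs
    """
    if len(probs) != len(targets):
        raise ValueError("probs and targets must have the same length")
    return sum((p - t) ** 2 for p, t in zip(probs, targets)) / len(probs)


# A perfectly confident, correct model scores 0.0;
# an uncertain model (all predictions 0.5) scores 0.25.
print(brier_score([1.0, 0.0], [1.0, 0.0]))  # 0.0
print(brier_score([0.5, 0.5], [1.0, 0.0]))  # 0.25
```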
sharonzhou/generator

May 2019 - Jan 2020

Contributions: 62 commits in 8 months