Hagay Lupesko is a senior technology leader in AI inference, currently SVP of AI Inference at Cerebras Systems and based in San Francisco, with a decade of experience building teams and shipping large-scale training and inference products. He led engineering organizations at MosaicML (acquired by Databricks) and Meta AI, and earlier built deep learning tooling at AWS and product teams at Amazon Music. Hagay combines executive strategy with hands-on deployment experience: he contributed operational improvements to awslabs/multi-model-server, refining Docker/GPU configurations, install scripts, and nginx setups to make model serving more reliable. Known for hiring and scaling high-performing teams, he bridges research and production to accelerate real-world adoption of deep learning.
Multi Model Server is a tool for serving neural network models for inference.
Role in this project:
DevOps Engineer
Contributions: 2 releases, 26 commits, 16 PRs in 8 months
Contributions summary: Hagay primarily focused on refining the project's build and deployment process. His contributions included modifying Docker configurations for both CPU and GPU environments, updating the install scripts to pull in required packages, and configuring the Nginx setup. He also bumped the project's version in setup.py and updated the model server configuration file, reflecting his role in managing the overall deployment lifecycle, and fixed a multi-file download issue.
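To illustrate the kind of Nginx setup described above, here is a minimal reverse-proxy fragment fronting a Multi Model Server instance. This is a hypothetical sketch, not the project's actual configuration; it assumes the server listens on its default inference port 8080 and exposes the standard /predictions endpoint.

```nginx
# Hypothetical sketch: nginx terminating HTTP on port 80 and forwarding
# inference requests to a local Multi Model Server on its default port 8080.
server {
    listen 80;

    location /predictions/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```

A fronting proxy like this lets the model server stay bound to localhost while nginx handles external traffic, TLS, and buffering.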
Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with a Dynamic, Mutation-aware Dataflow Dependency Scheduler; for Python, R, Julia, Scala, Go, JavaScript and more
Contributions: 8 pushes, 11 branches in 1 year 4 months
python, scheduler, dataflow, mutation, orchestration
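The "mutation-aware dataflow dependency scheduler" named in the project description can be sketched in a few lines: each operation declares which variables it reads and which it mutates, and the scheduler orders operations so that read-after-write, write-after-write, and write-after-read dependencies are respected. The toy class below is purely illustrative (all names are made up) and runs operations in a simple sweep rather than in parallel, unlike a real framework scheduler.

```python
# Toy sketch of a mutation-aware dataflow scheduler (illustration only,
# not the framework's actual implementation). Ops declare the variables
# they read and the variables they mutate; an op becomes runnable once
# every op it depends on has finished.
from collections import defaultdict

class ToyScheduler:
    def __init__(self):
        self.ops = []                      # (fn, dependency indices)
        self.last_writer = {}              # var -> index of last writing op
        self.readers = defaultdict(list)   # var -> readers since last write

    def push(self, fn, reads=(), writes=()):
        deps = set()
        for v in reads:
            if v in self.last_writer:      # read-after-write dependency
                deps.add(self.last_writer[v])
        for v in writes:
            if v in self.last_writer:      # write-after-write dependency
                deps.add(self.last_writer[v])
            deps.update(self.readers[v])   # write-after-read dependency
        idx = len(self.ops)
        self.ops.append((fn, deps))
        for v in reads:
            self.readers[v].append(idx)
        for v in writes:
            self.last_writer[v] = idx
            self.readers[v] = []           # writes invalidate prior readers
        return idx

    def run(self):
        done = set()
        # Repeatedly sweep for ops whose dependencies are all satisfied;
        # a real scheduler would dispatch ready ops concurrently instead.
        while len(done) < len(self.ops):
            for i, (fn, deps) in enumerate(self.ops):
                if i not in done and deps <= done:
                    fn()
                    done.add(i)

state = {"x": 1}
log = []
s = ToyScheduler()
s.push(lambda: log.append(("read", state["x"])), reads=["x"])
s.push(lambda: state.update(x=2), writes=["x"])
s.push(lambda: log.append(("read", state["x"])), reads=["x"])
s.run()
print(log)  # [('read', 1), ('read', 2)]
```

The key point the sketch captures is mutation awareness: because the second op mutates `x`, the first read is forced to happen before the write and the second read after it, so in-place updates stay safe even when execution is otherwise asynchronous.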