Summary
Posted: Nov 5, 2024
Role Number: 200571375
Do you think differently? Are you eager to break the status quo, bold and ambitious, unafraid to take risks, and passionate about building best-in-class technology? If so, what better place to do this than Apple? At Apple, “we think different, we push the boundaries of computing and intelligence. We build products that bring a smile to people’s faces.” The Foundation Model Infrastructure team, within the Machine Learning Platform Technologies organization, is the backbone of Apple Intelligence. It builds the frameworks, services, and tools that power the largest Apple foundation models on servers. Our infrastructure powers a wide range of services at Apple, including Apple Search, Apple Music, Apple TV, the App Store, iMessage, Photos & Camera, Spotlight, Safari, Siri, and exciting upcoming Apple products, serving millions of queries every day at incredibly low latency while drawing every ounce of compute from our hardware. As part of this group, you will have the chance to bring intelligence to billions of users across the world and to make a difference in people’s lives. You will work on optimizing language, vision, and speech models with billions of parameters using state-of-the-art technologies, and on making them run at Apple scale.
Description
Work alongside the Foundation Model Research team to optimize inference for cutting-edge model architectures. Work closely with product teams to build production-grade solutions that launch models serving millions of customers in real time. Build tools to understand inference bottlenecks across different hardware and use cases. Mentor and guide engineers in the organization.
Minimum Qualifications
- Demonstrated experience in leading and driving complex, ambiguous projects.
- Experience with high-throughput services, particularly at supercomputing scale.
- Proficient in running applications in the cloud (AWS, Azure, or equivalent) using Kubernetes and Docker.
- Familiar with GPU programming concepts (e.g., CUDA) and with popular machine learning frameworks such as PyTorch or TensorFlow.
Preferred Qualifications
- Proficient in building and maintaining systems written in modern languages (e.g. Go, Python).
- Familiar with fundamental deep learning architectures such as Transformer models and encoder/decoder models.
- Familiar with NVIDIA TensorRT-LLM, vLLM, DeepSpeed, and NVIDIA Triton Inference Server.
- Experience writing custom GPU kernels in CUDA or OpenAI Triton.
Pay & Benefits
- At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $166,600 and $296,300, and your base pay will depend on your skills, qualifications, experience, and location.
Apple employees also have the opportunity to become an Apple shareholder through participation in Apple’s discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple’s Employee Stock Purchase Plan. You’ll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses — including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
Job Features
- Job Category: Engineering
- Job Reference ID: 200571375
- Job Location: Seattle, Washington, United States