Sr Staff Machine Learning Platform Engineer (Prisma AIRS)

FULL TIME · Lead/Staff

Salary

No salary data


Ghost Score

Better than ~65% of Engineering jobs in this category

Freshness

Posted 1 week ago

Job Description

Palo Alto Networks is dedicated to protecting the digital way of life through innovative technology. The Sr Staff Machine Learning Platform Engineer will lead the architectural design and strategy for the Prisma AIRS AI platform, overseeing the development of scalable ML inference systems and providing technical leadership to the team.

Responsibilities:

  • Lead the architectural design of a highly scalable, low-latency, and resilient ML inference platform capable of serving a diverse range of models for real-time security applications
  • Provide technical leadership and mentorship to the team, driving best practices in MLOps, software engineering, and system design
  • Drive the strategy for model and system performance, guiding research and implementation of advanced optimization techniques such as custom kernels, hardware acceleration, and novel serving frameworks
  • Establish and enforce engineering standards for automated model deployment, robust monitoring, and operational excellence for all production ML systems
  • Act as a key technical liaison to other principal engineers, architects, and product leaders to shape the future of the Prisma AIRS platform and ensure end-to-end system cohesion
  • Tackle the most ambiguous and challenging technical problems in large-scale inference, from mitigating novel security threats to achieving unprecedented performance goals

Qualifications:

  • BS/MS or Ph.D. in Computer Science, a related technical field, or equivalent practical experience
  • Professional experience in software engineering with a deep focus on MLOps, ML systems, or productionizing machine learning models at scale
  • Expert-level programming skills in Python (required)
  • Deep, hands-on experience designing and building large-scale distributed systems on a major cloud platform (GCP, AWS, Azure, or OCI)
  • Proven track record of leading the architecture of complex ML systems and MLOps pipelines using technologies like Kubernetes and Docker
  • Mastery of ML frameworks (TensorFlow, PyTorch) and extensive experience with advanced inference optimization tools (ONNX, TensorRT)
  • Demonstrated expertise with modern LLM inference engines (e.g., vLLM, SGLang, TensorRT-LLM) (required)

Required Skills: MLOps, Machine Learning Systems, Python, Go, Java, C++, Distributed Systems, Cloud Platforms, Kubernetes, Docker, TensorFlow, PyTorch, Inference Optimization, ONNX, TensorRT, Model Architectures, LLM Inference Engines, CUDA Kernel Development, Data Infrastructure, Kafka, Spark, Flink, CI/CD Pipelines, Jenkins, GitLab CI, Tekton, Mentorship

Ghost Score Breakdown

Factors adding points:

  • No salary (mandate state violation)
  • No company logo
  • Very fresh posting (0-3 days)

Factors not triggered:

  • Known scam/ghost company
  • Reposted listing
  • Expired deadline
  • High job-to-employee ratio
  • Recruiting agency

Overall: 14/100 (Low Ghost Risk)

Application Tips

  • Top skills mentioned: Python, Java, Go. Make sure your resume highlights these.
  • This listing shows strong signals of being a real opportunity — apply with confidence.
