Lead Engineer, Reinforcement Learning & Scenario Generation
Serve Robotics
Location: Bay Area / Remote (USA); British Columbia, Calgary, Montreal, Toronto (Canada, remote)
Employment Type: Full time
Location Type: Remote
Department: Software
Compensation: $225K – $300K • Offers Equity
The salary range listed in this posting is representative of the range of levels being considered for this position. Total compensation will vary based on geographic location and level. Leveling, as well as positioning within a level, is determined by a range of factors, including, but not limited to, a candidate's relevant years of experience, domain knowledge, and interview performance.
At Serve Robotics, we’re reimagining how things move in cities. Our personable sidewalk robot is our vision for the future. It’s designed to take deliveries away from congested streets, make deliveries available to more people, and benefit local businesses.
The Serve fleet has been making commercial deliveries in Los Angeles, Miami, Dallas, Atlanta, and Chicago, delighting merchants, customers, and pedestrians along the way. We’re looking for talented individuals who will grow robotic deliveries from surprising novelty to efficient ubiquity.
Who We Are
We are tech industry veterans in software, hardware, and design who are pooling our skills to build the future we want to live in. We are solving real-world problems leveraging robotics, machine learning and computer vision, among other disciplines, with a mindful eye towards the end-to-end user experience. Our team is agile, diverse, and driven. We believe that the best way to solve complicated dynamic problems is collaboratively and respectfully.
The Lead Engineer, RL Scaling & Procedural Scenario Generation is responsible for building scalable training pipelines and generating high-fidelity synthetic scenarios. This role designs procedural simulation environments, creates diverse long-tail edge cases, and optimizes RL systems to train robust foundational models. The role sits at the intersection of simulation, machine learning, distributed systems, and content generation, and has a high impact on how quickly and safely agents learn in simulation.
Responsibilities
Develop RL algorithms for terrain intelligence and social navigation behaviors.
Design, build, and optimize large-scale RL training pipelines (distributed compute, GPU clusters, containerized workflows).
Implement curriculum learning, domain randomization, and multi-agent RL strategies.
Optimize RL model performance, sample efficiency, and stability across thousands to millions of simulation steps.
Build automated tools for experiment orchestration, rollout collection, and metrics visualization.
Develop procedural generation pipelines for synthetic environments, agents, and dynamic behaviors.
Build tools to generate long-tail scenarios: sudden object appearances, traffic behaviors, rare events, and environmental variations.
Create systems for configuration, validation, and scoring of generated scenarios.
Collaborate with autonomy, ML, and safety teams to map real-world failures into repeatable synthetic simulation cases.
Design APIs to connect RL agents, scenario generators, planners, and environment simulators.
Debug and optimize simulation performance (real-time speed, determinism, reproducibility).
Work with 3D assets, traffic models, and mapping systems (e.g., Isaac Sim, CARLA, Unity, Gazebo).
Partner with autonomy, data, and modeling teams to define training objectives and scenario requirements.
Translate real-world logs and edge cases into parameterized procedural content.
Document tools, frameworks, and workflows for internal users.
Qualifications
Master’s degree in Robotics, AI, Computer Science, Mathematics, or a related field.
7+ years of professional experience shipping transformer-based AI models that handle complex navigation or manipulation tasks in AV or robotics products deployed at scale in the real world.
3+ years of technical leadership or architecture experience.
Strong experience with Reinforcement Learning (PPO, SAC, A3C, DQN, multi-agent RL, or equivalents).
Hands-on experience with distributed training frameworks (Ray RLlib, Accelerate, PyTorch Distributed, Kubernetes, or similar).
Proficiency in Python and C++ for performance-critical simulation or graphics pipelines.
Experience building or modifying simulation environments (Isaac Sim, Unity, Unreal, CARLA, Gazebo, MuJoCo, or custom engines).
Experience with procedural generation (noise functions, rule-based systems, agent scripts, behavior trees).
Experience with GPU compute, containers, and cloud infrastructure.
What Makes You Stand Out
Background in generative AI (diffusion, LLMs) for scenario synthesis or environment creation.
Experience with traffic simulation (SUMO) or sensor simulation (LiDAR, camera pipelines).
Knowledge of CUDA, graphics engines, physics modeling, or rendering.
* Please note: The base salary range listed in this job description reflects compensation for candidates based in the San Francisco Bay Area. We are also open to qualified talent working remotely in the following locations:
United States - Base salary range (all U.S. locations): $190K – $230K USD
Canada - Base salary range (all Canadian locations): $160K – $190K CAD