Who We Are
AeroVect is transforming ground handling with autonomy, redefining how airlines and ground service providers around the globe run day-to-day operations. We are a private company founded in 2020 and backed by top-tier venture capital investors in aviation and autonomous driving. Our customers include some of the world’s largest airlines and ground handling providers. For more information, visit www.aerovect.com.
We are looking for a Perception Engineer who knows how to bring best-in-class reliability to autonomous driving systems in structured, low-speed environments.
In this role, you'll own and advance core perception subsystems of the AeroVect Driver at a fast-paced, early-stage startup. Leveraging your experience building production-grade systems, you’ll propel the AeroVect Driver to achieve category-defining driving precision for the airport operational design domain.
Your scope will include designing, implementing, and iterating on key perception modules spanning computer vision, LiDAR perception, semantic segmentation, point cloud matching, sensor fusion, and scene understanding, deployed across all AeroVect vehicles.
This is a hands-on opportunity to help develop a market-defining enterprise product that combines autonomous vehicle technology with a robotics-as-a-service (RaaS) business model. This role reports to our Perception Lead and works closely with the autonomy engineering team.
What You’ll Do
Define, implement, and own hands-on improvements to the perception module, working toward milestones with internal and external partners
Qualify all subsystems against objective measures, with an eye toward functional safety and systems engineering best practices
Collaborate with vehicle engineering to create an integrated system, including sensor/compute selection and integration
What You’ll Need
Bachelor’s degree in Computer Science, Math, Electrical Engineering, Mechanical Engineering, Robotics, Physics, or a related field
Prior background (academic or industry) in developing perception modules for autonomous systems, with strong theoretical grounding
Knowledge of modern camera-based object detection, lane detection, LiDAR-based object detection, and segmentation models
Strong programming skills in C++ (preferred) or Python, and strong algorithmic problem-solving skills
Working experience with Linux-based operating systems
Experience with the Robot Operating System (ROS) framework and tools such as RViz, rqt, and tf
Strong reasoning skills and a mathematics background spanning linear algebra, geometry, calculus, optimization, and probability
Collaborative nature and effective communication skills
Nice to Have
MS in Computer Science, Math, Robotics, or a related field
In-depth understanding of ROS/ROS 2, DDS, or other networking middleware
2+ years of industry experience focused on perception for robotic ground vehicles