
Perception Intern

AeroVect


Toronto, ON, Canada · San Francisco, CA, USA
Posted on Tuesday, September 12, 2023

Who We Are

AeroVect is transforming ground handling with autonomy, redefining how airlines and ground service providers around the globe run day-to-day operations. We are a private company founded in 2020 and backed by top-tier venture capital investors in aviation and autonomous driving. Our customers include some of the world’s largest airlines and ground handling providers. For more information, visit www.aerovect.com.

Job Description

We are looking for a Perception Engineer who knows how to bring best-in-class reliability to autonomous driving systems in structured, low-speed environments.

In this role, you'll work on various perception subsystems of the AeroVect Driver at a fast-paced, early-stage startup. Leveraging your experience building production-grade systems, you’ll propel the AeroVect Driver to achieve category-defining driving precision for the airport operational design domain.

Your scope will include designing, implementing, and iterating upon key perception modules comprising computer vision, LiDAR perception, semantic segmentation, point cloud matching, sensor fusion, and scene understanding projects, deployed across all AeroVect vehicles.

This is a hands-on opportunity to help develop a market-defining enterprise product that combines autonomous vehicle technology with a robotics-as-a-service (RaaS) business model. This role reports to our Perception Lead and works closely with the autonomy engineering team.

What You’ll Do

  • Define, implement, and own hands-on improvements to the perception module, targeting milestones and working with internal and external partners

  • Qualify all subsystems using objective measures, with an eye to functional safety and systems engineering best practices

  • Collaborate with vehicle engineering to create an integrated system, including sensor/compute selection and integration

  • Understand and keep up to date with the state of the art in perception

Qualifications

Minimum Qualifications

  • Bachelor’s Degree in Computer Science, Math, Electrical Engineering, Mechanical Engineering, Robotics, Physics, or a related field

  • Prior academic or industry background in developing perception modules for autonomous systems, with strong theoretical expertise.

  • Working knowledge of deep-learning-based 3D object detection models using LiDAR.

  • Knowledge of modern camera-based object detection, LiDAR-based object detection, and segmentation models.

  • Knowledge of deep learning frameworks such as PyTorch and TensorFlow.

  • Strong C++ (preferred) or Python programming and algorithmic problem-solving skills.

  • Working experience in a Linux-based operating system.

  • Experience using the Robot Operating System (ROS) framework.

  • Strong reasoning skills and mathematics background including linear algebra, geometry, calculus, optimization, and probability.

  • Collaborative nature and effective communication skills.

Desired Qualifications

  • MS or PhD in Computer Science, Math, Robotics or a related field

  • Advanced research experience in vehicle autonomy or machine learning.

  • Prior experience developing and supporting all aspects of autonomy for a production ground-vehicle subsystem or research prototype (end-to-end, full-lifecycle development preferred).

  • Prior experience at an autonomous driving company or an engineering startup.