Staff Software Engineer - Data Infrastructure
About The Role
Rippling is the system of record for employee data - a complete Employee Management System. To solve this broad problem, a variety of applications and datasets need to come together as a graph connected through the employee record at its center.
We need a data platform that makes all forms of data easily accessible, supports a variety of transformations, and enables efficient querying across online and offline use cases. You will help build this distributed data platform: defining key APIs, designing for scale and high availability, and supporting online, streaming, and batch scenarios.
At Rippling, we use Redis, Mongo, and Postgres to serve APIs; Kafka for streaming; Apache Pinot and Apache Presto for OLAP; and S3 and Snowflake for our data lake and warehouse.
What You'll Do:
Work on distributed processing engines and distributed databases.
Create data platforms, data lakes, and data ingestion systems that work at scale.
Write core libraries (in Python and Go) to interact with various internal data stores.
Define and support internal SLAs for common data infrastructure.
Design, develop, and test software systems, improvements, products, and user-facing experiences.
Leverage big data technologies like Postgres, Kafka, Presto, Pinot, Flink, Airflow, Mongo, Redis and Spark.
Explore new and upcoming data technologies to support Rippling’s exponential growth.
Qualifications:
8+ years of professional work experience.
Experience working in a fast-paced, dynamic environment.
Experience building projects with sound abstractions and architecture.
Comfortable developing scalable, extensible core services used across many products.
If you don’t meet all of the requirements listed here, we still encourage you to apply. No job description is perfect, and we might find an even more suitable opportunity that matches your skills and experience.