
Senior Data Engineer
- Sydney, NSW
- Permanent
- Full-time
- Provide technical leadership in the data space, collaborating with cross-functional teams including Architecture, Quality Assurance (QA), Business Analysis (BA), Application Engineering, Data Engineering, Data Science, Analytics, Digital Marketing, and others.
- Act as a key driver of technical delivery, proactively identifying risks and solutions, and coordinating with engineers, BAs, QAs, and Project Managers (PMs) to ensure successful execution of data initiatives.
- Lead technical solution design and troubleshooting, ensuring the team builds scalable and maintainable systems.
- Ensure adherence to data privacy, security, observability, and governance standards.
- Collaborate with architects and senior engineers to design resilient, high-performance data solutions.
- Build scalable data pipelines to ingest, process, and expose data using technologies like Snowflake, AWS Redshift, Kinesis, Lambda, DynamoDB, S3, Glue, Apache Storm, and others.
- Develop batch and streaming data solutions using Apache Kafka, Apache Spark, Airflow, and Delta Lake for real-time and scheduled data processing.
- Design and implement event-driven data architectures with streaming frameworks such as Kafka Streams and Kinesis Data Firehose.
- Implement Data Quality, Data Lineage, and Observability frameworks.
- Build CI/CD pipelines using tools such as GitHub Actions and Terraform.
- Collaborate with Site Reliability Engineers to ensure stability and uptime of production pipelines and services.
- Continuously identify opportunities for performance improvement, cost optimisation, and process efficiency.
- Work with Data Scientists to ensure high-quality data for Machine Learning models and integrate them into production-ready pipelines.
- Undergraduate degree in Computer Science, Engineering, Data Science, or a related discipline.
- 5+ years of hands-on experience in data profiling, source-to-target mapping, ETL development, SQL optimisation, testing, and implementation.
- Proven experience delivering large-scale data ingestion, transformation, and processing using AWS technologies such as Kinesis, Athena, Redshift, DynamoDB, Lambda, and S3.
- Proficient with modern data platforms and tools including Snowflake, Databricks, and Apache Storm.
- Strong experience working with relational databases (SQL) as well as NoSQL databases, particularly MongoDB.
- Proven ability to translate complex business requirements into technical solutions through effective cross-functional collaboration, strong communication skills, and adaptable problem-solving approach.
- Exposure to CI/CD tools and practices, including Ansible, CloudFormation, Jenkins, and related automation technologies.
- Experience developing and deploying solutions on cloud platforms such as AWS, GCP, or Azure.
- Experience with Jira and Confluence.
- Experience consuming RESTful APIs.
- Exposure to AI/ML technologies.
- Complimentary event tickets
- Birthday and volunteering leave
- Wellbeing discounts & flu vaccinations
- Paid parental leave & free employee support (EAP)
- Global rewards and recognition
- Learning, development & career pathways
- A diverse, inclusive, and passionate team