
Data Engineer
- Sydney, NSW
- Permanent
- Full-time
- Hybrid work setup
- Paid parental leave - 12 weeks primary, 6 weeks secondary
- Birthday leave, YOU day each year, as well as connecting people leave (up to 2 weeks working from anywhere)
- In-house Learning and Development initiatives
- ELMO Social and Diversity clubs
- Wellbeing initiatives
- Mental Health/EAP programs
- Flare Benefits (great discounts, novated leasing, salary sacrifice)

Our values:
- Reimagine What's Possible - We believe innovation is human at its core. By staying open, fearless, and adaptive, we continuously push boundaries - while keeping people at the heart of everything we do.
- Obsess over Customers - Everything we do is designed to positively impact our customers.
- Help Others Thrive - Be they colleagues, communities or customers, we champion ways to help others thrive.
- Be Fearlessly Optimistic - We bring unwavering positivity to any challenge, as we know it will drive meaningful change.

What you'll be doing:
- Develop and monitor data pipelines from both internal and external sources
- Transform and model data, combining complex datasets from multiple systems for downstream consumption and visualization
- Work primarily with SQL and Python to automate and optimize data loading and transformation systems (a minimal sketch follows this list).
- Maintain, operate, and tune Snowflake and dbt
- Work with the AI team to develop and automate processes using retrieval-augmented generation (RAG) technologies (see the RAG sketch after this list)
- Monitor and troubleshoot operational or data issues in the data pipelines
- Drive architectural plans and implementation for future data storage, reporting, and analytic solutions.
- Design, develop, and maintain robust ETL pipelines to support data ingestion, transformation, and integration from various sources.
- Write complex and optimized SQL queries for data extraction, analysis, and transformation across large datasets.
- Manage and optimize data storage solutions, including data warehouses (e.g., Snowflake, Redshift).
- Troubleshoot ETL failures and performance issues in SQL scripts or data processes.
- Implement data security, access control, and compliance according to company and legal standards.
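
To make the SQL/Python and Snowflake/dbt items above concrete, here is a minimal load-then-transform sketch, assuming the snowflake-connector-python package and the dbt CLI. The account, credentials, table name, and dbt selector are hypothetical placeholders, not details of this role.

```python
# Minimal sketch: bulk-load a CSV extract into Snowflake, then hand
# transformation/modelling off to dbt. All connection details, the table
# name, and the dbt selector below are hypothetical placeholders.
import subprocess

import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

def load_and_transform(csv_path: str) -> None:
    df = pd.read_csv(csv_path)  # raw extract from a source system
    conn = snowflake.connector.connect(
        account="xy12345.ap-southeast-2",  # placeholder account
        user="ETL_USER",
        password="...",  # use a secrets manager in practice
        warehouse="LOAD_WH",
        database="RAW",
        schema="SOURCES",
    )
    try:
        # write_pandas bulk-loads the DataFrame into a staging table
        write_pandas(conn, df, table_name="EVENTS_STAGE", auto_create_table=True)
    finally:
        conn.close()
    # dbt then runs the SQL models that transform the staged data
    subprocess.run(["dbt", "run", "--select", "staging+"], check=True)
```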
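The RAG item is broad; as one illustration, this sketch shows the core retrieve-then-generate loop. embed() is a toy stand-in for whatever embedding model the AI team uses, and the final LLM call is left as a comment, since the ad names no specific stack.

```python
# Minimal retrieval-augmented generation (RAG) sketch: score documents
# against a question by cosine similarity, keep the top k, and build a
# context-grounded prompt. embed() is a toy placeholder, not a real model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: swap in a real embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)  # toy 384-dimensional vector

def retrieve(question: str, docs: list[str], k: int = 3) -> list[str]:
    q = embed(question)
    scored = []
    for doc in docs:
        d = embed(doc)
        # cosine similarity between question and document vectors
        scored.append((float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d))), doc))
    return [doc for _, doc in sorted(scored, reverse=True)[:k]]

def build_prompt(question: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(question, docs))
    # In practice this prompt is sent to an LLM; its reply is the answer.
    return f"Answer using only this context:\n{context}\n\nQ: {question}"
```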

What you'll bring:
- Experience with Kafka (see the consumer sketch after this list)
- Strong knowledge of AWS cloud-native infrastructure
- Proficiency with Python scripting
- Strong experience with AWS RDS query optimisation on Aurora PostgreSQL and Aurora MySQL (see the EXPLAIN sketch after this list)
- Experience with workflow management engines (e.g. Airflow, AWS Step Functions; see the DAG sketch after this list)
- Experience working with Snowflake and dbt
- You will be a motivated self-starter who enjoys working with a wide range of internal stakeholders and can coach senior executives through the data science journey.
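
For the Kafka item above, a minimal consumer sketch using the confluent-kafka client is shown below. The broker address, topic, and group id are placeholders.

```python
# Minimal sketch: consume JSON events from a Kafka topic and hand each
# one to a downstream loader. Broker, topic, and group id are placeholders.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",  # placeholder broker
    "group.id": "data-eng-pipeline",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])  # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # wait up to 1s for the next record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())  # assumes JSON-encoded payloads
        # ...transform `event` and pass it to the pipeline's loader...
finally:
    consumer.close()
```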
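For the workflow-engine item, here is a minimal Airflow DAG sketch (assuming Airflow 2.x) wiring an extract, load, and dbt-transform chain. The DAG id, schedule, and task bodies are illustrative only.

```python
# Minimal sketch of a daily extract -> load -> transform DAG.
# DAG id, schedule, and task callables are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def extract_from_source() -> None:
    ...  # pull data from an internal or external source

def load_to_warehouse() -> None:
    ...  # stage the extract into the warehouse

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_source)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    transform = BashOperator(task_id="dbt_run", bash_command="dbt run")
    extract >> load >> transform  # set task ordering
```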
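For the Aurora query-optimisation item, one routine first step is reading a query plan. This sketch (assuming psycopg2 against Aurora PostgreSQL) runs EXPLAIN ANALYZE on a hypothetical query; the host, credentials, and orders table are placeholders.

```python
# Minimal sketch: print the execution plan of a slow query on Aurora
# PostgreSQL. Host, credentials, and the `orders` table are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="mycluster.cluster-xxxx.ap-southeast-2.rds.amazonaws.com",  # placeholder
    dbname="app",
    user="readonly",
    password="...",  # use a secrets manager in practice
)
with conn, conn.cursor() as cur:
    cur.execute("""
        EXPLAIN (ANALYZE, BUFFERS)
        SELECT customer_id, count(*)
        FROM orders
        WHERE created_at >= now() - interval '7 days'
        GROUP BY customer_id
    """)
    for (line,) in cur.fetchall():
        # Look for sequential scans and row-estimate mismatches; these
        # usually point at missing indexes or stale statistics.
        print(line)
conn.close()
```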