
Data Engineer
- Sydney, NSW
- Permanent
- Full-time
Why join us
- Innovation at Our Core – We challenge the status quo and push boundaries to create better solutions.
- Work with the Best – Collaborate with some of the brightest minds in fintech, financial services, and strategy.
- Make an Impact – Contribute to meaningful projects that shape our business and the future of property finance.
- Grow & Evolve – Develop your skills and advance your career in a fast-moving, purpose-driven environment.
What you'll be doing
- Design, build, and maintain scalable data pipelines using modern data stack tools (dbt, Airflow, Python); a minimal illustrative sketch of this kind of pipeline follows this list.
- Take ownership of troubleshooting and resolving issues in data workflows, ensuring reliable and timely data delivery.
- Work with Databricks and Snowflake environments to support ingestion, transformation, and modelling of data.
- Contribute to the development and optimisation of pipelines built on Unity Catalog, Delta Live Tables, and Spark within Databricks.
- Collaborate with senior engineers and analysts to design and implement data models that support analytics and product needs.
- Translate business requirements into technical solutions, including prototyping and proof-of-concepts.
- Apply best practices in data modelling (Kimball, Data Vault, etc.) to ensure scalable and performant solutions.
- Partner with data analysts, engineers, and business stakeholders to clarify requirements and deliver solutions.
- Clearly communicate progress, challenges, and technical considerations to both technical and non-technical stakeholders.
- Maintain up-to-date documentation for pipelines, workflows, and solutions, ensuring knowledge is shared across the team.
- Support data governance initiatives by monitoring data quality, applying standards, and flagging anomalies.
- Proactively identify opportunities to optimise pipelines and improve performance in Databricks and Snowflake.
- Support ongoing platform improvements and contribute to scaling best practices across environments.
- Stay across emerging trends and best practices in cloud data engineering.
- Mentor junior engineers where appropriate and contribute to team knowledge-sharing activities.
- Actively participate in improving team culture, ways of working, and engineering standards.
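
For candidates new to this stack, here is a minimal, hypothetical sketch of the orchestration described in the first responsibility above: an Airflow DAG that lands raw data and then triggers a dbt build. Every name in it (the DAG id, the ingestion command, the dbt project path) is an illustrative assumption, not an actual Lendi Group pipeline.

```python
# Hypothetical sketch only: all names, paths, and commands below are
# illustrative assumptions, not a real pipeline. Assumes Airflow 2.4+
# and the dbt CLI installed on the worker.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_property_data_pipeline",  # invented name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Land raw files first; a real pipeline would use a dedicated
    # ingestion operator rather than a placeholder shell command.
    ingest = BashOperator(
        task_id="ingest_raw_data",
        bash_command="echo 'land raw files in the lake'",
    )

    # Run dbt transformations once ingestion succeeds. The project
    # path is a placeholder.
    transform = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt build --project-dir /opt/dbt/analytics",
    )

    ingest >> transform  # ingest must complete before transform runs
```

The pattern this illustrates is separation of concerns: the orchestrator owns scheduling, ordering, and retries, while dbt owns the SQL transformations.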
What you'll bring
- 2–4 years’ experience in data engineering or a related role.
- Strong SQL skills and experience with data modelling techniques (e.g., Kimball, Data Vault); a small illustrative sketch follows this list.
- Hands-on experience building and maintaining data pipelines using dbt, Airflow, or similar orchestration/ETL tools.
- Practical exposure to Databricks (Spark, Delta Live Tables, Unity Catalog) and Snowflake (warehousing, performance tuning, security).
- Proficiency in Python for data processing and automation.
- Understanding of data governance, data quality, and best practices for managing cloud data platforms.
- Strong problem-solving skills with the ability to troubleshoot and resolve data pipeline issues.
- Good communication skills to collaborate with engineers, analysts, and business stakeholders.
- Experience working in cloud platforms (AWS, Azure, or GCP).
- Familiarity with CI/CD, infrastructure-as-code (Terraform, Git, etc.).
- Exposure to BI/visualisation tools (Tableau, Power BI, or similar).
- Previous experience in a mid-size to large data environment, supporting cross-functional teams.
- Interest in mentoring juniors and contributing to team knowledge-sharing.
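
On the modelling side, a small hypothetical sketch of the Kimball-style dimensional work mentioned above: a PySpark job that derives a customer dimension with a surrogate key. The data and column names are invented for illustration; on Databricks, a SparkSession is already provided as `spark`.

```python
# Hypothetical sketch only: invented data and names, not a real model.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dim_customer_sketch").getOrCreate()

# Illustrative source rows; a real job would read a raw lake table instead.
raw = spark.createDataFrame(
    [("c1", "Ada", "NSW"), ("c2", "Grace", "VIC")],
    ["customer_id", "name", "state"],
)

# Core Kimball idea: give the dimension a surrogate key so fact tables
# never depend on source-system identifiers.
dim_customer = (
    raw.withColumn("customer_sk", F.monotonically_increasing_id())
       .select("customer_sk", "customer_id", "name", "state")
)

dim_customer.show()
```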
What we offer
- A vibrant, relaxed, yet professional culture.
- A hybrid working arrangement designed to support work-life balance while fostering meaningful connection and collaboration.
- A holistic wellbeing program offering 24/7 support, including medical, mental health, and financial wellbeing services, to enable our workforce to thrive at home and at work.
- Generous paid Parental Leave: we celebrate our growing Lendi Group family with 18–26 weeks’ leave for primary carers and up to 4 weeks for secondary carers.
- An additional week’s Loyalty Leave each year after reaching 3 years’ service.
- Wellness initiatives with a strong focus on psychological safety.