
Data Engineer - Databricks & PySpark
- Sydney, NSW
- Permanent
- Full-time
Responsibilities:
- Take ownership of an existing proof of concept (PoC) and lead its evolution into a production-ready service.
- Build, refine, and scale data pipelines and outputs to support ongoing monitoring and management (see the pipeline sketch after this list).
- Collaborate closely with developers and stakeholders to ensure smooth delivery within a tight 6-month timeline.
- Apply best practices in CI/CD and production deployment to ensure stability and maintainability.
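
To give a flavour of the pipeline work involved, here is a minimal, hedged sketch of a Delta Live Tables pipeline in Python. It assumes the Databricks DLT runtime (which supplies the `dlt` module and the `spark` session), and the catalog volume path, table names, and columns are hypothetical placeholders, not a prescribed implementation.

```python
# Minimal Delta Live Tables (DLT) sketch. Runs only inside a Databricks DLT
# pipeline, where the `dlt` module and the `spark` session are provided by the
# runtime. Volume path, table names, and columns are hypothetical placeholders.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw events ingested with Auto Loader from a Unity Catalog volume.")
def raw_events():
    return (
        spark.readStream.format("cloudFiles")          # Auto Loader: incremental file ingestion
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/monitoring/raw_events/")  # hypothetical volume path
    )


@dlt.table(comment="Cleaned events feeding the monitoring outputs.")
@dlt.expect_or_drop("valid_timestamp", "event_ts IS NOT NULL")  # data-quality expectation
def clean_events():
    return (
        dlt.read_stream("raw_events")
        .withColumn("event_date", F.to_date("event_ts"))
        .select("event_id", "event_date", "payload")   # hypothetical columns
    )
```

In a real deployment, the target Unity Catalog catalog and schema, permissions, and compute would be configured in the pipeline settings rather than hard-coded in the source.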
Skills and experience:
- Python: Strong to advanced proficiency, especially working with PySpark.
- Databricks: Solid experience with Unity Catalog and Delta Live Tables (intermediate to advanced).
- RESTful APIs: Advanced experience handling client-credential authentication and requests from Python (see the sketch at the end of this posting).
- CI/CD: Deep understanding of best practices; confident in setting up and maintaining pipelines.
- End-to-End Development: Experience pushing solutions into production environments.
- SQL: Useful for data manipulation and validation.
- Microsoft Graph API: Great if you've worked with it, but general RESTful API experience is fine.
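
As a concrete illustration of the client-credentials work mentioned above, below is a hedged sketch of requesting an app-only token from Microsoft Entra ID and calling Microsoft Graph with `requests`. The tenant ID, client ID, and secret are placeholders; in a Databricks job these would typically come from a secret scope or key vault rather than source code.

```python
# Sketch of the OAuth 2.0 client-credentials flow against Microsoft Graph.
# Tenant, client ID, and secret are placeholders; load the secret from a
# secret scope / key vault in practice, never hard-code it.
import requests

TENANT_ID = "<tenant-id>"          # placeholder
CLIENT_ID = "<client-id>"          # placeholder
CLIENT_SECRET = "<client-secret>"  # placeholder, load from a secret store


def get_graph_token() -> str:
    """Request an app-only access token for Microsoft Graph."""
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": "https://graph.microsoft.com/.default",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def list_users(token: str) -> dict:
    """Call a Graph endpoint with the bearer token (needs the User.Read.All application permission)."""
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/users",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

The same pattern generalises to any REST API protected by OAuth 2.0 client credentials; only the token endpoint and scope change.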