Make an impact at a global and dynamic investment organization
We are one of the fastest-growing and most respected institutional investors in the world, headquartered in Toronto with multiple offices around the globe.
We are looking for an experienced Full Stack Data Engineer to join our team building a next-generation data platform based on Data Mesh architecture and principles. The ideal candidate has extensive hands-on experience building big data platforms and data pipelines, backend development in Python, and BI/analytics tools, as well as experience with DevOps and AWS.
- Design, build, and maintain a scalable and efficient data platform using data engineering technologies such as AWS Glue, EMR, Athena, Redshift, Lake Formation, Apache Spark, Hive, HDFS, and Trino.
- Build and manage data pipelines and common cross-cutting data concerns such as data catalog, data lineage, data quality, data profiling, data discovery, and metadata management.
- Build and manage BI/analytics dashboards that reduce time to insight for business stakeholders.
- Implement CI/CD pipelines using Terraform, Jenkins, GitHub Actions, and Gitflow.
- Write clean, reusable, and efficient code.
- Develop and maintain APIs using Python, ensuring API security and best practices.
- Implement DevOps best practices to ensure efficient application deployment and management.
- Hands-on experience with data engineering technologies such as AWS Glue, EMR, Athena, Redshift, Lake Formation, Apache Spark, Apache Hive, and Apache Airflow.
- Extensive experience building data pipelines using an orchestration tool such as Apache Airflow, and hands-on experience building cross-cutting concerns such as data catalog, data lineage, data quality, data profiling, data discovery, and metadata management.
- Proven, extensive experience with Python.
- Experience working with RESTful APIs and JSON, and familiarity with microservices architecture.
- Competitive salary