Data Engineer

Calgary, Alberta, Canada

Job reference: 1272722
Location: Calgary, Alberta, Canada
Sector: Energy - Oil & Gas
Function: IT & Telecoms
Employment type: Contract
Date published: May 5, 2026

Airswift is seeking a Data Engineer to work on a 12-month contract with one of our major clients in Calgary, AB.

Key Responsibilities

  • Design, build, and optimize ETL/ELT workflows using KNIME Analytics Platform
  • Integrate data from relational databases, APIs, and cloud storage
  • Automate and productionize workflows using KNIME Business Hub
  • Develop and scale data pipelines using Databricks (Apache Spark)
  • Implement data quality checks, anonymization, and governance controls
  • Document workflows, data pipelines, and operational processes
  • Lead the design and delivery of scalable Databricks data pipelines using Spark, Delta Lake, and PySpark
  • Drive Lakehouse architecture across ingestion, transformation, and curated data layers
  • Define technical standards and best practices for batch and streaming data engineering
  • Optimize performance and cost efficiency of Spark workloads
  • Implement enterprise-grade data governance, security, and monitoring (e.g., access controls, catalogs)
  • Act as a senior technical leader and mentor within the data engineering team

What You Bring

  • Post-secondary degree in Computer Science or Software Engineering, or equivalent experience
  • 7+ years of hands-on experience in data engineering
  • Strong experience with Databricks, Spark, and Lakehouse architectures
  • Advanced proficiency in PySpark, Python, and SQL
  • Advanced experience with KNIME Analytics Platform (nodes, components, workflow control)
  • Solid understanding of data modeling and ETL/ELT best practices
  • Experience building and supporting production-grade data pipelines at scale
  • Exposure to cloud platforms (Azure preferred, AWS acceptable)
  • Familiarity with DevOps / CI/CD practices for data workloads

Nice to Have

  • Experience with MLOps and deploying machine learning workloads in Databricks
  • Familiarity with data integration tools (e.g., Azure Data Factory, HVR, or similar)
  • Experience working in regulated or large enterprise environments
  • Exposure to streaming data architectures (Kafka, Structured Streaming)
