Data Engineer

100% Remote, United States

Data Engineering

Job type: Direct Hire
About the Company: 
One of the largest medical technology companies in the world, with over 65,000 employees.

About the Role:
This position requires extensive hands-on experience in data system design and coding, developing modern data pipelines (AWS Step Functions, Prefect, Airflow, Luigi, Python, Spark, SQL) and associated code in cloud and on-prem Linux/Windows environments.

This is a highly collaborative position that partners with and advises multiple teams, providing guidance throughout the creation and consumption of our data pipelines.

Key responsibilities include:

  • Design and implement conceptual, logical, and physical data workflows that support business needs on Cloud-based systems.
  • Propose architecture that enables integration of disparate enterprise data.
  • Build and maintain efficient data pipeline architecture, including ingress/egress pipelines for applications, data layers, and the Data Lake; define cost-effective data movement strategies across a hybrid cloud.
  • Lead cross-functional design sessions with functional experts to understand and detail data requirements and use cases.
  • Develop and document data movement standards and best practices, and promote them across the department.
  • Drive long-term data architecture roadmaps in alignment with corporate strategic objectives
  • Conduct code and design reviews to ensure data related standards and best practices are met
  • Proactively educate others on modern data engineering concepts and design
  • Mentor junior members of the team.

Ideal Candidate:

  • Candidates MUST have experience owning large, complex system architectures, plus hands-on experience designing and implementing data pipelines across large-scale systems.
  • Experience implementing data pipelines with AWS is a must.
  • Production delivery experience with cloud-based PaaS big data technologies (Snowflake, EMR, Databricks, etc.)
  • Experience with multiple cloud PaaS persistence technologies, and in-depth knowledge of cloud-based ETL offerings and orchestration technologies (AWS Step Functions, Airflow, etc.)
  • Experience with both stream-based and batch processing using modern technologies
  • Database design skills, including normalization/denormalization and data warehouse design
  • Strong analytical, debugging, and troubleshooting skills
  • Understanding of Distributed File Systems (HDFS, ADLS, S3)
  • Knowledge and understanding of relevant legal and regulatory requirements, such as SOX, PCI, HIPAA, Data Protection
  • Experience transitioning from on-prem big data installations to cloud is a plus
  • Strong Programming Experience – Python / Spark / SQL etc.
  • Experience in the healthcare industry, a plus
  • A collaborative, communicative mentality is a must!


Technology stack:

  • AWS
  • Spark / Python / SQL
  • Snowflake / Databricks / Synapse / MS SQL Server
  • ETL / orchestration tools (dbt etc.)
  • Azure / Cosmos / ADLS Gen 2
  • Git
  • Power BI / Tableau
  • ML / notebooks
  • Hadoop 2.0 / Impala / Hive

Education and experience required:

  • Bachelor's or Master's degree in Computer Science, Information Systems, or an engineering field, or equivalent relevant experience.
  • 10+ years of related experience developing data solutions and data movement.
