Data Engineer
Nimble Solutions
Job Description
Data Engineer
Chesterfield Office | Hybrid or Remote

Why You'll Want to Join
Join a leading Revenue Cycle Management (RCM) company dedicated to transforming healthcare data into actionable insights. We leverage cutting-edge technology to streamline financial and operational processes, improving efficiency and patient outcomes.
We are looking for a Data Engineer to help optimize data pipelines and build a next-generation data infrastructure incorporating technologies such as Microsoft Fabric, Azure Synapse, Databricks, and Snowflake.

Position Overview
Lead the modernization of our data infrastructure as a Data Engineer for Nimble Solutions. You'll architect scalable, cloud-native pipelines using Microsoft Fabric and Databricks to transform healthcare data (claims, EMR/EHR, HL7/FHIR) into actionable insights that drive revenue cycle optimization and clinical outcomes.
Why This Role Matters
Healthcare data engineering is mission-critical: clean, governed data flows directly impact financial accuracy, compliance, and the decisions that improve patient care. Your ETL/ELT pipelines enable our analytics and data science teams to unlock the full potential of healthcare data.

Key Responsibilities
• Design, build, and optimize ETL/ELT pipelines using Azure Synapse, Databricks, and Snowflake
• Develop robust data models and schemas for healthcare datasets, including claims, EMR/EHR, HL7, and FHIR standards
• Write and optimize SQL queries for performance across large healthcare datasets
• Implement data governance, quality frameworks, and HIPAA compliance controls
• Collaborate with analytics, data science, and business teams to define data requirements
• Monitor and troubleshoot data pipeline health and performance
• Develop Python or Scala code for complex transformations and data processing
• Support Power BI and analytics teams with data modeling and performance optimization
• Document data lineage, transformations, and technical architecture

Requirements
• 3+ years of professional data engineering or ETL/ELT development experience
• Expert-level SQL skills with proven optimization experience
• Proficiency in Python, Scala, or similar data processing languages
• Hands-on experience with cloud data platforms (Azure Synapse, Snowflake, Databricks, or equivalent)
• Understanding of healthcare data standards (HL7, FHIR, claims data structures)
• Strong grasp of data modeling, normalization, and schema design
• Experience with data versioning, CI/CD pipelines, and data quality frameworks

Preferred Qualifications
• Experience with Microsoft Fabric or Azure Data Factory
• Knowledge of HIPAA compliance and healthcare data security
• Background in healthcare, RCM, or claims processing
• Experience with dbt (data build tool) or equivalent transformation frameworks
• Exposure to dimensional modeling and data warehousing best practices

What Success Looks Like
• In 90 days: Deploy your first cloud pipeline to production; complete HIPAA training; establish baseline data quality metrics
• In 6 months: Reduce data pipeline latency by 30%; expand healthcare data models to include new sources; build reusable transformation components
• Ongoing: Maintain 99.5%+ pipeline uptime; mentor junior engineers; drive architectural improvements for scale and performance
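
A Taste of the Work
To give candidates a concrete feel for the pipeline and transformation work described under Key Responsibilities, here is a minimal PySpark sketch of a claims-data quality and aggregation step. It is illustrative only: the table names, columns, and storage paths are hypothetical placeholders, not our actual schema or codebase.

# Illustrative only: hypothetical claims ETL step in PySpark (Databricks-style).
# All paths, tables, and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_etl_example").getOrCreate()

# Read raw claim lines from a hypothetical landing zone.
raw_claims = spark.read.parquet("/landing/claims/2024/")

# Basic quality gate: drop rows missing a claim ID, deduplicate, normalize dates.
clean_claims = (
    raw_claims
    .filter(F.col("claim_id").isNotNull())
    .dropDuplicates(["claim_id"])
    .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
)

# Aggregate billed amounts per payer and month for downstream Power BI models.
payer_monthly = (
    clean_claims
    .groupBy("payer_id", F.date_trunc("month", "service_date").alias("service_month"))
    .agg(
        F.sum("billed_amount").alias("total_billed"),
        F.count("claim_id").alias("claim_count"),
    )
)

# Write to a curated zone as a partitioned dataset for analytics consumption.
payer_monthly.write.mode("overwrite").partitionBy("service_month").parquet("/curated/payer_monthly/")

In practice, work like this at Nimble Solutions also involves governance controls, lineage documentation, and HIPAA-compliant handling of protected health information, as outlined in the responsibilities above.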