Senior DevOps Engineer - Databricks
Elitez India
Job Description
Experience Requirements
● Total DevOps Experience: 3-6+ years.
● Databricks Platform Ops: 1+ years of hands-on experience managing Databricks at an enterprise level.
● Enterprise Scale: Proven experience automating multi-workspace environments for large financial institutions or other highly regulated industries.

Required Certifications
● Mandatory: Databricks Certified Data Engineer Professional.
● Essential: HashiCorp Certified: Terraform Associate.
● Cloud: Professional-level certification in Azure (e.g., Azure DevOps Engineer Expert) or AWS (DevOps Engineer Professional).

Core Technical Skills
1. Infrastructure as Code (IaC) & Provisioning
● Terraform Mastery: Expert proficiency in using the Databricks Terraform provider to manage workspaces, metastores, and complex resource provisioning.
● Workspace Automation: Experience with "zero-touch" environment setup, ensuring 100% consistency across Dev, UAT, and Production.
● Cluster Policies: Experience creating and enforcing cluster policies to standardize compute and prevent unauthorized resource sprawl.

2. CI/CD & DataOps Execution
● Databricks Asset Bundles (DABs): Mastery of DABs for packaging code, libraries (.whl), and jobs as a single deployable unit.
● Automated Testing: Integration of pytest and Spark testing frameworks into CI pipelines to ensure code quality before deployment.
● Orchestration Automation: Automated deployment of Databricks Workflows and task dependencies via code rather than manual UI configuration.

3. Governance & Security Hardening
● Unity Catalog (UC) Automation: Hands-on experience implementing UC governance via IaC, including service principals and fine-grained access control (see the sketch after this section).
● Networking Security: Proficiency in configuring Private Link, VNet injection, and IP access lists to ensure a hardened, private network perimeter.
● Identity Management: Automated user and group provisioning via SCIM and integration with enterprise identity providers (IdPs).
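
Illustrative only (not part of the requirements): a minimal sketch of the kind of fine-grained Unity Catalog automation described above, assuming the databricks-sdk Python package and authentication resolved from the environment. The catalog, schema, and group names are hypothetical.

    # Minimal sketch: grant read access on a Unity Catalog schema to a group.
    # Assumes the databricks-sdk package; host/token are taken from the
    # environment (e.g., DATABRICKS_HOST / DATABRICKS_TOKEN).
    # "finance_prod", "reporting", and "data_engineers" are hypothetical names.
    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import catalog

    w = WorkspaceClient()

    w.grants.update(
        securable_type=catalog.SecurableType.SCHEMA,
        full_name="finance_prod.reporting",
        changes=[
            catalog.PermissionsChange(
                principal="data_engineers",
                add=[catalog.Privilege.USE_SCHEMA, catalog.Privilege.SELECT],
            )
        ],
    )

In practice such grants would live in Terraform or a versioned deployment script rather than being applied ad hoc.
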
4. Observability & Cost Engineering
● System Tables: Proficiency in querying system.billing.usage and the audit log tables to build real-time DBU monitoring dashboards (a query sketch appears at the end of this description).
● Cost Control: Implementation of automated guardrails to optimize DBU consumption and enforce budgets.
● Pipeline Monitoring: Proactive alerting on job failures and performance degradation using Databricks SQL.

Preferred Candidate Background
● Distributed Systems Foundations: Understanding of Spark internals sufficient to troubleshoot deployment-related performance issues (e.g., library conflicts).
● Automation-First Mindset: Advanced Python skills applied to platform automation and custom tooling, not just simple scripts.
● Financial Services Expertise: Deep knowledge of the security protocols and network architecture required for sovereign wealth or banking environments.

Key Responsibilities
● Build & Automate: Develop robust CI/CD pipelines in GitHub Actions or Azure DevOps to automate the promotion of data artifacts.
● Platform Hardening: Collaborate with security teams to ensure the Lakehouse architecture meets strict financial compliance standards.
● Cost Management: Monitor and report on cloud spend, proactively right-sizing compute resources to reduce infrastructure costs.
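
Illustrative only (not part of the requirements): a minimal sketch of the kind of DBU-monitoring query referenced under Observability & Cost Engineering, assuming the databricks-sdk Python package, an existing SQL warehouse (the ID below is a placeholder), and the documented columns of the system.billing.usage table.

    # Minimal sketch: daily DBU consumption per workspace over the last 30 days,
    # read from the system.billing.usage table through a SQL warehouse.
    # Assumes the databricks-sdk package and default authentication;
    # "<sql-warehouse-id>" is a placeholder, not a real value.
    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()

    USAGE_QUERY = """
        SELECT usage_date,
               workspace_id,
               sku_name,
               SUM(usage_quantity) AS dbus
        FROM system.billing.usage
        WHERE usage_date >= date_sub(current_date(), 30)
        GROUP BY usage_date, workspace_id, sku_name
        ORDER BY usage_date, dbus DESC
    """

    resp = w.statement_execution.execute_statement(
        warehouse_id="<sql-warehouse-id>",
        statement=USAGE_QUERY,
        wait_timeout="30s",
    )

    if resp.result and resp.result.data_array:
        for row in resp.result.data_array:
            print(row)

A real pipeline would feed these rows into a dashboard or alerting rule rather than printing them.
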