Data Platform | AI / ML | GCP | Direct Hiring
The company you will work for
Enhance situational awareness and decision-making across land, sea, and air through advanced data processing and analysis.
Your New Mission
- Architect and implement scalable infrastructure for data and machine learning workloads, leveraging modern cloud-native and on-premises distributed systems.
- Build and maintain robust, high-performance data pipelines to ingest, process, and store large-scale datasets from diverse sources.
- Administer and fine-tune databases and data warehouses to ensure high availability, data integrity, and optimal performance.
- Consolidate data from various origins (APIs, internal systems, and third-party providers) into unified, analysis-ready datasets.
- Develop tools and interfaces that empower business users and analysts to independently access insights and interact with ML models.
- Contribute to the evolution and scaling of the Data & AI Platform, supporting new use cases and capabilities.
- Uphold data governance standards, ensuring compliance with privacy regulations and implementing robust security practices.
- Establish validation and monitoring mechanisms to maintain data accuracy, consistency, and reliability across systems.
- Partner with data scientists, analysts, and engineering teams to understand platform needs and deliver tailored infrastructure solutions.
- Implement observability tools and performance tuning strategies to ensure system reliability, scalability, and cost-efficiency.
- Maintain clear, comprehensive documentation of data workflows, APIs, schemas, and ML systems to support onboarding and collaboration.
What you need to succeed
- Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field (Mandatory).
- 5+ years of experience in data & ML platform engineering or a similar role.
- Proficiency in programming languages such as Python, Go, Java, or Rust.
- Strong knowledge of containerization technologies (e.g., Docker) and orchestration systems (e.g., Kubernetes) in production environments.
- Experience with SQL, database management systems (e.g., MySQL, PostgreSQL, SQL Server), data modeling, and schema design.
- Experience with cloud platforms and their data services, with a focus on Google Cloud.
- Familiarity with big data technologies (e.g., Spark), data warehousing solutions (e.g., Redshift, Snowflake), data pipeline orchestration (e.g., Airflow), and database systems (SQL and NoSQL).
- Experience designing and managing MLOps pipelines to support model training, deployment, and monitoring at scale.
- Experience implementing observability stacks and logging pipelines (e.g., Prometheus, Grafana, Loki, ELK).
- Experience with IaC frameworks (e.g., Ansible, Terraform).
- Understanding of DevOps best practices and tools: Git, CI/CD, telemetry and monitoring, etc.
What the company can offer you
- Meal Allowance
- Health Insurance
- Remote Work Model
Next Steps
If you are interested in this opportunity, please send us your updated CV. If you are looking for a different kind of professional challenge, feel free to contact us to discuss other career opportunities, always in complete confidentiality.