Estadio Santiago Bernabéu, Avenida de Concha Espina, Madrid, Spain
WHAT YOU CAN EXPECT
Develop scalable and efficient cloud-based data integration architectures using Databricks and Azure Data Factory.
Help optimize data storage and transformation using Azure Data Lake Storage Gen2 (ADLS Gen2).
Write clean and efficient code in Python and PySpark to process and analyze datasets.
Collaborate with cross-functional teams to gather data requirements and implement solutions that align with business goals.
Monitor and troubleshoot cloud-based data platforms and pipelines, contributing to performance improvements.
Ensure adherence to data quality, governance, and security standards across data solutions.
REQUIREMENTS OF THE POSITION
Master’s degree in IT, Physics, or Mathematics, with a minimum of 2–3 years of relevant work experience.
Experience working with Databricks and Azure Data Factory for building data pipelines and workflows, as well as with Azure Data Lake Storage Gen2 (ADLS Gen2) for data management and storage optimization.
Proficiency in programming with Python and PySpark for data engineering tasks.
Familiarity with Azure cloud services and related tools is advantageous.
Strong analytical and problem-solving skills with a focus on performance and scalability.
Exposure to CI/CD pipelines for cloud data solutions is a plus.
WHAT WE OFFER
A secure work environment, because your health, safety, and wellbeing are always our top priority.
A flexible work schedule and home-office options, so that you can balance your working life and private life.
Learning and development opportunities.
23 holiday days per year.
5 additional days (readjustment).
2 cultural days.
A collaborative, trusting, and innovative work environment.
Being part of an international team and working on global projects.