At TÜV Austria Belgium, we are a leading provider of Testing, Inspection, and Certification (TIC) services and solutions. As part of the rapidly growing TÜV Austria Group, founded in Vienna in 1872, we operate globally with over 3,400 employees in 34 countries. Our commitment to professionalism, adaptability, integrity, effortlessness, and innovation ensures we deliver exceptional value to our customers.
TÜV Austria Belgium is seeking a highly skilled and motivated Senior Data Engineer to join our team. In this role, you will design and implement robust data pipelines, ensure data quality and governance, and enable advanced analytics and AI solutions through scalable data architectures. If you are passionate about building reliable data ecosystems and leveraging modern cloud technologies, we invite you to apply.
Key Responsibilities
Data Architecture & Pipelines: Design, develop, and maintain scalable data pipelines and ETL processes to ingest, transform, and deliver structured and unstructured data.
Data Governance & Quality: Implement governance frameworks, data catalogs, and quality assurance processes to ensure compliance, integrity, and security of all data assets.
Data Integration & Storage: Manage data lakes, warehouses, and streaming platforms; optimize storage solutions for performance and cost-efficiency.
Enable AI & Analytics: Prepare and curate datasets for machine learning and advanced analytics projects, ensuring accessibility and reliability for data scientists and AI engineers.
Performance Optimization: Monitor and optimize data workflows for speed, scalability, and resilience in cloud and on-prem environments.
Collaboration: Work closely with cross-functional teams (data scientists, software engineers, project managers) to deliver end-to-end data solutions.
Continuous Improvement: Stay updated on emerging technologies in data engineering, big data, and cloud platforms; drive adoption of best practices.
Client Interaction: Support clients in understanding data requirements, present technical solutions, and provide expertise on data-driven strategies.
Your Profile
Education: Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.
Experience: 3–5 years in data engineering, big data, or data platform development.
Technical Skills:
Strong proficiency in Python and SQL; experience with distributed systems (Spark, Databricks).
Cloud Platforms:
Microsoft Azure: Azure Data Factory, Azure Data Lake Storage, Microsoft Fabric, Azure Cosmos DB.
Other Clouds: AWS (Glue, RDS, Athena, Redshift, S3), GCP (BigQuery, Dataflow).
Data Orchestration & Workflow: Airflow, Prefect, Dagster, ADF.
Data Transformation: dbt.
Data Warehousing & Modeling: Snowflake, DuckDB, BigQuery.
Streaming & Real-Time Processing: Kafka, Azure Event Hubs.
Visualization & BI: Power BI, Tableau.
Familiarity with CI/CD pipelines (Azure DevOps, GitHub Actions) and containerization (Docker, Kubernetes).
Infrastructure as Code: Terraform.
Analytical Thinking: Ability to design efficient data workflows and troubleshoot complex data issues.
Communication: Clear and concise communication skills for technical and non-technical stakeholders.
Ethical Standards: Commitment to data privacy, security, and compliance with relevant regulations.
Bonus: Experience with MLOps and DataOps.
Fluency in both written and spoken Dutch and English is required.
What We Offer
Opportunity to work on cutting-edge data and AI projects with diverse clients.
Collaborative and supportive work environment.
Continuous learning and professional development opportunities.
Competitive salary and benefits package.
Flexible working hours and work-life balance initiatives.