Responsibilities:
- Design, build, and maintain data pipelines and ETL processes
- Work with large-scale data in cloud environments (AWS, Databricks)
- Optimize data processing and storage solutions
- Collaborate with data scientists and analysts to deliver data solutions
- Ensure data quality, reliability, and performance
Requirements:
- 4+ years of experience in data engineering
- Strong experience with Python and SQL
- Hands-on experience with AWS and Databricks
- Experience building and maintaining data pipelines
- Understanding of data modeling and ETL processes
- English level: B2+
Tech stack:
AWS, Databricks, SQL, Python, Spark