Join our team as a Junior Data Engineer and put your skills into practice on impactful projects. You will build and optimize large-scale data pipelines using powerful technologies such as Hadoop, Apache Spark, and Kafka.
This is an excellent opportunity to develop your expertise in a collaborative and forward-thinking environment.
Responsibilities
- Develop and maintain robust data processing pipelines using Hadoop, Apache Spark, and Kafka
- Collaborate with international teams to design and deliver scalable IT solutions
- Integrate new data processing solutions into established delivery pipelines
- Participate in code reviews and contribute to our high development standards
- Continuously improve your skills through dedicated mentorship and technical workshops
Requirements
- At least 1 year of experience in software development with a focus on data engineering
- Proficiency in Java or Python for data processing tasks
- Hands-on experience with core technologies, including Hadoop, Apache Spark, and Kafka
- Understanding of data processing principles and software engineering fundamentals
- Familiarity with the concepts and tools of CI/CD pipelines
- English language skills at the B1 level or higher to collaborate effectively with our international teams
Key skills
- Python
- Apache Spark
- Hadoop
- Apache Kafka
- English: B2 (Upper-Intermediate)
Location
Baku
Vacancy published on 5 January 2026 in Baku