Posted: 1 week ago
Work from Office
Full Time
Role & responsibilities:
- Design and implement scalable, high-performance batch and real-time data pipelines using Apache Spark, Kafka, Java, and SQL
- Build and maintain ETL/ELT frameworks handling structured, semi-structured, and unstructured data
- Work on streaming data solutions using Spark Structured Streaming and Kafka
- Develop and optimize data models; implement data warehousing solutions on AWS / Azure / GCP
- Automate and orchestrate workflows using Apache Airflow, dbt, or equivalent tools
- Collaborate with cross-functional teams (Data Science, Product, Engineering)
- Monitor, troubleshoot, and ensure the reliability of data systems
- Follow best practices in data governance, security, and cloud cost optimization

Preferred candidate profile:
- 3 to 8 years of hands-on experience in Data Engineering / Big Data development
- Strong expertise in Apache Spark, Kafka, Java (production-grade experience), and advanced SQL; Python/Scala optional but a plus
- Experience with cloud platforms (AWS / Azure / GCP)
- Familiarity with Git, CI/CD pipelines, and modern DataOps practices

Good to have:
- Experience with NoSQL databases (MongoDB, Cassandra, DynamoDB)
- Exposure to Docker and Kubernetes
- Domain experience in Banking / FinTech / Financial Services
Location: Kochi, Bengaluru
Salary: INR 12.0 - 22.0 Lacs P.A.