Posted: 4 days ago | Hybrid | Full Time
Role & responsibilities
- Data Pipeline Development: Design, build, and maintain scalable ETL/ELT data pipelines using PySpark and Python. Process structured and unstructured data from various sources (e.g., APIs, files, databases).
- Data Integration: Integrate data from internal and external sources into data lakes or warehouses (e.g., AWS S3, Azure Data Lake, or Hadoop HDFS).
- Database Management: Write optimized SQL queries for data extraction, transformation, and aggregation. Ensure data quality and integrity during ingestion and processing.

Preferred candidate profile
- 8-12 years of hands-on experience in Data Engineering or ETL development
- Strong proficiency in Python, PySpark, and SQL
- Experience building and optimizing ETL pipelines for large-scale data
- Excellent problem-solving and communication skills
- Ability to work independently and in a team-oriented environment
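The extract/transform/load flow described above can be sketched in miniature. This is an illustrative example only, not part of the role: plain Python and SQLite stand in for PySpark and a warehouse, and all table and field names are hypothetical. It shows the three stages the posting mentions: ingesting raw records, enforcing data quality during transformation, and loading plus aggregating with SQL.

```python
import sqlite3

# Extract: raw records as they might arrive from an API or file dump.
# (All names and values here are made up for illustration.)
raw_records = [
    {"order_id": 1, "region": "APAC", "amount": "120.50"},
    {"order_id": 2, "region": "EMEA", "amount": "80.00"},
    {"order_id": 3, "region": "APAC", "amount": None},  # bad row
]

# Transform: enforce data quality -- drop rows missing an amount
# and cast the remaining amounts from strings to floats.
clean = [
    {**r, "amount": float(r["amount"])}
    for r in raw_records
    if r["amount"] is not None
]

# Load: write into a warehouse-style table, then aggregate with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (:order_id, :region, :amount)", clean)
totals = dict(conn.execute("SELECT region, SUM(amount) FROM orders GROUP BY region"))
print(totals)
```

In a PySpark pipeline the same shape holds, with the list comprehension replaced by DataFrame transformations and the SQLite table by a data lake or warehouse sink.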
Salary: INR 10.0 - 20.0 Lacs P.A.
Locations: Mumbai, Hyderabad, Bengaluru