Big Data Engineer

6 - 11 years

15.0 - 25.0 Lacs P.A.

Pune, Bengaluru, Hyderabad

Posted: 2 months ago | Platform: Naukri


Skills Required

Scala, Hadoop, Big Data Engineering, Spark, AWS

Work Mode

Work from Office

Job Type

Full Time

Job Description

Responsibilities:

- Design, build, and maintain scalable ETL/ELT pipelines using AWS Glue, Lambda, Step Functions, and Apache Spark.
- Develop and optimize data lake and data warehouse architectures using AWS S3, Redshift, and Athena.
- Implement data ingestion, transformation, and storage solutions using AWS Glue, Kinesis, and Kafka.
- Work with structured and unstructured data to develop efficient data pipelines.
- Optimize query performance in SQL-based and NoSQL databases (Redshift, DynamoDB, Aurora, PostgreSQL, etc.).
- Ensure data security, compliance, and governance using IAM roles, KMS encryption, and AWS Lake Formation.
- Automate data workflows using Terraform, CloudFormation, or CDK.
- Monitor and troubleshoot data pipelines, ensuring high availability and performance.
- Collaborate with Data Scientists, Analysts, and Engineers to provide robust data solutions.

Required Skills:

- Strong experience with AWS Data Services (Glue, Redshift, S3, Lambda, Kinesis, Athena).
- Proficiency in Python, SQL, and Scala.
- Experience with ETL/ELT pipeline design and data warehousing concepts.
- Strong knowledge of relational and NoSQL databases (PostgreSQL, MySQL, DynamoDB).
- Familiarity with DevOps practices and CI/CD tools (Jenkins, GitHub Actions).
- Experience with Kafka or Kinesis for real-time data streaming.
- Knowledge of Terraform, CloudFormation, or CDK for infrastructure automation.
- Hands-on experience with data security and governance.
- Strong analytical and problem-solving skills.
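For candidates unfamiliar with the extract-transform-load pattern named in the responsibilities above, here is a minimal sketch in plain Python (one of the listing's required languages). The CSV source, field names, and JSON-lines output are purely illustrative stand-ins for real Glue/Kinesis sources and a Redshift target, not part of this role's actual stack:

```python
import csv
import io
import json

# Hypothetical raw event data, standing in for an S3 or Kinesis source.
RAW_CSV = """user_id,event,amount
1,purchase,19.99
2,refund,-5.00
3,purchase,42.50
"""

def extract(source: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: keep purchase events only and cast fields to typed values."""
    return [
        {"user_id": int(r["user_id"]), "amount": float(r["amount"])}
        for r in rows
        if r["event"] == "purchase"
    ]

def load(rows: list[dict]) -> str:
    """Load: serialize to JSON lines, standing in for a warehouse write."""
    return "\n".join(json.dumps(r) for r in rows)

pipeline_output = load(transform(extract(RAW_CSV)))
print(pipeline_output)
```

In a production pipeline each stage would be a separate, monitored step (e.g. a Glue job or Lambda per stage, orchestrated by Step Functions), but the extract/transform/load separation shown here is the same.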

