AWS Data Engineer - Python/PySpark

Experience: 3 years

Salary: Not disclosed

Location: Hyderabad, Telangana, India

Posted: 1 week ago | Platform: LinkedIn


Skills Required

AWS, Data, Python, PySpark, Design, ETL, Redshift, Integration, Pipeline, Security, Governance, SQL, Architecture, DevOps, Analytics, Azure, GCP, Apache Airflow, Kafka

Work Mode

On-site

Job Type

Full Time

Job Description

Key Responsibilities

- Design, develop, and maintain scalable data pipelines and architectures using AWS services.
- Implement ETL/ELT processes using AWS Glue, Lambda, and Step Functions (see the illustrative sketches after the lists below).
- Work with structured and unstructured data across S3, Redshift, and other AWS data services.
- Develop data integration workflows to collect, process, and store data efficiently.
- Optimize the performance and cost of data pipelines.
- Monitor and troubleshoot data pipeline failures using CloudWatch and related tools.
- Collaborate with data analysts, data scientists, and other stakeholders to ensure data availability and quality.
- Apply best practices for the security and governance of data assets on AWS.

Required Skills

- 3+ years of experience in Python, SQL, and PySpark.
- 2+ years of experience with AWS services such as AWS Glue, AWS Lambda, Amazon S3, Amazon EC2, Amazon Redshift, and CloudWatch.
- Experience in building and maintaining ETL pipelines.
- Knowledge of data lake and data warehouse architecture.
- Familiarity with DevOps tools and CI/CD pipelines is a plus.
- Good understanding of data governance and security best practices on AWS.

Preferred Qualifications

- AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect.
- Experience with other cloud platforms (Azure, GCP) is a plus.
- Exposure to tools such as Apache Airflow, Kafka, or Snowflake is an added advantage.

(ref:hirist.tech)
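To make the pipeline work described above more concrete, here is a minimal PySpark ETL sketch of the kind this role implies: read raw JSON from S3, deduplicate and type the data, and write partitioned Parquet back to the lake for downstream Redshift or Athena use. The bucket paths, column names, and app name are hypothetical placeholders, not details from the posting.

# Illustrative PySpark ETL job; all names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw event data landed in the data lake
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: drop duplicates, cast types, derive a date partition column
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet for downstream Redshift/Athena consumption
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))

And for the CloudWatch monitoring responsibility, a failure-reporting hook might look like the following boto3 snippet, which emits a custom metric that a CloudWatch alarm can trigger on. The namespace and metric names are assumptions for illustration only.

# Illustrative failure-monitoring hook; namespace/metric names are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch")

def report_failure(pipeline_name: str) -> None:
    """Emit a custom metric so a CloudWatch alarm can flag the failed pipeline."""
    cloudwatch.put_metric_data(
        Namespace="DataPipelines",
        MetricData=[{
            "MetricName": "PipelineFailure",
            "Dimensions": [{"Name": "Pipeline", "Value": pipeline_name}],
            "Value": 1.0,
            "Unit": "Count",
        }],
    )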

Deqode
