Posted: 3 days ago
On-site
Full Time
Job Summary

As an AWS Data Engineer at Deqode, you'll be responsible for designing and building scalable data pipelines and ETL workflows using AWS Glue, Redshift, and other cloud data services. Your role will involve working with large datasets, automating data processes with Python/PySpark, and ensuring data quality and performance.

Responsibilities:
- Develop, optimize, and maintain ETL pipelines using AWS Glue and PySpark
- Design and implement robust data workflows and processing systems
- Work with large structured and semi-structured datasets to extract insights and support data-driven decision making
- Optimize Amazon Redshift queries and manage data storage on AWS
- Collaborate with data analysts, data scientists, and other stakeholders to meet data requirements
- Automate data workflows using scripts and schedule processes for seamless operations
- Ensure data quality, consistency, and system performance while troubleshooting production issues

Skills & Qualifications:
- 6 to 8 years of experience as a Data Engineer
- Strong hands-on experience with Python or PySpark
- Proficiency in AWS Glue, Redshift, S3, and other AWS services
- Solid expertise in writing optimized SQL queries and managing large datasets
- Experience in building and maintaining scalable ETL processes
- Excellent problem-solving skills and effective communication

Preferred Skills:
- Familiarity with workflow orchestration tools like Airflow or AWS Step Functions
- Experience with CI/CD processes for data pipelines
- Knowledge of data governance and security best practices on AWS

(ref:hirist.tech)
Chennai, Tamil Nadu, India