Data Analytics Engineer

3 - 6 years

7.0 - 14.0 Lacs P.A.

Gurgaon

Posted: 2 months ago | Platform: Naukri


Skills Required

Redshift DB, SQL queries, Qlik Replicate, Data Lakes using S3, Qlik Compose, Airflow, PySpark, Tableau, Hive, Apache Parquet, Spark, AWS, Python

Work Mode

Hybrid

Job Type

Full Time

Job Description

Job Title: Data Analytics Engineer
Experience: 3 to 6 years
Location: Gurgaon (Hybrid)
Employment Type: Full-time

We are seeking a highly skilled Data Analytics Engineer with expertise in Qlik Replicate, Qlik Compose, and data warehousing to build and maintain robust data pipelines. The ideal candidate will have hands-on experience with Change Data Capture (CDC) pipelines from various sources, an understanding of Bronze, Silver, and Gold data layers, SQL querying for data warehouses such as Amazon Redshift, and experience with Data Lakes built on S3. A foundational understanding of Apache Parquet and Python is also desirable.

Key Responsibilities:

1. Data Pipeline Development & Maintenance
- Design, develop, and maintain ETL/ELT pipelines using Qlik Replicate and Qlik Compose.
- Ensure seamless data replication and transformation across multiple systems.
- Implement and optimize CDC-based data pipelines from various source systems.

2. Data Layering & Warehouse Management
- Implement Bronze, Silver, and Gold layer architectures to optimize data workflows (a PySpark sketch of a Bronze-to-Silver job follows this description).
- Design and manage data pipelines for structured and unstructured data.
- Ensure data integrity and quality within Redshift and other analytical data stores.

3. Database Management & SQL Development
- Write, optimize, and troubleshoot complex SQL queries for data warehouses such as Redshift (see the Redshift rollup sketch below).
- Design and implement data models that support business intelligence and analytics use cases.

4. Data Lakes & Storage Optimization
- Work with AWS S3-based Data Lakes to store and manage large-scale datasets.
- Optimize data ingestion and retrieval using Apache Parquet.

5. Data Integration & Automation
- Integrate diverse data sources into a centralized analytics platform.
- Automate workflows to improve efficiency and reduce manual effort (see the Airflow sketch below).
- Leverage Python for scripting, automation, and data manipulation where necessary.

6. Performance Optimization & Monitoring
- Monitor data pipelines for failures and implement recovery strategies.
- Optimize data flows for better performance, scalability, and cost-effectiveness.
- Troubleshoot and resolve ETL and data replication issues proactively.

Technical Expertise Required:
- 3 to 6 years of experience in data engineering, ETL development, or related roles.
- Hands-on experience with Qlik Replicate and Qlik Compose for data integration.
- Strong SQL expertise, with experience writing and optimizing queries for Redshift.
- Experience working with Bronze, Silver, and Gold layer architectures.
- Knowledge of Change Data Capture (CDC) pipelines from multiple sources.
- Experience working with AWS S3 Data Lakes.
- Experience working with Apache Parquet for data storage optimization.
- Basic understanding of Python for automation and data processing.
- Experience in cloud-based data architectures (AWS, Azure, GCP) is a plus.
- Strong analytical and problem-solving skills.
- Ability to work in a fast-paced, agile environment.

Preferred Qualifications:
- Experience in performance tuning and cost optimization in Redshift.
- Familiarity with big data technologies such as Spark or Hadoop.
- Understanding of data governance and security best practices.
- Exposure to data visualization tools such as Qlik Sense, Tableau, or Power BI.
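Illustrative Sketches

To give a concrete flavor of the Bronze/Silver/Gold responsibility, here is a minimal PySpark sketch of promoting raw CDC landings from a Bronze S3 path to a deduplicated Silver table. Every name in it (the example-lake bucket, the orders table, the order_id, cdc_timestamp, order_amount, and order_date columns) is a hypothetical placeholder, and production CDC handling (deletes, schema drift) would need more care.

```python
# A minimal Bronze -> Silver promotion sketch in PySpark. All bucket paths,
# table names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("bronze_to_silver_orders").getOrCreate()

# Bronze: raw CDC landings written as Parquet in the S3 data lake.
bronze = spark.read.parquet("s3://example-lake/bronze/orders/")

# Keep only the latest CDC image per primary key.
latest_first = Window.partitionBy("order_id").orderBy(F.col("cdc_timestamp").desc())
silver = (
    bronze
    .withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .drop("rn")
    # Basic quality gate: drop rows missing the business key or the amount.
    .filter(F.col("order_id").isNotNull() & F.col("order_amount").isNotNull())
)

# Silver: cleaned, deduplicated Parquet, partitioned by date for cheap scans.
silver.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-lake/silver/orders/"
)
```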
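For the Redshift side of the role, a hedged sketch of rebuilding a Gold-layer rollup from Python. Redshift speaks the PostgreSQL wire protocol, so psycopg2 can serve as the client; the cluster endpoint, credentials, schemas, and tables are all made up for illustration.

```python
# A sketch of rebuilding a Gold-layer daily rollup in Redshift from Python.
# The endpoint, credentials, schemas, and tables below are placeholders.
import os
import psycopg2

REBUILD_GOLD = """
DROP TABLE IF EXISTS gold.daily_sales;
CREATE TABLE gold.daily_sales
    DISTKEY (order_date)
    SORTKEY (order_date)
AS
SELECT order_date,
       COUNT(*)          AS order_count,
       SUM(order_amount) AS total_amount
FROM   silver.orders
GROUP  BY order_date;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password=os.environ["REDSHIFT_PASSWORD"],
)
try:
    # `with conn` wraps both statements in a single committed transaction.
    with conn, conn.cursor() as cur:
        cur.execute(REBUILD_GOLD)
finally:
    conn.close()
```

Distributing and sorting on order_date keeps date-range scans cheap, which is a typical starting point for the kind of Redshift performance and cost tuning the preferred qualifications mention.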
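Finally, for the automation and monitoring responsibilities, a minimal Airflow DAG sketch that chains the two steps above and uses retries as a simple recovery strategy. The DAG id, schedule, and callables are placeholders, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# A minimal Airflow DAG sketch: promote Bronze -> Silver, then rebuild Gold.
# DAG id, schedule, and callables are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def promote_bronze_to_silver():
    """Placeholder: submit the PySpark job sketched earlier."""


def rebuild_gold_daily_sales():
    """Placeholder: run the Redshift rollup sketched earlier."""


with DAG(
    dag_id="orders_medallion_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
    default_args={
        "retries": 2,                          # retry failed tasks twice...
        "retry_delay": timedelta(minutes=10),  # ...after a 10-minute wait
    },
) as dag:
    silver = PythonOperator(task_id="bronze_to_silver",
                            python_callable=promote_bronze_to_silver)
    gold = PythonOperator(task_id="silver_to_gold",
                          python_callable=rebuild_gold_daily_sales)
    silver >> gold  # Gold only rebuilds after Silver succeeds.
```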
