Azure Data Engineer - Immediate joiners

Experience: 3 - 5 years

Salary: 6.0 - 10.0 Lacs P.A.

Location: Chennai

Posted: 3 weeks ago | Platform: Naukri

Skills Required

Azure Data Factory, Azure Databricks, ETL, Azure Data Lake

Work Mode

Work from Office

Job Type

Full Time

Job Description

Basic Qualifications

Minimum Experience: 2-4 years of experience in designing, implementing, and supporting Data Warehousing and Business Intelligence solutions.
Educational Qualification: A bachelor's degree or equivalent in Computer Science, Engineering, Information Systems, or a related field.
Certifications: Relevant certifications such as Microsoft Azure Data Engineer, Azure Fundamentals, or equivalent will be a plus.

Technical Skills

Data Engineering & Warehousing: Designing, implementing, and supporting Data Warehousing solutions; experience with hybrid cloud deployments and integration between on-premises and cloud environments.
Tools & Technologies: Azure Data Factory (ADF), Azure Synapse Analytics, Azure Data Lake, Azure SQL, Databricks.
ETL & Data Pipelines: Experience in creating and maintaining data pipelines using Azure Data Factory, PySpark notebooks, Spark SQL, and Python.
Data Transformation & Integration: Implementing ETL processes to extract, transform, and load data from diverse sources into data warehousing solutions.
Spark: Knowledge of Spark Core internals, Spark SQL, Structured Streaming, and Delta Lake.
Data Security & Compliance: Familiarity with data privacy regulations and with ensuring security in cloud-based data operations.
Data Analytics: Conceptual understanding of dimensional modeling, ETL processes, and reporting tools; experience with structured and unstructured data types.

Roles and Responsibilities

Data Pipeline Design & Implementation: Design and implement scalable, efficient data pipelines for data ingestion, transformation, and loading.
ETL Process Management: Build and maintain ETL processes to ensure smooth data extraction, transformation, and loading.
Troubleshooting & Issue Resolution: Provide deep code-level analysis of Spark and related technologies to resolve complex customer issues, particularly with Spark internals, Spark SQL, Structured Streaming, and Delta Lake.
Performance Monitoring & Optimization: Continuously monitor and fine-tune data pipelines and workflows to improve efficiency and performance, especially for large-scale datasets.
Cloud Integration: Manage hybrid cloud deployments, integrating on-premises systems with cloud environments.
Security & Compliance: Ensure data security and compliance with data privacy regulations during all data engineering activities.
Collaboration: Work closely with business stakeholders to understand requirements and ensure solutions align with business needs and objectives.
Best Practices & Documentation: Follow data engineering best practices such as code modularity and version control, and maintain clear documentation for developed solutions.

IT Services and IT Consulting
Pleasanton, California
