Data Engineer

5.0 - 10.0 years

12.0 - 22.0 Lacs P.A.

Hyderabad, Bengaluru

Posted: 1 week ago | Platform: Naukri


Skills Required

Airflow, PySpark, Data Engineering, Snowflake, ERwin, Hadoop, Databricks, SQL, Hive, Scala, Spark, Data Modeling, ETL, AWS, Python

Work Mode

Work from Office

Job Type

Full Time

Job Description

Experience Range: 4 - 12+ years
Work Location: Bangalore (preferred)
Must-Have Skills: Airflow, BigQuery, Hadoop, PySpark, Spark/Scala, Python, Spark-SQL, Snowflake, ETL, Data Modelling, Erwin or ER Studio, Stored Procedures & Functions, AWS, Azure Databricks, Azure Data Factory
No. of Openings: 10+

We have multiple roles with our clients:
Role 1: Data Engineer
Role 2: Support Data Engineer
Role 3: ETL Support Engineer
Role 4: Senior Data Modeler
Role 5: Data Engineer - Databricks

Please find below the JD for each role.

Role 1: Data Engineer
5+ years of experience in data engineering or a related role.
Proficiency in Apache Airflow for workflow scheduling and management (an illustrative DAG sketch follows this role).
Strong experience with Hadoop ecosystems, including HDFS, MapReduce, and Hive.
Expertise in Apache Spark/Scala for large-scale data processing.
Proficient in Python.
Advanced SQL skills for data analysis and reporting.
Experience with cloud platforms (e.g., AWS, Google Cloud, Azure) is a plus.
Designs, proposes, builds, and maintains databases and data lakes, data pipelines that transform and model data, and reporting and analytics solutions.
Understands business problems and processes based on direct conversations with customers, can see the big picture, and translates that into specific solutions.
Identifies issues early, tactfully raises concerns, and proposes solutions.
Participates in code peer reviews.
Clearly articulates the pros/cons of various tools/approaches.
Documents and diagrams proposed solutions.
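
For orientation only, here is a minimal sketch of the kind of Airflow DAG this role would build and schedule; the DAG id, schedule, and task callable are hypothetical placeholders, not part of the job description.

```python
# Minimal sketch of an Airflow DAG for daily workflow scheduling; the DAG id,
# schedule, and task callable are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_orders():
    """Placeholder for a Spark/PySpark transformation step."""
    print("transforming the daily orders partition")


with DAG(
    dag_id="daily_orders_etl",                 # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    transform = PythonOperator(
        task_id="transform_orders",
        python_callable=transform_orders,
    )
```
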
Role 2: Support Data Engineer
Prioritize and resolve Business-As-Usual (BAU) support queries within agreed Service Level Agreements (SLAs) while ensuring application stability.
Drive engineering delivery to reduce technical debt across the production environment, collaborating with development and infrastructure teams.
Perform technical analysis of the production platform to identify and address performance and resiliency issues.
Participate in the Software Development Lifecycle (SDLC) to improve production standards and controls.
Build and maintain the support knowledge database, updating the application runbook with known tasks and managing event monitoring.
Create health-check monitors, dashboards, synthetic transactions, and alerts to increase monitoring and observability of systems at scale.
Participate in the on-call rotation supporting application release validation, alert response, and incident management.
Collaborate with development, product, and customer success teams to identify and resolve technical problems.
Research and implement recommendations from post-mortem analyses for continuous improvement.
Document issue details and solutions in our ticketing systems (JIRA and ServiceNow).
Assist in creating and maintaining technical documentation, runbooks, and knowledge base articles.
Navigate a complex system, requiring deep troubleshooting/debugging skills and an ability to manage multiple contexts efficiently.
Oversee the collection, storage, and maintenance of production data, ensuring its accuracy and availability for analysis.
Monitor data pipelines and production systems to ensure smooth operation and quickly address any issues that arise.
Implement and maintain data quality standards, conducting regular checks to ensure data integrity.
Identify and resolve technical issues related to data processing and production systems.
Work closely with data engineers, analysts, and other stakeholders to optimize data workflows and improve production efficiency.
Contribute to continuous improvement initiatives by analyzing data to identify areas for process optimization.

Role 3: ETL Support Engineer
6+ years of experience with ETL support and development.
ETL tools: experience with popular ETL tools like Talend and Microsoft SSIS.
Experience with relational databases (e.g., SQL Server, Postgres).
Experience with the Snowflake data warehouse.
Proficiency in writing complex SQL queries for data validation, comparison, and manipulation (an illustrative reconciliation sketch follows this role).
Familiarity with version control systems like Git/GitHub to manage changes in test cases and scripts.
Knowledge of defect tracking tools like JIRA and ServiceNow.
Banking domain experience is a must.
Understanding of the ETL process.
Perform functional, integration, and regression testing for ETL processes.
Validate and ensure data quality and consistency across different data sources and targets.
Develop and execute test cases for ETL workflows and data pipelines.
Load testing: ensure that the data warehouse can handle the volume of data being loaded and queried under normal and peak conditions.
Scalability: test the scalability of the data warehouse in terms of data growth and system performance.
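
As a rough illustration of the data validation work in Role 3, here is a minimal source-to-target reconciliation sketch, written with PySpark for illustration; the table names and key column are hypothetical, and the same checks are often expressed directly in warehouse SQL.

```python
# Minimal sketch of a source-to-target reconciliation check; the table names
# and key column below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl_validation").getOrCreate()

source = spark.table("staging.orders")          # hypothetical source table
target = spark.table("warehouse.fact_orders")   # hypothetical target table

# 1. Row-count reconciliation between source and target.
src_count, tgt_count = source.count(), target.count()

# 2. Keys loaded into the source but missing from the target.
missing_keys = source.select("order_id").subtract(target.select("order_id"))

print(f"source rows={src_count}, target rows={tgt_count}, "
      f"missing keys={missing_keys.count()}")
```
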
Role 4: Senior Data Modeler
7+ years of experience in metadata management, data modelling, and related tools (Erwin, ER Studio, or others).
10+ years of overall experience in IT.
Hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional data platform technologies, and ETL and data ingestion).
Experience with data warehouse, data lake, and enterprise big data platforms in multi-data-center contexts is required.
Strong communication and presentation skills.
Help the team implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional) and data tools (reporting, visualization, analytics, and machine learning).
Work with business and application/solution teams to implement data strategies and develop the conceptual/logical/physical data models.
Define and govern data modelling and design standards, tools, best practices, and related development for enterprise data models.
Hands-on modelling and mappings between source system data models and data warehouse data models.
Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks with respect to modelling and mappings.
Hands-on experience in writing complex SQL queries.
Experience in data modelling for NoSQL objects is good to have.

Role 5: Data Engineer - Databricks
Design and build data pipelines using Spark-SQL and PySpark in Azure Databricks (an illustrative pipeline sketch follows this role).
Design and build ETL pipelines using ADF.
Build and maintain a Lakehouse architecture in ADLS / Databricks.
Perform data preparation tasks including data cleaning, normalization, deduplication, type conversion, etc.
Work with the DevOps team to deploy solutions in production environments.
Control data processes and take corrective action when errors are identified; corrective action may include executing a workaround process and then identifying the cause and solution for the data errors.
Participate as a full member of the global Analytics team, providing solutions for and insights into data-related items.
Collaborate with your Data Science and Business Intelligence colleagues across the world to share key learnings, leverage ideas and solutions, and propagate best practices.
You will lead projects that include other team members and participate in projects led by other team members.
Apply change management tools including training, communication, and documentation to manage upgrades, changes, and data migrations.
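
For orientation only, here is a minimal PySpark/Spark-SQL sketch of a Lakehouse pipeline step of the kind Role 5 describes; the ADLS path, table names, and column names are hypothetical placeholders.

```python
# Minimal sketch of a Lakehouse pipeline step with PySpark and Spark-SQL;
# the ADLS path, table names, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lakehouse_pipeline").getOrCreate()

# Ingest raw files landed in ADLS (placeholder path).
raw = spark.read.json("abfss://raw@storageaccount.dfs.core.windows.net/orders/")

# Data preparation: deduplicate, convert types, drop incomplete rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .na.drop(subset=["order_id", "order_ts"])
)

# Persist a curated Delta table in the Lakehouse.
clean.write.format("delta").mode("overwrite").saveAsTable("curated.orders")

# Query the curated layer with Spark-SQL.
spark.sql(
    "SELECT date(order_ts) AS order_date, count(*) AS orders "
    "FROM curated.orders GROUP BY date(order_ts)"
).show()
```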

Magallenic Cloud