7 - 12 years

12.0 - 22.0 Lacs P.A.

Bengaluru

Posted: 2 months ago | Platform: Naukri

Skills Required

Scala, Big Data, Spark

Work Mode

Hybrid

Job Type

Full Time

Job Description

About the Team/Project

This role sits within the HR Data team under the Human & Other Resources (H2R) unit of the CFT. The HR Big Data services centre drives the transformation by building an agile, data-centric IT architecture for businesses such as HR, Sourcing, and Real Estate.

Roles & Responsibilities

- In-depth understanding of the Big Data ecosystem and its data flows.
- Strong articulation and presentation skills, both verbal and written.
- Coding and configuration expertise in the following processes is required:
  - Ingestion: setting up new ingestion pipelines using NiFi and Kafka.
  - Backend data processes: implementing backend processes using Spark/Scala (at least 4 major backend process implementations desired).
  - Extraction functionality: developing and managing the extraction function, with good knowledge of Hive Query Language to troubleshoot issues caused by improper queries.
  - Data handling: managing data storage and data manipulation (data processing using the Spark framework).
  - Quarterly upgrades: impact analysis, fixes, and testing of new features in the Big Data platform.
  - Coding standards: hands-on individual responsible for producing high-quality code that adheres to expected coding standards and industry best practices.

Good to Have Skills

- Expertise in ELK and Kibana.
- Expertise in Java APIs.
- Expertise in Ansible and AWX (YAML scripting).
