Posted: 2 months ago | Hybrid | Full Time
Responsibilities

About the Team/Project
This role is within the HR Data team under the Human & Other Resources (H2R) unit of the CFT. The HR Big Data services center contributes to the transformation by building an agile, data-centric IT architecture for businesses such as HR, Sourcing, and Real Estate.

Roles & Responsibilities
- In-depth understanding of the Big Data ecosystem and its data flows.
- Strong verbal and written articulation and presentation skills.
- The candidate must have coding and configuration expertise in the following processes:
  - Ingestion: setting up new ingestion pipelines using NiFi and Kafka.
  - Backend data processes: implementation of backend processes using Spark/Scala (at least 4 major backend process implementations desired).
  - Extraction functionality: develop and manage the extraction function; good knowledge of Hive Query Language to troubleshoot process issues caused by improper queries.
  - Data handling: knowledge of data storage and data manipulation (data processing using the Spark framework).
  - Quarterly upgrades: impact analysis, fixes, and testing of new features in the Big Data platform.
  - Coding standards: hands-on individual responsible for producing high-quality code that adheres to expected coding standards and industry best practices.

Good to Have Skills
- Expertise in ELK and Kibana.
- Expertise in Java APIs.
- Expertise in Ansible and AWX (YAML scripting).
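As a rough illustration of the Spark/Scala backend work the role describes, the sketch below shows the kind of group-and-count transformation such a process might perform. The record shape and field names are hypothetical (not from the posting), and plain Scala collections stand in for a Spark DataFrame so the example runs without a cluster; on Spark the equivalent would be `df.groupBy("dept").count()`.

```scala
// Hypothetical HR record; the fields are illustrative assumptions only.
case class HrRecord(id: Int, dept: String)

object ExtractionSketch {
  // Count records per department, mirroring a Spark groupBy/count.
  def countByDept(records: Seq[HrRecord]): Map[String, Int] =
    records.groupBy(_.dept).map { case (dept, rs) => dept -> rs.size }

  def main(args: Array[String]): Unit = {
    val sample = Seq(
      HrRecord(1, "HR"),
      HrRecord(2, "Sourcing"),
      HrRecord(3, "HR")
    )
    println(countByDept(sample))
  }
}
```

In a real pipeline this logic would run inside a Spark job reading from the ingested Kafka/NiFi data rather than an in-memory sequence.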