Not specified
INR 20.0 - 25.0 Lacs P.A.
Work from Office
Full Time
Title - Platform Engineer
Location - Bangalore
Exp. Range - 4 to 6 years
Shift Timing - 7 AM to 4 PM

Must Have Skills
- Candidate will help in building automated CI/CD pipelines.
- Should know: AWS services (CloudFormation, S3, CloudWatch, IAM, EC2, ECR, Lambda, API Gateway, EMR), Docker, GitHub Actions, Terraform, AWS CLI.

Good to Have - Python, EKS, Kubernetes, Amazon SageMaker, Artifactory

Roles and Responsibilities
- The Ops engineer partners closely with Engineering and Support. We are responsible for the deployment and continuous operation of the client platform.
- You will make sure we automate as many tasks as possible to make diagnostics, scaling, healing, and deployments a breeze.
- You will work on a team responsible for a blend of architecture, automation, development, and application administration.
- You will develop and deploy solutions from the infrastructure to the network and application layers, on public cloud platforms.
- You will ensure our SaaS platform is available and performing, and that we notice problems before our customers do.
- You will collaborate with Support and Engineering on customer issues, as needed.
- Work with distributed data infrastructure, including containerization and virtualization tools, to enable unified engineering and production environments.
- Develop dashboards, monitors, and alerts to increase situational awareness of production issues, SLAs, and security incidents (a monitoring sketch follows this listing).
- Independently conceive and implement ways to improve development efficiency, code reliability, and test fidelity.
- You will participate in a periodic on-call rotation.
- Work closely with the on-prem and cloud infrastructure, networking, and development teams to ensure timely deliverables.
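To illustrate the kind of alerting automation this role describes, here is a minimal sketch using boto3 (assuming AWS credentials are configured; the alarm name, API name, and SNS topic are hypothetical placeholders, not values from the listing):

import boto3

# Hypothetical sketch: alarm on elevated 5xx rates for an API Gateway API
# so on-call is paged before customers are impacted. Names are illustrative.
cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="platform-api-5xx-high",  # hypothetical alarm name
    Namespace="AWS/ApiGateway",
    MetricName="5XXError",
    Dimensions=[{"Name": "ApiName", "Value": "client-platform-api"}],  # hypothetical API
    Statistic="Sum",
    Period=60,                           # evaluate per minute
    EvaluationPeriods=5,                 # 5 consecutive breaching minutes
    Threshold=10,                        # more than 10 errors per minute
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:oncall-topic"],  # hypothetical SNS topic
)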
Not specified
INR 25.0 - 35.0 Lacs P.A.
Hybrid
Full Time
Roles & Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Build data pipelines that transform raw, unstructured data into formats that data analysts can use for analysis.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and delivery of data from a wide variety of data sources using SQL and AWS Big Data technologies.
- Work with stakeholders including the Executive, Product, and program teams to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using HQL and Big Data technologies.
- Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for key stakeholders and the business processes that depend on it.
- Write unit/integration tests, contribute to the engineering wiki, and document work.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

Who You Are:
- You're passionate about data and building efficient data pipelines.
- You have excellent listening skills and are empathetic to others.
- You believe in simple and elegant solutions and give paramount importance to quality.
- You have a track record of building fast, reliable, and high-quality data pipelines.
- You have a good understanding of data and a focus on having fun while delivering incredible business results.

Must have skills:
- A Data Engineer with 5+ years of relevant experience who is excited to apply their current skills and to grow their knowledge base.
- A degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
- Experience with Airflow (a minimal DAG sketch follows this listing).
- Experience in AWS/Spark/Python development.
- Experience in Git, JIRA, Jenkins, and shell scripting.
- Familiarity with Agile methodology, test-driven development, source control management, and automated testing.
- Build processes supporting data transformation, data structures, metadata, dependencies, and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.

Nice to have skills:
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience with Snowflake.
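To illustrate the workflow-management experience this listing asks for, here is a minimal Airflow 2.x DAG sketch (the DAG name, task logic, and schedule are hypothetical placeholders, not anything specified by the employer):

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Hypothetical: pull raw records from a source system into staging.
    print("extracting raw data")


def transform():
    # Hypothetical: clean and reshape staged data for analysts.
    print("transforming staged data")


with DAG(
    dag_id="daily_etl",              # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # `schedule` requires Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task   # run transform only after extract succeeds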
Not specified
INR 20.0 - 35.0 Lacs P.A.
Hybrid
Full Time
Data Engineer
Primary Skills: Scala, Spark & SQL
Secondary Skill: Java

Must Have:
- Tech-savvy engineer, willing and able to learn new skills and track industry trends.
- 8+ years of total experience, with solid data engineering experience, especially in open-source, data-intensive, distributed environments, and a minimum of 2 years of experience in Big Data technologies like Spark, Hive, HBase, Scala, etc.
- Programming background, preferably Scala / Python.
- Experience in Java (must have), Scala, and Spark / PySpark.
- Experience in migration of data to the AWS cloud (see the sketch after this listing).
- Experience in SQL and NoSQL databases.
- Optional: Model the data set from Teradata to the cloud.
- Experience in building ETL pipelines.
- Experience in building data pipelines in AWS (S3, EC2, EMR, Athena, Redshift).
- Self-starter and resourceful personality with the ability to manage pressure situations.
- Exposure to Scrum and Agile development best practices.
- Experience working with geographically distributed teams.

Role & Responsibilities:
- Build data and ETL pipelines in AWS.
- Support migration of data to the cloud using Big Data technologies like Spark, Hive, Talend, and Python.
- Interact with customers on a daily basis to ensure smooth engagement.
- Responsible for timely and quality deliveries.
- Fulfill organizational responsibilities: share knowledge and experience with other groups in the organization, conducting various technical sessions and training.
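A minimal PySpark sketch of the kind of ETL/migration work described above (the bucket names, columns, and cleansing rules are hypothetical illustrations, not the employer's pipeline):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3_migration_etl").getOrCreate()

# Hypothetical paths: read raw extracts landed in S3, clean them, and write
# partitioned Parquet back for downstream Athena/Redshift consumption.
raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])                        # hypothetical business key
       .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalize timestamps
       .withColumn("order_date", F.to_date("order_ts"))     # derive a partition column
       .filter(F.col("amount").isNotNull())                 # drop incomplete rows
)

cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/orders/"                           # hypothetical target
)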
Not specified
INR 15.0 - 30.0 Lacs P.A.
Hybrid
Full Time
Roles & Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Build data pipelines that transform raw, unstructured data into formats that data analysts can use for analysis.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and delivery of data from a wide variety of data sources using SQL and AWS Big Data technologies.
- Work with stakeholders including the Executive, Product, and program teams to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using HQL and Big Data technologies.
- Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for key stakeholders and the business processes that depend on it.
- Write unit/integration tests, contribute to the engineering wiki, and document work.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

Who You Are:
- You're passionate about data and building efficient data pipelines.
- You have excellent listening skills and are empathetic to others.
- You believe in simple and elegant solutions and give paramount importance to quality.
- You have a track record of building fast, reliable, and high-quality data pipelines.
- You have a good understanding of data and a focus on having fun while delivering incredible business results.

Must have skills:
- A Data Engineer with 5+ years of relevant experience who is excited to apply their current skills and to grow their knowledge base.
- A degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
- Experience with Airflow.
- Experience in AWS/Spark/Python development.
- Experience in Git, JIRA, Jenkins, and shell scripting.
- Familiarity with Agile methodology, test-driven development, source control management, and automated testing.
- Build processes supporting data transformation, data structures, metadata, dependencies, and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.

Nice to have skills:
- Experience with stream-processing systems: Storm, Spark Streaming, etc. (a Structured Streaming sketch follows this listing).
- Experience with Snowflake.
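Since stream processing is called out as a plus, here is a minimal Spark Structured Streaming sketch reading from Kafka (assuming the spark-sql-kafka connector is on the classpath; the broker address, topic, and S3 paths are hypothetical placeholders):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream_ingest").getOrCreate()

# Hypothetical Kafka source: consume click events and append them to S3.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "click-events")               # hypothetical topic
    .load()
)

# Kafka delivers bytes; cast the payload to a string for downstream parsing.
parsed = events.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://stream-bucket/clicks/")          # hypothetical sink
    .option("checkpointLocation", "s3://stream-bucket/_chk/")  # required for recovery
    .start()
)
query.awaitTermination()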
Not specified
INR 20.0 - 35.0 Lacs P.A.
Work from Office
Full Time
Greetings from Clairvoyant, an EXL Company.
We are hiring Lead Data Engineers at our organization; kindly go through the job description below.

Job Description:
- 8+ years of data management experience.
- Technically strong in Python, PySpark, Hadoop, and Hive.
- Experienced in data integration projects for large clients using the above tech stacks.
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders.
- Replicate and optimize SQL Server stored procedures in Snowflake, ensuring functional accuracy and performance in the cloud environment.
- Implement and manage automated workflows using the Prefect scheduler to streamline ETL/ELT processes, ensuring reliability, monitoring, and alerting for critical workflows (see the sketch after this listing).
- Design and maintain CI/CD pipelines using tools such as Jenkins or GitLab, automating testing, deployment, and version control.
- Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver impactful solutions, simplifying technical concepts for non-technical stakeholders.
- Use version control systems like GitLab to manage collaborative code development, branching, and versioning while fostering best practices in code management.
- Continuously monitor, troubleshoot, and optimize data pipelines for performance, scalability, and compliance with SLAs.
- Develop reconciliation frameworks and implement automated migration validation processes to ensure data consistency and accuracy during cloud migrations.
- Maintain comprehensive documentation for data pipelines, architectures, and processes to enable effective knowledge sharing.
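A minimal sketch of a Prefect workflow of the kind this role mentions, assuming the Prefect 2.x API; the flow name, task bodies, and retry settings are hypothetical illustrations:

from prefect import flow, task


@task(retries=3, retry_delay_seconds=60)  # retry transient source failures
def extract_orders() -> list[dict]:
    # Hypothetical: pull rows from a source system.
    return [{"order_id": 1, "amount": 100.0}]


@task
def load_to_snowflake(rows: list[dict]) -> None:
    # Hypothetical: write the rows into a Snowflake staging table.
    print(f"loading {len(rows)} rows")


@flow(name="orders-elt")  # hypothetical flow name
def orders_elt():
    rows = extract_orders()
    load_to_snowflake(rows)


if __name__ == "__main__":
    orders_elt()  # scheduling and alerting would be configured in Prefect itself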
Not specified
INR 10.0 - 14.0 Lacs P.A.
Work from Office
Full Time
Location: Pune / Bangalore
Exp: 5 to 10 Years

Principal Accountabilities and Responsibilities
- Responsible for requirements gathering, documentation of solution design, documenting and executing test scenarios, and performing a variety of change and implementation management activities.
- May work across multiple projects and programmes simultaneously.
- May be required to support change management activities spanning from early change planning and audience analysis, through designing and delivering change interventions (e.g., communications, training, support, organisation alignment), to tracking and taking action on change readiness, adoption, and feedback.
- Ultimately responsible for implementation management, including planning, controlling, and reporting on implementation of the change product, focusing on accelerating benefits and minimising risk during deployment. Implementation activities also include managing implementation readiness and the early stages of implementation (e.g., pilot).
- Business Analysts will often play a people/team management role within the projects/programmes they work on. As members of the Global Transformation management team, they will also have line or assignment management responsibility for a group of more junior resources within their resource pool (as related to their job family).
- Work with internal clients to facilitate effective data analysis/migration and process change and ensure expectations are effectively managed. A good understanding of the control requirements surrounding data handling will be advantageous in this role.
- Assess operational risks as part of analysis, implementation planning, and execution, in conjunction with delivery managers.
- Adhere strictly to compliance and operational risk controls in accordance with HSBC and regulatory standards, policies, and practices; report concerns or observations in terms of control weaknesses, compliance breaches, and operational risk impact.
- Ensure all due diligence is performed to prevent adverse impact to customers and business operations.
- Support documentation of risks, issues, and dependencies in the RAID log for allocated projects, and ensure that these are fed into the programme/PMO effectively.

Functional Knowledge
- Strong Business Analyst with Financial Services experience.
- Knowledge of one or more of the following domains (including market data vendors):
  - Party/Client
  - Trade
  - Settlements
  - Payments
  - Instrument and pricing
  - Market and/or Credit Risk
- Endorse team engagement initiatives, fostering an environment which encourages learning and collaboration to build a sense of community.
- Create environments where only the best will do and high standards are expected, regularly achieved, and appropriately rewarded; encourage and support continual improvement within the team based on ongoing feedback.
- Develop a network of professional relationships across the department and our stakeholders to improve collaborative working and encourage openness, sharing ideas, information, and collateral.
- Encourage individuals to network and collaborate with colleagues beyond their own business areas and/or the Group to shape change and benefit the business and its customers.
- Must demonstrate strong business knowledge and sound business sense, and stay abreast of the industry, business-wise and technology-wise.
- Stakeholder complexity: Business Analysts will often need to gather requirements and agree designs across stakeholders, dealing with different interests, resolving disagreements and conflicts, and sometimes challenging poor requirements and design decisions.
- Solid working experience in the data analytics field.
- Demonstrated understanding of technology trends, methodologies, and tools for data analytics.
- Experienced in technical product development and delivery using various technologies, and able to explain how this was achieved and which technologies were used.
- Understands and is able to apply project management principles and portfolio management.
- Communicates effectively with all levels of stakeholders; team management and conflict management skills.
- Experienced in presenting to senior stakeholders in both business and technology.
- Experienced on agile projects and understands the application of agile.
- Understanding of how DevOps works and how to utilise it in the agile process.
- Good SQL and SQL programming knowledge as a Data Business Analyst.
- Knowledge of ETL processes, GCP, and Hadoop is a must.
- Knowledge of Python, Jupyter Notebook / Spyder / Pandas / NumPy / PySpark / Scala is preferred (a small Pandas sketch follows this listing).

Domain: Credit Lending
- Python or PySpark, SQL.
- Knowledge of credit risk frameworks such as Basel II, III, IFRS 9, and stress testing, and an understanding of their drivers, is advantageous.
- Retail Credit / Traded Credit knowledge: applications will be considered.
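As a small illustration of the data-analysis skills listed above, a minimal Pandas sketch (the file name, columns, and aggregation are hypothetical, loosely inspired by the credit-lending domain the listing names):

import pandas as pd

# Hypothetical extract: loan-level data with exposure and default-flag columns.
loans = pd.read_csv("loan_book.csv")  # hypothetical file

# Basic data-quality profiling a BA might run before deeper analysis:
# share of missing values per column, worst offenders first.
print(loans.isna().mean().sort_values(ascending=False))

# Hypothetical aggregation: total exposure and observed default rate by product.
summary = loans.groupby("product").agg(
    total_exposure=("exposure", "sum"),
    default_rate=("defaulted", "mean"),
)
print(summary)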
Not specified
INR 18.0 - 20.0 Lacs P.A.
Work from Office
Full Time
Position: Sr. AWS Data Engineer
Location: Pune / Gurgaon / Hyderabad / Hybrid
Experience: 6+ years

Responsibilities:
- Develop and manage CI/CD deployment of AWS CloudFormation YAML templates for stack creation and resource provisioning (a deployment sketch follows this listing).
- Work extensively with AWS CodePipeline, Bitbucket, and Jenkins pipelines to automate AWS CloudFormation stack creation for various AWS data engineering services.
- Design, build, and maintain AWS Glue-based ETL pipelines, leveraging Athena and S3 for data processing.
- Utilize AWS Lambda for specific use cases in data workflows.
- Manage and configure AWS Lake Formation for secure data lakes.
- Implement IAM policies and roles to ensure security and access control.
- Monitor and troubleshoot data workflows using AWS CloudWatch.
- Work with Autosys jobs to trigger shell scripts, Python code, and AWS Glue jobs.
- Develop and optimize Spark-based ETL workflows for large-scale data processing.
- Collaborate with teams to optimize SQL queries and ensure efficient data processing.

Required Skills:
- Strong experience with AWS services: AWS Glue, S3, IAM, Athena, AWS CloudFormation, AWS CodePipeline, AWS Lambda, Transfer Family, AWS Lake Formation, and CloudWatch.
- Hands-on experience with CI/CD pipelines using Bitbucket, Jenkins, and AWS CodePipeline.
- Expertise in Python, SQL, YAML, and Spark for ETL development and automation.
- Basic understanding of Git commands and Jenkins pipelines.
- Experience working with Autosys jobs for scheduling and execution.

Good to Have Skills:
- Familiarity with AWS Redshift, DynamoDB, KMS, VPC, and EC2.
- Knowledge of AWS cost optimization practices.

Note: This role focuses on CI/CD automation of AWS CloudFormation stacks, Spark-based ETL development, and security best practices in AWS. If you have expertise in these areas, we look forward to your application!
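As a rough sketch of automating CloudFormation stack creation, here is a minimal boto3 example such as a pipeline step might run (the stack name, template URL, and parameter are hypothetical placeholders):

import boto3

# Hypothetical sketch: create a CloudFormation stack from a template in S3.
cfn = boto3.client("cloudformation", region_name="ap-south-1")

cfn.create_stack(
    StackName="glue-etl-stack",  # hypothetical stack name
    TemplateURL="https://s3.amazonaws.com/templates-bucket/glue_etl.yaml",  # hypothetical template
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "qa"},  # hypothetical parameter
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # required when the template creates IAM roles
)

# Block until the stack finishes creating, then report.
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="glue-etl-stack")
print("stack created")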
Not specified
INR 18.0 - 20.0 Lacs P.A.
Work from Office
Full Time
Position: AWS Data Engineer
Location: Pune / Gurgaon / Hyderabad / Hybrid
Experience: 4+ years

Responsibilities:
- Develop, optimize, and maintain ETL pipelines using PySpark, Python, and SQL, leveraging AWS Glue, S3, Athena, and EMR/Spark for large-scale data processing.
- Build and implement CI/CD pipelines for the EDP platform using Jenkins, and track version control using Git.
- Manage and configure AWS environments for test, QA, and UAT, ensuring efficient deployments and infrastructure management.
- Utilize AWS Step Functions and Lambda to automate and orchestrate data workflows (a Lambda sketch follows this listing).
- Implement IAM policies for secure access management and governance.
- Monitor and troubleshoot data workflows using AWS CloudWatch and CloudTrail to ensure smooth operations.
- Work with AWS Redshift to optimize data processing and storage solutions.
- Develop shell scripts to automate and integrate various AWS services for data engineering use cases.
- Leverage Power BI for data visualization and reporting, integrating insights with AWS data solutions.

Required Skills:
- PySpark, Python, SQL: expertise in developing scalable data pipelines and transformations.
- AWS services: strong experience with S3, Athena, Glue, EMR/Spark, Redshift, Lambda, Step Functions, IAM, CloudWatch, and CloudTrail.
- CI/CD and version control: hands-on experience with Jenkins and Git for automated deployments and version tracking.
- Infrastructure management: experience in setting up test, QA, and UAT environments for AWS-based data platforms.
- Power BI: strong proficiency in data visualization, dashboards, and reporting.
- Shell scripting: ability to automate AWS workflows and enhance pipeline efficiency.

Good to Have:
- Experience in AWS cost optimization and best practices.
- Strong understanding of data governance and security policies in AWS.
- Exposure to performance tuning for Redshift and other AWS data services.

Note: This role focuses on AWS-based ETL development, CI/CD automation, and data pipeline orchestration. If you have expertise in these areas, we look forward to your application!
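A minimal sketch of the Lambda-driven orchestration this listing describes: a handler that starts a Glue job via boto3, for example as one state in a Step Functions workflow (the Glue job name and argument are hypothetical placeholders):

import boto3

glue = boto3.client("glue")


def handler(event, context):
    """Hypothetical Lambda sketch: kick off a Glue ETL job. The job name
    and the run_date argument are illustrative, not from the listing."""
    response = glue.start_job_run(
        JobName="curate-orders-job",                           # hypothetical Glue job
        Arguments={"--run_date": event.get("run_date", "")},   # pass through a parameter
    )
    # Return the run id so a later workflow state can poll job status.
    return {"JobRunId": response["JobRunId"]}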
Not specified
INR 20.0 - 35.0 Lacs P.A.
Hybrid
Full Time
Role: Lead Data Engineer
Experience: 8-12 years

Must-Have:
- 8+ years of relevant experience in data engineering and delivery.
- 8+ years of relevant work experience in Big Data concepts.
- Worked on cloud implementations.
- Experience in Snowflake, SQL, and AWS (Glue, EMR, S3, Aurora, RDS, AWS architecture); a Snowflake connection sketch follows this listing.
- Good experience with AWS cloud and microservices, AWS Glue, S3, Python, and PySpark.
- Good aptitude, strong problem-solving abilities, analytical skills, and the ability to take ownership as appropriate.
- Should be able to do coding, debugging, performance tuning, and deploying apps to the production environment.
- Experience working in Agile methodology.
- Ability to learn, and help the team learn, new technologies quickly.
- Excellent communication and coordination skills.

Good to have:
- Experience in DevOps tools (Jenkins, Git, etc.) and practices, continuous integration, and delivery (CI/CD) pipelines.
- Spark, Python, SQL (exposure to Snowflake), Big Data concepts, AWS Glue.
- Worked on cloud implementations (migration, development, etc.).

Role & Responsibilities:
- Be accountable for delivery of the project within the defined timelines with good quality.
- Work with clients and offshore leads to understand requirements, come up with high-level designs, and complete development and unit testing activities.
- Keep all stakeholders updated about task and project status, and about any risks and issues.
- Work closely with management wherever and whenever required to ensure smooth execution and delivery of the project.
- Guide the team technically and give the team direction on how to plan, design, implement, and deliver projects.
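A minimal sketch of querying Snowflake from Python using the snowflake-connector-python package (the account, credentials, warehouse, and query are hypothetical placeholders; credentials would come from a secrets manager in practice):

import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # hypothetical account identifier
    user="ETL_USER",                # hypothetical user
    password="...",                 # placeholder; never hard-code secrets
    warehouse="ANALYTICS_WH",       # hypothetical warehouse
    database="SALES",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Hypothetical aggregation over an orders table.
    cur.execute("SELECT order_date, SUM(amount) FROM orders GROUP BY order_date")
    for order_date, total in cur.fetchall():
        print(order_date, total)
finally:
    conn.close()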
Not specified
INR 15.0 - 22.5 Lacs P.A.
Hybrid
Full Time
Position - QA Engineer
Location - Pune
Experience - 5+ Years

Must Have Skills:
- Strong SQL skills (not just basic knowledge)
- Experience in automating ETL/data validation (a sketch follows this listing)
- Good understanding of data warehousing concepts
- Experience with Big Data is good to have
- Proficiency in Python is a plus
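A minimal sketch of automated ETL validation with pytest: two reconciliation checks over source and target tables. SQLite stands in for the warehouse here, and the table and column names are hypothetical; a real suite would connect through the warehouse's own driver.

import sqlite3

import pytest


@pytest.fixture
def db():
    # Hypothetical stand-in warehouse with a staging table and a loaded target.
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE staging_orders (order_id INTEGER, amount REAL);
        CREATE TABLE dw_orders (order_id INTEGER, amount REAL);
        INSERT INTO staging_orders VALUES (1, 10.0), (2, 20.0);
        INSERT INTO dw_orders VALUES (1, 10.0), (2, 20.0);
        """
    )
    yield conn
    conn.close()


def test_row_counts_match(db):
    # Reconciliation check: the load must not drop or duplicate rows.
    src = db.execute("SELECT COUNT(*) FROM staging_orders").fetchone()[0]
    tgt = db.execute("SELECT COUNT(*) FROM dw_orders").fetchone()[0]
    assert src == tgt


def test_no_null_keys(db):
    # Data-quality check: business keys must never be NULL in the target.
    nulls = db.execute(
        "SELECT COUNT(*) FROM dw_orders WHERE order_id IS NULL"
    ).fetchone()[0]
    assert nulls == 0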
Not specified
INR 20.0 - 27.5 Lacs P.A.
Hybrid
Full Time
Not specified
INR 7.0 - 16.0 Lacs P.A.
Hybrid
Full Time
Not specified
INR 20.0 - 25.0 Lacs P.A.
Hybrid
Full Time
Not specified
INR 25.0 - 30.0 Lacs P.A.
Hybrid
Full Time