Grid Dynamics is a digital engineering services company that specializes in providing advanced technology solutions for business transformation. They focus on cloud, data analytics, and AI-driven solutions.
Not specified
INR 37.5 - 45.0 Lacs P.A.
Hybrid
Full Time
Experience: 12-16 years
Location: Hyderabad / Bangalore

We are seeking a strong Technical Delivery Manager with solid technical expertise to be responsible for the quality of a highly scalable, distributed platform for one of the world's largest automobile companies. You will work as a key member of a collaborative team alongside a group of experts recognized around the world. This is an excellent opportunity to be a key contributor in creating the next generation of network-centric, distributed, and scalable architectures.

Job Description

Key Responsibilities:
- Project Leadership: Oversee the delivery of Java-based cloud solutions, ensuring alignment with client requirements, timelines, and quality standards.
- Team Management: Lead cross-functional teams, including developers, testers, and DevOps engineers, to ensure seamless project execution.
- Stakeholder Engagement: Act as the primary point of contact for clients, understanding their needs and providing regular updates on project status.
- Technical Oversight: Guide teams on best practices in Java development, microservices architecture, cloud deployment, and performance optimization.
- Resource Planning: Manage resource allocation, project budgets, and timelines to ensure efficient delivery.
- Risk Management: Identify and mitigate risks throughout the project lifecycle.
- Continuous Improvement: Drive innovation by adopting emerging technologies and refining delivery processes.

Experience:
- Minimum 15 years of experience in IT project delivery, with at least 3 years in a leadership role.
- Proven experience in Java development and cloud platforms such as AWS, Azure, or Google Cloud.

Technical Skills:
- Strong understanding of Java, the Spring Framework, and microservices architecture.
- Hands-on experience with cloud technologies (AWS, Azure, GCP) and CI/CD pipelines.
- Familiarity with containerization tools (Docker, Kubernetes).

Soft Skills:
- Excellent communication and leadership skills.
- Strong analytical and problem-solving abilities.
- Ability to manage multiple priorities and work under pressure.

What We Offer:
- Competitive salary and benefits package.
- Opportunities for professional growth and certification sponsorship.
- A collaborative and inclusive work environment.
- The chance to work on cutting-edge technologies and impactful projects.

Must Have:
- Agile Delivery & SDLC
- Project Planning & Resource Management
- Leadership & Team Building
- Communication & Interpersonal Skills
- Influencing and Negotiating
- Conflict Resolution
- Stakeholder Management
- Operations, Performance & Project Status Reporting
- Reasoning and Analytical Skills

About Us
Grid Dynamics (Nasdaq: GDYN) is a digital-native technology services provider that accelerates growth and bolsters competitive advantage for Fortune 1000 companies. Grid Dynamics provides digital transformation consulting and implementation services in omnichannel customer experience, big data analytics, search, artificial intelligence, cloud migration, and application modernization. Grid Dynamics achieves high speed-to-market, quality, and efficiency by using technology accelerators, an agile delivery culture, and its pool of global engineering talent. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the US, UK, Netherlands, Mexico, and Central and Eastern Europe. To learn more about Grid Dynamics, please visit www.griddynamics.com. Follow us on Facebook, Twitter, and LinkedIn.
Not specified
INR 16.0 - 18.0 Lacs P.A.
Work from Office
Full Time
Details on tech stack (must):
- Scala + Java or Python
- Spark
- SQL
- AWS basic knowledge preferred (or knowledge of any other cloud service)

Nice to have:
- Python and PySpark
- Airflow
- Snowflake
- Knowledge of Apache Iceberg, Flink, Druid
- Kafka
- MySQL, PostgreSQL, MongoDB, NoSQL, Cassandra, Hadoop
- Data lakes and data warehouses
- AWS (EKS, IAM, S3, SNS, SQS, MSK, etc.), Docker, Kubernetes
- Theoretical knowledge of Big Data concepts
- Commercial experience working with real clients (not only pet projects, R&D, or internships)
- Eagerness to invest time in education and learning new technologies

We offer:
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
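The Spark SQL skill this posting asks for boils down to expressing transformations as SQL over distributed tables. As a minimal, hedged sketch of that kind of aggregation (shown with the stdlib sqlite3 module so it runs anywhere; the table and data are invented, and Spark would execute the same query with `spark.sql(...)` over a DataFrame):

```python
import sqlite3

# Hypothetical sample data; the same GROUP BY semantics apply in Spark SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("APAC", 120.0), ("APAC", 80.0), ("EMEA", 200.0)],
)

# Group-by aggregation, the bread and butter of Spark SQL pipelines
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 200.0), ('EMEA', 200.0)]
```

In Spark the equivalent would typically be `df.groupBy("region").sum("amount")` or the same SQL string passed to `spark.sql`.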
Not specified
INR 35.0 - 40.0 Lacs P.A.
Work from Office
Full Time
Requirements:
- Proven experience architecting and implementing solutions.
- Knowledge of integration patterns.
- Data management: understanding of data integration techniques and technologies (e.g., ETL, data pipelines).
- Experience with cloud-native architectures and microservices.
- Excellent communication and collaboration skills.

Nice to have:
- Hands-on experience with VAIS:R.
- Familiarity with Java or other relevant programming languages.

Responsibilities:
- Solution Design: Lead the technical design (discovery phases) and architecture of Google integration solutions, ensuring alignment with business requirements and industry best practices.
- Technology Expertise: Provide deep technical expertise in Google Cloud Platform (GCP) services and APIs relevant to integrations, with a specific focus on Google Search and related technologies.
- Project Delivery: Contribute to successful project delivery by working closely with development teams, business analysts, and delivery managers.
Not specified
INR 25.0 - 27.5 Lacs P.A.
Hybrid
Full Time
Project Description
Our data engineering team is building and maintaining mission-critical data infrastructure to power Grid Dynamics' business intelligence and analytics capabilities. This involves developing robust data pipelines, implementing scalable data architectures, and establishing data governance frameworks to support data-driven decision making across the organization.

Details on Tech Stack:
- SQL (primary tool for data manipulation and analysis)
- Python for data processing and automation
- BigQuery, Cloud SQL
- Google Cloud Platform (Dataflow, Dataproc, Data Fusion)
- Data warehousing solutions
- ETL/ELT tools and frameworks

Minimum Requirements for the Candidate:
- 8-10 years of experience in data engineering or a related field
- Expert knowledge of SQL
- Mid-level knowledge of Python (expert preferred)
- Experience with cloud platforms, particularly Google Cloud
- Strong understanding of data modeling and warehouse concepts
- Experience with ETL pipeline design and implementation
- Experience with big data technologies (Hadoop, Spark)

Nice to Have Requirements for the Candidate:
- Knowledge of data governance and security practices
- Experience with real-time data processing
- Familiarity with BI tools and reporting platforms
- Strong background in performance optimization and tuning
- Advanced debugging and troubleshooting skills

Responsibilities:
- Design and implement efficient, scalable data pipelines
- Optimize data storage and retrieval processes
- Ensure data quality and consistency across systems
- Collaborate with cross-functional teams to understand data requirements
- Implement data security and compliance measures

Project Insights:
- Work with large-scale data in a global automotive company
- Opportunity to shape data engineering practices
- Exposure to modern cloud technologies and data solutions
- Collaborative environment with experienced professionals
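The pipeline and data-quality responsibilities in this posting can be illustrated with a minimal, library-free extract-transform-load sketch. The record layout, validation rule, and in-memory "warehouse" are invented for the example; a real pipeline would read from Cloud SQL and write to BigQuery:

```python
# Minimal ETL sketch with a data-quality gate. All names are hypothetical.

def extract():
    # Stand-in for reading from a source system (e.g. Cloud SQL)
    return [
        {"id": 1, "amount": "120.50"},
        {"id": 2, "amount": "not-a-number"},
        {"id": 3, "amount": "80.00"},
    ]

def transform(rows):
    clean, rejected = [], []
    for row in rows:
        try:
            clean.append({"id": row["id"], "amount": float(row["amount"])})
        except ValueError:
            rejected.append(row)  # quarantine bad records instead of failing
    return clean, rejected

def load(rows, sink):
    sink.extend(rows)  # stand-in for writing to a warehouse table

warehouse = []
clean, rejected = transform(extract())
load(clean, warehouse)
print(len(warehouse), len(rejected))  # 2 1
```

Quarantining rejects rather than aborting the run is one common way to keep a pipeline both robust and auditable.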
Not specified
INR 18.0 - 25.0 Lacs P.A.
Work from Office
Full Time
Greetings from Grid Dynamics!
We are looking for strong Java developers with cloud experience.

Mandatory Skills:
1) Java
2) Spring Boot
3) Microservices
4) AWS, Azure, or GCP
Not specified
INR 7.0 - 12.0 Lacs P.A.
Work from Office
Full Time
We are looking for a highly experienced Senior IAM Engineer to lead application integration efforts as part of our SailPoint IdentityIQ integration project. As a Senior IAM Engineer, you will serve as a subject matter expert and lead on integrating complex applications into SailPoint, collaborating closely with application owners, internal teams, and client stakeholders. This position requires deep experience in IAM and expertise with SailPoint IdentityIQ or similar platforms.

Essential functions:
- Lead requirements-gathering with application owners to identify integration needs and determine integration pathways for complex or critical applications.
- Define and document detailed integration specifications in design documents for each application.
- Troubleshoot and address roadblocks during the integration process, guiding both junior engineers and application owners in resolving challenges.
- Configure SailPoint IdentityIQ connectors for application integration in DEV and QA environments, and perform detailed validation of configurations.
- Lead code reviews, ensuring adherence to best practices and quality standards in collaboration with internal and client-side teams.
- Oversee demos to application owners and manage the sign-off process for integrations.
- Ensure thorough documentation is prepared for audit requirements and manage the handover of applications to the client's internal team for production deployment.
- Maintain the respective GXCapture records and Azure DevOps tickets for the assigned applications.
- Mentor junior engineers and contribute to continuous improvement efforts within the IAM team.

Qualifications:
- 4+ years of experience in Identity and Access Management (IAM), with in-depth expertise in application integration with SailPoint IdentityIQ using JDBC, LDAP, and API connectors.
- Extensive experience with connector configuration, integration processes, and the features and capabilities of the SailPoint IdentityIQ product.
- Deep knowledge of IAM concepts such as role-based access control (RBAC), provisioning, and access certification.
- Strong experience in gathering and analyzing technical and business requirements from diverse stakeholders, such as architects, project managers, and IT directors.
- Demonstrated ability to troubleshoot complex issues and resolve integration roadblocks in collaboration with stakeholders.
- Excellent communication skills, with the ability to lead client-facing discussions and manage stakeholder expectations.
- Experience with code reviews and peer-to-peer collaboration in a development or engineering environment.
- Agile project management experience and familiarity with working in cross-functional teams.
- Prior experience in a lead role within an IAM team, overseeing junior engineers or managing project timelines.
- Ability to work in a complex and uncertain customer environment, with customer service central to every activity performed. The candidate understands that they represent not only Grid Dynamics in front of the customer, but also the customer's SailPoint team in front of the application teams.

Would be a plus:
- Knowledge and experience integrating applications in SailPoint IdentityIQ with other connectors (SAP, Oracle, etc.).
- Familiarity with scripting languages (e.g., Java, PowerShell) for extending IAM solutions.
- Prior experience working in the finance industry, interfacing with senior stakeholders on the customer side.

We offer:
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
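The role-based access control (RBAC) concept called out in the qualifications can be sketched in a few lines. The roles, permissions, and users below are invented examples, not SailPoint's actual model; SailPoint IdentityIQ configures this through roles, entitlements, and policies rather than code like this:

```python
# Minimal RBAC sketch: permissions attach to roles, users hold roles.
# All role, permission, and user names are hypothetical.
ROLE_PERMISSIONS = {
    "auditor": {"read_reports"},
    "admin": {"read_reports", "provision_account", "certify_access"},
}
USER_ROLES = {"alice": {"admin"}, "bob": {"auditor"}}

def is_allowed(user: str, permission: str) -> bool:
    """A user is allowed an action if any of their roles grants it."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_allowed("alice", "provision_account"))  # True
print(is_allowed("bob", "provision_account"))    # False
```

The indirection through roles is the point: access reviews certify role membership once instead of auditing per-user permission grants.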
Not specified
INR 30.0 - 45.0 Lacs P.A.
Work from Office
Full Time
Details on tech stack:
- GCP Services: BigQuery, Cloud Dataflow, Pub/Sub, Dataproc, Cloud Storage
- Data Processing: Apache Beam (batch/stream), Apache Kafka, Cloud Dataprep
- Programming: Python, Java/Scala, SQL
- Orchestration: Apache Airflow (Cloud Composer), Terraform
- Security: IAM, Cloud Identity, Cloud Security Command Center
- Containerization: Docker, Kubernetes (GKE)
- Machine Learning: Google AI Platform, TensorFlow, AutoML
- Certifications: Google Cloud Data Engineer, Cloud Architect (preferred)
- Proven ability to design scalable and robust AI/ML systems in production, with a focus on high-performance and cost-effective solutions
- Strong experience with cloud platforms (Google Cloud, AWS, Azure) and cloud-native AI/ML services (e.g., Vertex AI, SageMaker)
- Expertise in implementing MLOps practices, including model deployment, monitoring, retraining, and version control
- Strong leadership skills with the ability to guide teams, mentor engineers, and collaborate with cross-functional teams to meet business objectives
- Deep understanding of frameworks like TensorFlow, PyTorch, and Scikit-learn for designing, training, and deploying models
- Experience with data engineering principles, scalable pipelines, and distributed systems (e.g., Apache Kafka, Spark, Kubernetes)

Nice to have:
- Strong leadership and mentorship capabilities, guiding teams toward best practices and high-quality deliverables
- Excellent problem-solving skills, with a focus on designing efficient, high-performance systems
- Effective project management abilities to handle multiple initiatives and ensure timely delivery
- Strong emphasis on collaboration and teamwork, fostering a positive and productive work environment
Not specified
INR 20.0 - 35.0 Lacs P.A.
Hybrid
Full Time
Job Description

Details on tech stack:
- Proficiency in Python.
- Competent knowledge of best practices for software development.
- Strong understanding of Data Science concepts such as supervised and unsupervised learning, feature engineering and ETL processes, classical DS model types and neural network types, hyperparameter tuning, and model evaluation and selection.
- Proficiency in using appropriate GCP services (or similar AWS or Azure services) for building end-to-end ML pipelines, e.g., Vertex AI, BigQuery, Dataflow, Cloud SQL, Dataproc, Cloud Functions, Google Kubernetes Engine.
- Competent knowledge of the MLOps paradigm and practices. Experience with MLOps tools (or appropriate cloud services), including model and data versioning and experiment tracking (e.g., DVC, MLflow, Weights & Biases) and pipeline orchestration (e.g., Apache Airflow, Kubeflow). Understanding of deployment strategies for different types of models and inference (batch/online).
- Knowledge of and experience with big data processing frameworks (e.g., Apache Spark, Apache Kafka, Apache Hadoop).
- Competent SQL skills and experience with databases like MySQL, Postgres, Redis.
- Experience developing and integrating RESTful APIs for ML model serving (e.g., Flask and FastAPI).
- Experience with containerization technologies like Docker and orchestration tools (e.g., Kubernetes).

Nice to have:
- Knowledge of monitoring and logging tools (e.g., Grafana, ELK Stack, or appropriate cloud services).
- Understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI) for automating the testing and deployment of machine learning models and applications.
- Experience with Cloud Identity and Access Management.
- Experience with Cloud Load Balancing.
- Knowledge of Infrastructure as Code (IaC) tools such as Terraform and Ansible.
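The "hyperparameter tuning, model evaluation and selection" item above can be illustrated with a deliberately tiny, stdlib-only sketch: pick the decision threshold that maximizes validation accuracy. The scores, labels, and candidate grid are invented; real work would use scikit-learn or a Vertex AI tuning job:

```python
# Toy model-selection sketch: grid search over a single hyperparameter
# (a decision threshold), scored by validation accuracy. Data is invented.
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9]   # model confidence per sample
labels = [0,   0,   1,    1,   1,    1]      # ground-truth classes

def accuracy(threshold: float) -> float:
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Grid search: evaluate each candidate and keep the best performer
grid = [0.2, 0.3, 0.5, 0.7]
best = max(grid, key=accuracy)
print(best, accuracy(best))
```

The same select-by-held-out-metric loop scales up to learning rates, tree depths, or architecture choices; only the evaluation cost changes.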
Not specified
INR 30.0 - 35.0 Lacs P.A.
Hybrid
Full Time
A UI Architect is responsible for designing and implementing user interfaces for digital products, ensuring they are visually appealing, user-friendly, and aligned with business objectives. This role involves a combination of technical expertise, design skills, and strategic thinking.

Key Responsibilities:
- Technical Leadership: Provide technical expertise throughout the project lifecycle, from concept development to solution design, implementation, optimization, and support. Act as a liaison between clients and development teams to translate business requirements into technical specifications.
- Architecture Design: Propose and implement scalable and resilient UI architectures that address customer business problems. Ensure that solutions are maintainable and can be built within timelines and budgets.
- Hands-On Development: Actively participate in coding and implementing UI components using modern frameworks (e.g., Angular, React) and technologies (e.g., JavaScript, HTML5, CSS3). Ensure adherence to best practices in coding standards and performance optimization.
- Performance Optimization: Drive performance tuning, redesign, and refactoring of UI components to enhance user experience. Investigate innovative approaches to improve software quality and stability.
- Mentorship: Mentor junior developers on technology concepts and best practices for design and implementation, and foster a collaborative team environment.
- Collaboration: Work closely with product management teams to align UI designs with user experience goals. Collaborate with UX designers to ensure the feasibility and effectiveness of UI designs.
- Documentation: Create comprehensive documentation for technical designs, processes, and architectural decisions. Ensure that all team members are aligned on project goals and technical requirements.

Experience: Typically requires 8+ years of experience in software development with a focus on UI architecture. Proven experience leading projects and teams is essential.

Technical Skills: Strong proficiency in front-end technologies such as Angular, React, JavaScript, TypeScript, HTML5, and CSS3, plus cloud and architecture experience.

Soft Skills: Excellent communication skills with the ability to articulate complex technical concepts to non-technical stakeholders.

This role is crucial for ensuring that digital products not only meet functional requirements but also provide an exceptional user experience through effective interface design.
Not specified
INR 12.0 - 22.0 Lacs P.A.
Hybrid
Full Time
Project Description:
Grid Dynamics aims to build an enterprise generative AI framework to deliver innovative, scalable, and efficient AI-driven solutions across business functions. Due to constant scaling of digital capabilities, the platform requires enhancements to incorporate cutting-edge generative AI features and meet emerging business demands. The platform should onboard brand-new capabilities such as:
- Similarity search (image, video, and voice)
- Ontology and entity management
- Voice and file management (text to speech and vice versa, metadata tagging, multimedia file support)
- Advanced RAG
- Multi-modal capabilities

Responsibilities:
As an LLMOps Engineer, you will be responsible for overseeing the complete lifecycle management of large language models (LLMs). This includes developing strategies for deployment, continuous integration and delivery (CI/CD) processes, performance tuning, and ensuring high availability of our LLM services. You will collaborate closely with data scientists, AI/ML engineers, and IT teams to define and align LLM operations with business goals, ensuring a seamless and efficient operating model.

In this role, you will:
- Define and disseminate LLMOps best practices.
- Evaluate and compare different LLMOps tools to incorporate the best practices.
- Stay updated on industry trends and advancements in LLM technologies and operational methodologies.
- Participate in architecture design/validation sessions for generative AI use cases with entities.
- Contribute to the development and expansion of GenAI use cases, including standard processes, frameworks, templates, libraries, and best practices around GenAI.
- Design, implement, and oversee the infrastructure required for the efficient operation of large language models in collaboration with client entities.
- Provide expertise and guidance to client entities in the development and scaling of GenAI use cases.
- Serve as the expert and representative on LLMOps practices, including: (1) developing and maintaining CI/CD pipelines for LLM deployment and updates; (2) monitoring LLM performance, identifying and resolving bottlenecks, and implementing optimizations; (3) ensuring the security of LLM operations through comprehensive risk assessments and the implementation of robust security measures.
- Collaborate with data and IT teams to facilitate data collection, preparation, and model training processes.

Required experience:
- Practical experience with training, tuning, and utilizing LLMs/SLMs.
- Strong experience with GenAI/LLM frameworks and techniques, such as guardrails, LangChain, etc.
- Knowledge of LLM security and observability principles.
- Experience using Azure cloud services for ML.

Minimum requirements:
- Programming languages: Python
- Public cloud: Azure
- Frameworks: K8s, Terraform, Arize or any other ML/LLM observability tool
- Experience: Experience with public services such as OpenAI, Anthropic, and similar; experience deploying open-source LLMs will be a plus
- Tools: LangSmith/LangChain, guardrails

Would be a plus:
- Knowledge of LLMOps best practices.
- Experience with monitoring/logging for production models (e.g., Prometheus, Grafana, ELK stack)

We offer:
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
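The performance-monitoring duty in this role (watching production LLM latency and flagging degradation) can be sketched with a small rolling-window check. The window size and threshold below are invented; production systems would feed such a signal into Prometheus/Grafana alerts rather than a Python class:

```python
from collections import deque

# Minimal sketch of production model monitoring: track recent inference
# latencies and flag when the rolling mean breaches a threshold.
# Window size and threshold values are hypothetical.
class LatencyMonitor:
    def __init__(self, window: int = 5, threshold_ms: float = 500.0):
        self.samples = deque(maxlen=window)  # keeps only the last N samples
        self.threshold_ms = threshold_ms

    def record(self, latency_ms: float) -> bool:
        """Record one observation; return True if the rolling mean is unhealthy."""
        self.samples.append(latency_ms)
        mean = sum(self.samples) / len(self.samples)
        return mean > self.threshold_ms

monitor = LatencyMonitor()
healthy_stream = [monitor.record(ms) for ms in [100, 200, 150]]
alert = monitor.record(2000)  # one slow call pushes the rolling mean over 500 ms
print(healthy_stream, alert)
```

Averaging over a window rather than alerting on single samples is a standard way to avoid paging on one-off spikes.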
Not specified
INR 15.0 - 25.0 Lacs P.A.
Hybrid
Full Time
Project Description:
Grid Dynamics wants to build a centralized, observable, and secure platform for their ML, computer vision, LLM, and SLM models. Grid Dynamics wants to onboard a vast number of AI agents covering multiple required skills, while ensuring a certain level of control and security over their usage and availability. The observability platform must be vendor-agnostic, easy to extend to multiple types of AI applications, and flexible in terms of technologies, frameworks, and data types.

This project is focused on establishing a centralized LLMOps capability where every ML-, CV-, or AI-enabled application is monitored, observed, and secured, and provides logs of every activity. The solution consists of key building blocks such as monitoring every step in a RAG, multimodal RAG, or agentic platform, tracking performance, and providing curated datasets for potential fine-tuning. In alignment with business scenarios, PepVigil also provides certain guardrails that allow or block user-to-agent, agent-to-agent, or agent-to-user interactions. Guardrails will also enable predefined workflows, aimed at giving more control over series of LLM chains.

Job Qualifications and Skill Sets:
- Advanced degree in Data Science, Computer Science, Statistics, or a related field
- Setting up an agent mesh (LangSmith)
- Setting up agent communication protocols (JSON/XML, etc.)
- Setting up message queues and CI/CD pipelines (Azure Queue Storage, Azure DevOps)
- Setting up integrations: LangGraph, LangFuse
- Knowledge of the Arize Phoenix observability tool
- Managing an agent registry; integrating with an agent auth framework such as Composio
- Setting up agent compute (Sandpack, E2B Dev, Assistant APIs)
- Integration with IAM (Azure IAM, Okta)
- Performing/configuring dynamic orchestration and agent permissions

Tech Stack Required:
- ML
- MLOps
- Agents (agent / agent mesh)
- LangFuse, LangChain, LangGraph
- Deployments (Docker, Jenkins, Kubernetes)
- Cloud platforms: Azure/AWS/GCP

We offer:
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
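The interaction guardrails described above (allowing or blocking user-to-agent and agent-to-agent calls) reduce, at their core, to a policy lookup before each interaction. A minimal sketch of that idea, with invented actor names and policy entries, and a default-deny stance; real platforms would express this through guardrail frameworks and IAM rather than a dict:

```python
# Minimal guardrail sketch: a policy table decides which interactions
# (user-to-agent, agent-to-agent, agent-to-user) may proceed.
# Actor names and policy entries are hypothetical examples.
POLICY = {
    ("user", "search_agent"): "allow",
    ("search_agent", "billing_agent"): "block",  # agents must not chain here
    ("billing_agent", "user"): "allow",
}

def check(source: str, target: str) -> bool:
    """Default-deny: only explicitly allowed interactions pass."""
    return POLICY.get((source, target)) == "allow"

print(check("user", "search_agent"))           # True
print(check("search_agent", "billing_agent"))  # False
print(check("user", "unknown_agent"))          # False (default deny)
```

Default-deny matters in an agent mesh: a newly registered agent gets no interaction rights until the policy explicitly grants them.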
Not specified
INR 25.0 - 32.5 Lacs P.A.
Hybrid
Full Time
Essential functions:
- Strong experience with AWS: at least 2 years of AWS experience and a deep understanding of AWS services, including but not limited to Lambda, S3, DynamoDB, Step Functions, and IAM.
- Microservices expertise: proven track record of designing and implementing microservices architectures with RESTful APIs.
- Workflow orchestration: hands-on experience with workflow tools such as Netflix Conductor, AWS Step Functions, or equivalent orchestration frameworks.
- Programming proficiency: strong back-end programming skills in Java.
- Database management: familiarity with relational and non-relational databases, including schema design and optimization.
- Problem solving: ability to troubleshoot complex issues, propose scalable solutions, and optimize workflows.

Qualifications:
Java 8-11, Spring Boot, AWS, microservices, REST APIs, workflow tools.

Would be a plus:
Hands-on experience with at least three of the following AWS services: Lambda, S3, DynamoDB, Step Functions, SQS, SNS, and IAM.
Not specified
INR 15.0 - 25.0 Lacs P.A.
Work from Office
Full Time
Job Description:
- SQL: mandatory
- Python / Scala: mandatory
- AWS / Azure / GCP: any one cloud mandatory
- Orchestration tool (Apache Airflow, Control-M, etc.)
- Streaming (Apache Kafka / Flink / Spark Streaming / Synapse, etc.)

Good experience in the following:
- Big data technologies
- ETL processes
- Data migration
- Data integration
- Data warehouse technologies
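Orchestration tools like Apache Airflow model a pipeline as a directed acyclic graph of tasks executed in dependency order. The core idea can be sketched with the stdlib `graphlib` module (Python 3.9+); the task names and dependencies below are invented, and Airflow would express the same structure with operators and a DAG object:

```python
from graphlib import TopologicalSorter

# Minimal sketch of what an orchestration tool does: compute a valid
# execution order from task dependencies. Maps each task to the set of
# tasks it depends on; names are hypothetical.
dependencies = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

run_order = list(TopologicalSorter(dependencies).static_order())
print(run_order)  # ['extract', 'transform', 'quality_check', 'load']
```

Schedulers add retries, backfills, and parallelism on top, but the topological ordering of a DAG is the foundation.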
Not specified
INR 20.0 - 35.0 Lacs P.A.
Work from Office
Full Time
AI/ML Engineer - LLM PII Detection
Department: Data Science / AI Engineering

Role Overview
We are seeking a skilled AI/ML Engineer to join our team focused on building and fine-tuning local LLM models for PII detection at scale (50+ PB of data). In this role, you will work closely with our AI Expert (Team Lead) to develop, deploy, and optimize state-of-the-art AI models that can detect Personally Identifiable Information (PII) across diverse data sources, including documents, databases, knowledge graphs, code, and more.

Key Responsibilities:
- Model Development: Assist in designing, training, and fine-tuning local LLM models for PII detection. Develop generative AI agents that utilize data schemas and metadata to enhance the PII detection process, reducing workload sizes with smart sampling.
- Framework Implementation: Utilize frameworks such as Ray, llama.cpp, Ollama, vLLM, and PyTorch to build and scale AI models. Implement and support distributed computing frameworks to process and analyze large-scale datasets efficiently.
- Deployment & Optimization: Support the deployment of AI/ML solutions on both Azure and on-premise infrastructures. Monitor model performance, troubleshoot issues, and iterate to ensure optimal performance with evolving data.
- Collaboration: Work collaboratively with cross-functional teams, including data engineering, security, and compliance, to align AI solutions with business needs. Participate in code reviews, share insights, and document best practices to maintain high standards across projects.
- Continuous Improvement: Engage in ongoing learning and development of new AI methodologies. Contribute to the enhancement of model robustness and scalability, ensuring our systems handle 50+ PB of data reliably.

Requirements:
- Technical Proficiency: Hands-on experience with Python and ML frameworks (especially PyTorch). Familiarity with distributed computing tools such as Ray/PySpark. Experience with LLM frameworks/tools such as llama.cpp, Ollama, and vLLM.
- Data Handling: Ability to work with very large datasets and implement scalable, distributed solutions. Prior exposure to AI model development and fine-tuning for specialized tasks.
- Deployment Expertise: Experience deploying AI/ML models on Azure cloud platforms and/or on-premise environments.
- Communication & Collaboration: Strong communication skills to effectively collaborate with team members and cross-functional stakeholders. A proactive approach to problem-solving and a commitment to quality.

Nice to have:
- Experience in PII detection, data privacy projects, or related fields.
- Basic understanding of data security and compliance frameworks.
- Familiarity with generative AI techniques and prompt engineering.
- Previous exposure to the fintech, payments, or security domains.
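As a toy illustration of what PII scanning means at its simplest, here is a rule-based regex sketch. The patterns are deliberately simplistic invented examples; the role described above replaces exactly this kind of brittle matching with fine-tuned LLMs, schema-aware agents, and smart sampling to work at 50+ PB scale:

```python
import re

# Toy PII scanner: regex patterns for a few common identifier shapes.
# Patterns are simplistic examples; real PII detection needs ML models,
# context, and far stricter validation.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (pii_type, matched_text) pairs found in the input."""
    hits = []
    for pii_type, pattern in PATTERNS.items():
        hits.extend((pii_type, m) for m in pattern.findall(text))
    return hits

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(scan(sample))
```

The gap between this sketch and production is the job: regexes miss PII in free text, code, and images, and over-flag lookalikes, which is why the posting centers on LLM-based detection.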
Not specified
INR 25.0 - 30.0 Lacs P.A.
Work from Office
Full Time
Job Summary:
We are looking for an experienced Data Scientist to join our innovative and dynamic team. The ideal candidate will have a strong background in machine learning, statistical analysis, and data-driven problem-solving. You will work closely with cross-functional teams to design, build, and deploy machine learning models that drive business decisions and enhance overall operational efficiency.

Key Responsibilities:
- Data Analysis & Modeling: Analyze large, complex datasets to identify trends, patterns, and actionable insights. Design and implement machine learning models for predictive analytics, forecasting, classification, and clustering. Work with both structured and unstructured data sources (e.g., databases, text, images). Utilize statistical techniques to extract meaningful information from data and improve business processes.
- Data Engineering: Clean, preprocess, and validate data to ensure high-quality input for models. Build scalable data pipelines and ensure data integrity throughout the process. Optimize existing algorithms and processes for better performance and scalability.
- Visualization & Reporting: Develop data visualizations and dashboards to communicate findings effectively to non-technical stakeholders. Present insights and recommendations to leadership in a clear, actionable manner. Create reports and documentation for model performance, data processes, and outcomes.
- Research & Innovation: Stay up to date with the latest developments in data science and machine learning. Experiment with and implement new algorithms, tools, and technologies to continuously improve results. Contribute to knowledge sharing and best practices within the data science team.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field. A Ph.D. is a plus.
- Experience: 3+ years of experience in data science or machine learning-related roles. Strong experience with data analysis, statistical modeling, and machine learning techniques. Proficiency in programming languages such as Python, R, or similar. Experience with libraries like pandas, NumPy, scikit-learn, TensorFlow, or PyTorch. Familiarity with big data tools such as Hadoop, Spark, or similar is a plus. Experience with data visualization tools like Tableau, Power BI, or similar.
- Skills: Expertise in machine learning algorithms and statistical models. Ability to work with both structured and unstructured data. Strong SQL skills for data querying and manipulation. Knowledge of data wrangling and preprocessing techniques. Experience with cloud platforms (AWS, GCP, Azure) is a plus. Strong communication skills and the ability to explain complex technical concepts to non-technical audiences.

Desired Attributes:
- Strong problem-solving skills and attention to detail.
- Ability to work independently and as part of a team.
- Curiosity and a passion for learning new techniques and technologies in the field.
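In the spirit of the statistical-analysis work this role describes, here is a small stdlib-only sketch of a common first step, flagging outliers by z-score. The sample values and the |z| > 2 cutoff are invented for the example; in practice this would run over pandas or NumPy arrays:

```python
import statistics

# Minimal outlier-detection sketch using z-scores: a point is flagged
# when it lies more than 2 sample standard deviations from the mean.
# The data and cutoff are hypothetical examples.
values = [10.0, 11.0, 9.5, 10.5, 10.2, 30.0]

mean = statistics.mean(values)
stdev = statistics.stdev(values)  # sample standard deviation

outliers = [v for v in values if abs(v - mean) / stdev > 2]
print(outliers)
```

One caveat worth knowing: a large outlier inflates the standard deviation it is tested against, so robust variants (e.g., median absolute deviation) are often preferred on dirty data.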
Not specified
INR 5.0 - 9.0 Lacs P.A.
Work from Office
Full Time
The objective is to create a scalable, efficient, and robust order fulfillment system using cloud-native technologies. The rewrite emphasizes leveraging AWS services and microservices architecture to improve reliability, performance, and adaptability. The fulfillment module is a critical component that ensures smooth execution of order workflows, and the team will explore the most suitable workflow engine for orchestration, evaluating tools such as Netflix Conductor, AWS Step Functions, and OFOA.

Essential functions
- Strong AWS experience: at least 2 years of AWS experience, with a deep understanding of AWS services including, but not limited to, Lambda, S3, DynamoDB, Step Functions, and IAM.
- Microservices expertise: proven track record of designing and implementing microservices architectures with RESTful APIs.
- Workflow orchestration: hands-on experience with workflow tools such as Netflix Conductor, AWS Step Functions, or equivalent orchestration frameworks.
- Programming proficiency: strong back-end programming skills in Java.
- Database management: familiarity with relational and non-relational databases, including schema design and optimization.
- Problem solving: ability to troubleshoot complex issues, propose scalable solutions, and optimize workflows.

Qualifications
Java 8-11, Spring Boot, AWS, microservices, REST APIs, workflow tools.

Would be a plus
- AWS services: hands-on experience with a minimum of 3 of the following: Lambda, S3, DynamoDB, Step Functions, SQS, SNS, and IAM.

We offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
Not specified
INR 7.0 - 11.0 Lacs P.A.
Work from Office
Full Time
We are looking for a highly skilled Java full stack Engineer with ReactJS to join our team. As a Java full stack Engineer, you will be responsible for designing, building, and maintaining scalable, high-performance Java applications. You will collaborate with cross-functional teams to develop robust solutions, provide technical leadership, and ensure best practices in software development.

Essential functions
- Develop technical solutions that are built for quality, scale, and performance.
- Collaborate with the business, product management, and the PMO on product roadmaps and quarterly planning sessions.
- Participate in code and design reviews to minimize rework and catch issues early in the process.
- Work efficiently as part of a global team of engineers, ensuring effective collaboration, communication, and delivery.

Primary skill set: ReactJS, strong Java, Spring Boot, MySQL.

Qualifications
- Bachelor's or Master's degree in Computer Science/Engineering.
- 5+ years of software development experience designing and building microservices and APIs using Java, and deploying to cloud environments (preferably AWS) and associated frameworks.
- Deep understanding of design and software engineering best practices, design principles and patterns, and unit testing.
- Understanding of server-side technologies (Java) and microservices architecture (design, REST, monitoring, security).
- Experience with ReactJS.
- Ability to architect and design leading solutions with a strong focus on security.
- SOLID design principles.
- Strong database programming skills, preferably in both SQL and NoSQL databases.
- Able to work in a fast-paced and dynamic environment and achieve results amidst constraints.
- Proven experience working in an Agile/Scrum environment.

Would be a plus
- Any cloud experience: AWS/Azure/GCP

We offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
Not specified
INR 8.0 - 13.0 Lacs P.A.
Work from Office
Full Time
Our data engineering team is building and maintaining mission-critical data infrastructure to power business intelligence and analytics capabilities. This involves developing robust data pipelines, implementing scalable data architectures, and establishing data governance frameworks to support data-driven decision making across the organization.

Essential functions
- SQL (primary tool for data manipulation and analysis)
- Python for data processing and automation
- BigQuery, Cloud SQL
- Google Cloud Platform (Dataflow, Dataproc, Data Fusion)
- Data warehousing solutions
- ETL/ELT tools and frameworks

Qualifications
- 8-10 years of experience in data engineering or a related field
- Expert knowledge of SQL
- Mid-level knowledge of Python (expert preferred)
- Experience with cloud platforms, particularly Google Cloud
- Strong understanding of data modeling and warehouse concepts
- Experience with ETL pipeline design and implementation
- Experience with big data technologies (Hadoop, Spark)

Would be a plus
- Knowledge of data governance and security practices
- Experience with real-time data processing
- Familiarity with BI tools and reporting platforms
- Strong background in performance optimization and tuning
- Advanced debugging and troubleshooting skills

We offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
Not specified
INR 4.0 - 8.0 Lacs P.A.
Work from Office
Full Time
Grid Dynamics is seeking an AI Expert (Senior Engineer) to spearhead the development of local Large Language Models (LLMs) for Personally Identifiable Information (PII) detection across our vast data ecosystem. This role involves building and fine-tuning custom LLMs that can identify PII in documents, databases, knowledge graphs, source code, and other knowledge-sharing platforms. The models must effectively handle data at scale (50+ PB of data) and remain up to date with continuous retraining on a quarterly or monthly basis. As the AI Senior Engineer, you will design and implement these AI solutions using technologies like Ray, llama.cpp, ollama, vLLM, and PyTorch, deploying them on Azure cloud and on-premises infrastructure for efficient, large-scale operation. You will also develop generative AI agents that leverage data schemas and metadata to improve PII detection accuracy. Collaboration is key: you will work closely with data engineering, security, and compliance teams to ensure the AI models meet business requirements and align with industry regulations (GDPR, CCPA, PCI DSS, etc.).

Essential functions
- Design, train, fine-tune, and optimize local LLMs or other NLP models for PII detection across diverse data types (documents, databases, knowledge graphs, code, and other knowledge-sharing formats).
- Develop generative AI agents (on AutoGen/LangGraph) for schema- and metadata-based PII detection to enhance identification of sensitive data.
- Work with cutting-edge AI frameworks (Ray, llama.cpp, ollama, vLLM, PyTorch) to deploy and scale models efficiently in a distributed environment.
- Implement and optimize AI/ML solutions on Azure cloud and on-premises infrastructure, ensuring high performance and reliability.
- Collaborate with data engineering, security, and compliance teams to align AI solutions with business needs and regulatory requirements.
- Lead a small team of AI engineers, providing mentorship, code reviews, and technical guidance to drive project success.
- Maintain and monitor model performance, retraining models on a quarterly or monthly basis to handle 50+ PB of evolving data and to improve accuracy over time.
- Ensure AI models follow best practices and compliance standards, adhering to security requirements and regulations (GDPR, CCPA, PCI DSS, etc.).

Qualifications
- Strong experience with AI frameworks such as Ray, llama.cpp, ollama, vLLM, and PyTorch for building and scaling LLM solutions.
- Expertise in LLM fine-tuning and prompt engineering, including techniques like Reinforcement Learning from Human Feedback (RLHF) to refine model outputs.
- Hands-on experience with AI model deployment in Azure cloud environments as well as on on-premises servers.
- Familiarity with large-scale data (50+ PB) and distributed computing paradigms (e.g., using clusters or Ray) to handle massive datasets.
- Familiarity with MCP (Model Context Protocol) servers and securing them.
- Strong programming skills in Python, with experience in machine learning frameworks and libraries.
- Ability to work cross-functionally with stakeholders in security, compliance, and data engineering to incorporate their requirements into AI solutions.
- Strong awareness of, if not implementation experience with, Differential Privacy and Federated Learning.
- Excellent communication skills, with the ability to explain complex AI concepts and results clearly to non-technical teams and leadership.

Would be a plus
- Knowledge of data security and compliance frameworks, as well as experience with responsible AI practices (ethical AI, bias mitigation, data privacy).
- Background in financial, payments, or security-related AI applications, giving you insight into the challenges and standards of the fintech industry.

We offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
Not specified
INR 6.0 - 10.0 Lacs P.A.
Work from Office
Full Time
AI Expert (Team Lead) - LLM PII Detection
Grid Dynamics is seeking an AI Expert (Team Lead) to spearhead the development of local Large Language Models (LLMs) for Personally Identifiable Information (PII) detection across our vast data ecosystem. This role involves building and fine-tuning custom LLMs that can identify PII in documents, databases, knowledge graphs, source code, and other knowledge-sharing platforms. The models must effectively handle data at scale (50+ PB of data) and remain up to date with continuous retraining on a quarterly or monthly basis. As the AI Team Lead, you will design and implement these AI solutions using technologies like Ray, llama.cpp, ollama, vLLM, and PyTorch, deploying them on Azure cloud and on-premises infrastructure for efficient, large-scale operation. You will also develop generative AI agents that leverage data schemas and metadata to improve PII detection accuracy. Collaboration is key: you will work closely with data engineering, security, and compliance teams to ensure the AI models meet business requirements and align with industry regulations (GDPR, CCPA, PCI DSS, etc.). Additionally, you will mentor a small team of AI engineers, guiding their work and fostering a culture of technical excellence and responsible AI practices.

Essential functions
- Design, train, fine-tune, and optimize local LLMs or other NLP models for PII detection across diverse data types (documents, databases, knowledge graphs, code, and other knowledge-sharing formats).
- Develop generative AI agents (on AutoGen/LangGraph) for schema- and metadata-based PII detection to enhance identification of sensitive data.
- Work with cutting-edge AI frameworks (Ray, llama.cpp, ollama, vLLM, PyTorch) to deploy and scale models efficiently in a distributed environment.
- Implement and optimize AI/ML solutions on Azure cloud and on-premises infrastructure, ensuring high performance and reliability.
- Collaborate with data engineering, security, and compliance teams to align AI solutions with business needs and regulatory requirements.
- Lead a small team of AI engineers, providing mentorship, code reviews, and technical guidance to drive project success.
- Maintain and monitor model performance, retraining models on a quarterly or monthly basis to handle 50+ PB of evolving data and to improve accuracy over time.
- Ensure AI models follow best practices and compliance standards, adhering to security requirements and regulations (GDPR, CCPA, PCI DSS, etc.).

Qualifications
- Strong experience with AI frameworks such as Ray, llama.cpp, ollama, vLLM, and PyTorch for building and scaling LLM solutions.
- Expertise in LLM fine-tuning and prompt engineering, including techniques like Reinforcement Learning from Human Feedback (RLHF) to refine model outputs.
- Hands-on experience with AI model deployment in Azure cloud environments as well as on on-premises servers.
- Familiarity with large-scale data (50+ PB) and distributed computing paradigms (e.g., using clusters or Ray) to handle massive datasets.
- Familiarity with MCP (Model Context Protocol) servers and securing them.
- Strong programming skills in Python, with experience in machine learning frameworks and libraries.
- Ability to work cross-functionally with stakeholders in security, compliance, and data engineering to incorporate their requirements into AI solutions.
- Strong awareness of, if not implementation experience with, Differential Privacy and Federated Learning.
- Excellent communication skills, with the ability to explain complex AI concepts and results clearly to non-technical teams and leadership.

Would be a plus
- Proven experience leading small AI/ML teams, with a track record of delivering projects and mentoring team members.
- Knowledge of data security and compliance frameworks, as well as experience with responsible AI practices (ethical AI, bias mitigation, data privacy).
- Background in financial, payments, or security-related AI applications, giving you insight into the challenges and standards of the fintech industry.

We offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
Not specified
INR 7.0 - 12.0 Lacs P.A.
Work from Office
Full Time
We are looking for a highly skilled Java full stack Engineer with ReactJS to join our team. As a Java full stack Engineer, you will be responsible for designing, building, and maintaining scalable, high-performance Java applications. You will collaborate with cross-functional teams to develop robust solutions, provide technical leadership, and ensure best practices in software development.

Essential functions
- Develop technical solutions that are built for quality, scale, and performance.
- Collaborate with the business, product management, and the PMO on product roadmaps and quarterly planning sessions.
- Participate in code and design reviews to minimize rework and catch issues early in the process.
- Work efficiently as part of a global team of engineers, ensuring effective collaboration, communication, and delivery.

Primary skill set: ReactJS, strong Java, Spring Boot, MySQL.

Qualifications
- Bachelor's or Master's degree in Computer Science/Engineering.
- 9+ years of software development experience designing and building microservices and APIs using Java, and deploying to cloud environments (preferably AWS) and associated frameworks.
- Deep understanding of design and software engineering best practices, design principles and patterns, and unit testing.
- Understanding of server-side technologies (Java) and microservices architecture (design, REST, monitoring, security).
- Experience with ReactJS.
- Ability to architect and design leading solutions with a strong focus on security.
- SOLID design principles.
- Strong database programming skills, preferably in both SQL and NoSQL databases.
- Able to work in a fast-paced and dynamic environment and achieve results amidst constraints.
- Proven experience working in an Agile/Scrum environment.

Would be a plus
- Any cloud experience: AWS/Azure/GCP

We offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
Not specified
INR 11.0 - 15.0 Lacs P.A.
Work from Office
Full Time
We are looking for a highly skilled Java full stack Engineer with Angular to join our team. As a Java full stack Engineer, you will be responsible for designing, building, and maintaining scalable, high-performance Java applications. You will collaborate with cross-functional teams to develop robust solutions, provide technical leadership, and ensure best practices in software development.

Essential functions
- Develop technical solutions that are built for quality, scale, and performance.
- Collaborate with the business, product management, and the PMO on product roadmaps and quarterly planning sessions.
- Participate in code and design reviews to minimize rework and catch issues early in the process.
- Work efficiently as part of a global team of engineers, ensuring effective collaboration, communication, and delivery.

Primary skill set: Angular, UX (HTML and other typical UX skills), MS SQL, Java Spring Boot.

Qualifications
- Bachelor's or Master's degree in Computer Science/Engineering.
- 9+ years of software development experience designing and building microservices and APIs using Java, and deploying to cloud environments (preferably AWS) and associated frameworks.
- Deep understanding of design and software engineering best practices, design principles and patterns, and unit testing.
- Understanding of server-side technologies (Java) and microservices architecture (design, REST, monitoring, security).
- Experience with Angular.
- Ability to architect and design leading solutions with a strong focus on security.
- SOLID design principles.
- Strong database programming skills, preferably in both SQL and NoSQL databases.
- Able to work in a fast-paced and dynamic environment and achieve results amidst constraints.
- Proven experience working in an Agile/Scrum environment.

Would be a plus
- Any cloud experience: AWS/Azure/GCP

We offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
Not specified
INR 4.0 - 8.0 Lacs P.A.
Work from Office
Full Time
Greetings from Grid Dynamics! We hope this message finds you well.

Your profile has caught our attention and appears to be a strong fit for the Senior Technical Recruiter (On Contract) position at our Bangalore office. The contract will be for a duration of 1 year, extendable based on performance.

In this role, you will play a key part in driving our talent acquisition efforts by sourcing, screening, and curating a shortlist of highly qualified candidates for various technical positions. You will also actively network, both online and offline, with potential candidates to strengthen our employer brand, optimize time-to-hire, and ensure we attract top-tier IT professionals.

If you have 4 to 6 years of experience in recruiting for technology roles and this sounds like the right opportunity for you, we would love to hear from you! Please share your updated resume along with the following details; if your profile is shortlisted, you will be invited for an in-person interview:
- Current location in Bangalore:
- Available to join within 1 week (Yes/No):
- Willingness to consider contract-to-hire (Yes/No):
- Current salary:
- Expected salary:
- Are you available for an in-person interview (Yes/No):

We look forward to reviewing your application and connecting with you soon!
Not specified
INR 7.0 - 10.0 Lacs P.A.
Work from Office
Full Time
Job Overview:
We are looking for an experienced Technical Recruiter to join our dynamic HR team. The ideal candidate will have a strong background in hiring technical talent for various roles, with a proven track record of sourcing, screening, and hiring candidates in the IT industry. This role offers an excellent opportunity to contribute to the growth of the company by bringing in top-tier talent.

Key Responsibilities:
- Manage the full recruitment lifecycle for technical roles (Software Engineers, Data Scientists, DevOps Engineers, etc.) in a fast-paced environment.
- Partner with hiring managers and technical teams to understand hiring needs and develop effective sourcing strategies.
- Source candidates through various channels such as job boards, social media, and networking.
- Screen resumes, conduct initial interviews, and assess candidates' technical skills and cultural fit.
- Coordinate and schedule interviews, provide feedback, and ensure timely communication between candidates and hiring managers.
- Maintain and update the candidate database using an applicant tracking system (ATS).
- Negotiate job offers, handle compensation discussions, and close candidates effectively.
- Ensure a smooth and positive candidate experience throughout the hiring process.
- Build and maintain relationships with potential candidates for future hiring needs.
- Stay updated on industry trends and technologies to better understand the technical roles being recruited for.
- Collaborate with HR and leadership teams to improve recruitment strategies and processes.

Requirements:
- 4 to 7 years of proven experience in technical recruitment, preferably in the IT industry.
- Strong understanding of technical roles and the IT landscape, including software development, infrastructure, cloud technologies, etc.
- Hands-on experience using recruitment platforms, job boards (LinkedIn, Naukri, etc.), and applicant tracking systems (ATS).
- Excellent communication and interpersonal skills.
- Ability to work independently and manage multiple hiring processes simultaneously.
- Strong problem-solving skills and attention to detail.
- Ability to build strong relationships with both candidates and internal teams.
- Knowledge of sourcing techniques and best practices for attracting top-tier technical talent.
- Bachelor's degree in Human Resources, Business Administration, or a related field (preferred).

Preferred Skills:
- Experience in hiring for niche and hard-to-fill technical roles.
- Familiarity with programming languages and technologies (e.g., Java, Python, DevOps tools, cloud technologies).
- Previous experience with recruitment tools like LinkedIn Recruiter, GitHub, etc.
- Knowledge of market trends and compensation benchmarking in the tech industry.
Not specified
INR 18.0 - 23.0 Lacs P.A.
Work from Office
Full Time
Skills Required:
- GCP services: BigQuery, Cloud Dataflow, Pub/Sub, Dataproc, Cloud Storage.
- Data processing: Apache Beam (batch/stream), Apache Kafka, Cloud Dataprep.
- Programming: Python, Java/Scala, SQL.
- Orchestration: Apache Airflow (Cloud Composer), Terraform.
- Security: IAM, Cloud Identity, Cloud Security Command Center.
- Containerization: Docker, Kubernetes (GKE).
- Machine learning: Google AI Platform, TensorFlow, AutoML.
- Certifications: Google Cloud Data Engineer, Cloud Architect (preferred).

Qualifications
- Proven ability to design scalable and robust AI/ML systems in production, with a focus on high-performance, cost-effective solutions.
- Strong experience with cloud platforms (Google Cloud, AWS, Azure) and cloud-native AI/ML services (e.g., Vertex AI, SageMaker).
- Expertise in implementing MLOps practices, including model deployment, monitoring, retraining, and version control.
- Strong leadership skills, with the ability to guide teams, mentor engineers, and collaborate with cross-functional teams to meet business objectives.
- Deep understanding of frameworks like TensorFlow, PyTorch, and scikit-learn for designing, training, and deploying models.
- Experience with data engineering principles, scalable pipelines, and distributed systems (e.g., Apache Kafka, Spark, Kubernetes).

Would be a plus
- Strong leadership and mentorship capabilities, guiding teams toward best practices and high-quality deliverables.
- Excellent problem-solving skills, with a focus on designing efficient, high-performance systems.
- Effective project management abilities to handle multiple initiatives and ensure timely delivery.
- Strong emphasis on collaboration and teamwork, fostering a positive and productive work environment.

We offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office
Not specified
INR 14.0 - 20.0 Lacs P.A.
Work from Office
Full Time
Primary Skill Set
- Experience in the Health Insurance or Health Care domain
- Experience in SQL: able to write complex queries
- SSIS
- ETL
Added Advantage
- SSRS
- Power BI
- Azure
Not specified
0.0 - 0.0 Lacs P.A.
On-site
Full Time
Not specified
INR 15.0 - 22.5 Lacs P.A.
Work from Office
Full Time
Not specified
INR 20.0 - 30.0 Lacs P.A.
Work from Office
Full Time
Not specified
INR 30.0 - 40.0 Lacs P.A.
Work from Office
Full Time
Not specified
0.0 - 0.0 Lacs P.A.
On-site
Full Time
Not specified
0.0 - 0.0 Lacs P.A.
On-site
Full Time
Not specified
0.0 - 0.0 Lacs P.A.
On-site
Full Time
Not specified
0.0 - 0.0 Lacs P.A.
On-site
Full Time
Not specified
0.0 - 0.0 Lacs P.A.
On-site
Full Time
Not specified
0.0 - 0.0 Lacs P.A.
On-site
Full Time
Not specified
0.0 - 0.0 Lacs P.A.
On-site
Full Time
Not specified
0.0 - 0.0 Lacs P.A.
On-site
Full Time
Not specified
INR 35.0 - 40.0 Lacs P.A.
Work from Office
Full Time
Not specified
0.0 - 0.0 Lacs P.A.
On-site
Full Time