Key Project Deliverables

Explore the outcomes and research outputs from the CLAIM project led by CSIR-Fourth Paradigm Institute (CSIR-4PI).

Explainable AI (XAI)-Based Breast Cancer Diagnostic and Prognostic Decision Support System for Pathologists and Oncologists

  • Development of Comprehensive XAI-Based Nottingham Grading System (XAI-NGS)
  • Design and development of an XAI-driven precision diagnostic and prognostic decision support system tailored for oncologists and pathologists. The system will use histopathology image data to enhance the accuracy and explainability of breast cancer diagnosis and prognosis (an illustrative sketch of one XAI technique follows this list).
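
To make the explainability goal concrete, below is a minimal Python sketch of occlusion sensitivity, one model-agnostic XAI technique a system like this could surface to pathologists. The tiny network, the random input tile, and the grade labels are placeholder assumptions for illustration, not the project's actual model or data.

```python
# A minimal sketch of occlusion sensitivity, one model-agnostic XAI technique.
# TinyGradeNet is a placeholder standing in for the project's grading model;
# the input "tile" is random data, not a real stained-tissue image.
import torch
import torch.nn as nn

class TinyGradeNet(nn.Module):
    """Placeholder classifier standing in for a histopathology grading model."""
    def __init__(self, n_classes: int = 3):   # e.g. Nottingham grades 1-3
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Linear(16 * 8 * 8, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def occlusion_map(model, image, target, patch=16, stride=16):
    """Score drop when each patch is masked; a large drop marks an influential region."""
    model.eval()
    with torch.no_grad():
        base = model(image.unsqueeze(0))[0, target].item()
        _, h, w = image.shape
        heat = torch.zeros(h // stride, w // stride)
        for i in range(0, h - patch + 1, stride):
            for j in range(0, w - patch + 1, stride):
                occluded = image.clone()
                occluded[:, i:i + patch, j:j + patch] = 0.0   # mask one patch
                score = model(occluded.unsqueeze(0))[0, target].item()
                heat[i // stride, j // stride] = base - score
    return heat

model = TinyGradeNet()
tile = torch.rand(3, 64, 64)                        # stand-in for a tissue tile
grade = model(tile.unsqueeze(0)).argmax().item()    # predicted grade index
print(occlusion_map(model, tile, grade))
```

The resulting heat map is the kind of evidence overlay a pathologist-facing tool could display alongside the predicted grade.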

Advancing Digital Metrology through AI

  • Metrology-Aligned AI/ML Models: Frameworks for uncertainty quantification in sensor measurements (a minimal propagation sketch follows this list).
  • Research Publications: 3–5 high-impact papers in SCI journals and presentations at global conferences (e.g., IEEE, IMEKO).
  • Trained Manpower: Capacity-building through the training of researchers in AI/ML-metrology integration.
  • Functional Prototypes: Validated models for at least two sensor use cases (thermocouple and AMR sensors).
  • Documentation: Comprehensive technical reports and datasets for adoption.
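
As an illustration of the uncertainty-quantification deliverable, here is a minimal Monte Carlo propagation sketch in the spirit of GUM Supplement 1 applied to a thermocouple reading. The polynomial coefficients and the voltmeter uncertainty below are assumed values for illustration, not calibrated figures from the project.

```python
# A minimal sketch of Monte Carlo uncertainty propagation for a thermocouple.
# Coefficients and uncertainty figures are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inverse characteristic: temperature (degrees C) from voltage (mV).
coeffs = [0.0, 25.08355, 0.07860106, -0.2503131]   # assumed, not calibrated

def voltage_to_temp(v_mv):
    return sum(c * v_mv**k for k, c in enumerate(coeffs))

v_measured = 1.200        # mV, indicated value
u_voltage = 0.004         # mV, standard uncertainty of the voltmeter (assumed)

# Propagate the input uncertainty through the nonlinear measurement model.
samples = voltage_to_temp(rng.normal(v_measured, u_voltage, size=100_000))
mean, u = samples.mean(), samples.std(ddof=1)
print(f"T = {mean:.3f} C, expanded uncertainty U = {2*u:.3f} C (k=2)")
```

The same pattern extends to the AMR sensor use case by swapping in that sensor's measurement model.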

AI for Project Resource Integration and Management for Efficient Governance (AI-PRIME)

  • Functional similarity search system for identifying projects similar to a given proposal (if any), evaluated against defined accuracy benchmarks (illustrated in the sketch below).
  • Smart data retrieval engine for accessing existing project data via natural language queries over the project database.
  • A user-friendly interface for accessing the similarity search and smart data retrieval functionalities.
  • Comprehensive technical report documenting methodologies, system design, and results.
  • Deployable codebase with APIs and installation instructions for integration into the existing/upcoming system environment.
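
To make the similarity search idea concrete, the sketch below ranks past projects against a new proposal using TF-IDF and cosine similarity. A production system would likely use learned embeddings; the project titles here are invented examples.

```python
# A minimal sketch of proposal similarity search with TF-IDF + cosine similarity.
# The "past_projects" corpus and query are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_projects = [
    "Flood forecasting using satellite imagery and deep learning",
    "Crop yield prediction from multispectral drone data",
    "Language model based retrieval over government project records",
]
new_proposal = "Deep learning on satellite images for flood early warning"

vectorizer = TfidfVectorizer(stop_words="english")
corpus_vecs = vectorizer.fit_transform(past_projects)
query_vec = vectorizer.transform([new_proposal])

# Rank existing projects by similarity to the incoming proposal.
scores = cosine_similarity(query_vec, corpus_vecs).ravel()
for score, title in sorted(zip(scores, past_projects), reverse=True):
    print(f"{score:.2f}  {title}")
```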

Real-time Video to Sign Language Video Synthesis

  • Curated and Annotated Dataset - A high-quality dataset of spoken-language videos paired with corresponding sign language videos, annotated with spoken-language transcriptions, gloss sequences, and other relevant metadata.
  • Speech-to-Gloss Translation Model - An NMT (neural machine translation) model that converts spoken language into structured gloss sequences.
  • Gloss-to-Sign Language Marker/Pose Mapping Framework - A deep learning model that maps gloss sequences to corresponding sign language poses.
  • Sign Language Video Synthesis Model - A generative model capable of producing temporally coherent sign language videos.
  • Speaker-to-Signer Transformation Model - A generative model that reconstructs the speaker’s identity and fuses it with the sign language poses.
  • Background Preservation Model - A synthesis model that keeps the original background unchanged while integrating the synthesized signer into the video.
  • Real-Time & Offline Translation Pipeline - A fully functional pipeline optimized for both real-time and offline sign language video generation (a simplified sketch of the end-to-end data flow follows this list).
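
The sketch below traces the pipeline's data flow end to end, with stub stages standing in for the trained models listed above. All function names, token formats, and array shapes are illustrative assumptions, not the project's interfaces.

```python
# A minimal sketch of the translation pipeline's data flow. Each stage is a
# stub standing in for a deliverable (NMT model, pose mapper, synthesizer);
# names and shapes are assumptions for illustration.
from dataclasses import dataclass

import numpy as np

@dataclass
class Pose:
    keypoints: np.ndarray  # (num_joints, 2) image-plane coordinates

def speech_to_gloss(transcript: str) -> list[str]:
    """Stub for the NMT model: spoken language -> gloss sequence."""
    return transcript.upper().replace(".", "").split()   # toy gloss tokens

def gloss_to_poses(glosses: list[str], frames_per_gloss: int = 8) -> list[Pose]:
    """Stub for the gloss-to-pose mapper: one short pose clip per gloss."""
    rng = np.random.default_rng(0)
    return [Pose(rng.random((21, 2))) for _ in glosses for _ in range(frames_per_gloss)]

def synthesize_video(poses: list[Pose], background: np.ndarray) -> np.ndarray:
    """Stub for the generative models: render signer onto the preserved background."""
    return np.stack([background for _ in poses])   # (T, H, W, 3) placeholder

background = np.zeros((64, 64, 3))
glosses = speech_to_gloss("The train leaves at noon.")
video = synthesize_video(gloss_to_poses(glosses), background)
print(glosses, video.shape)
```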

Spatio-Geometric Foundational Models

  • Aligned Multi-Source Dataset – Curated from at least 15 open-source datasets, augmented with synthetic 3D data derived from 2D images and various physics engines.
  • Spatial & Geometric QA Benchmark – Custom evaluation dataset for spatial & geometric reasoning tasks.
  • Multi-Task Trained Model – A VLM fine-tuned for spatial & geometric QA with scene-graph and/or voxel/point-based encodings (a toy scene-graph sketch follows this list).
  • Chain of Spatial Thought Framework – Modular reasoning framework for complex spatial queries.
  • 3D Asset Generation Pipeline – NURBS- or mesh-based automated asset creation for real and synthetic worlds.
  • Robot Manipulator Planning and Security & Surveillance Tasks – Downstream validation of model efficacy.
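
As a toy illustration of the scene-graph-style spatial reasoning the QA benchmark targets, the sketch below derives a coarse spatial relation from object coordinates. The object names and coordinates are invented; a real system would extract them from images or 3D data.

```python
# A minimal sketch of deriving spatial relations from a toy scene graph.
# Objects and coordinates are invented for illustration.
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    x: float; y: float; z: float   # simple world coordinates

def relation(a: Obj, b: Obj) -> str:
    """Derive a coarse spatial relation of b with respect to a."""
    dx, dz = b.x - a.x, b.z - a.z
    horiz = "right of" if dx > 0 else "left of"
    vert = "above" if dz > 0 else "below"
    return horiz if abs(dx) >= abs(dz) else vert

scene = {o.name: o for o in [Obj("mug", 0.2, 0.0, 0.8), Obj("laptop", -0.3, 0.1, 0.8)]}
print("mug is", relation(scene["laptop"], scene["mug"]), "the laptop")
```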

Foundational Methods for Learning from Demonstrations (LfD) Validated for Autonomous Navigation

  • Domain-agnostic LfD and exploration-based algorithms with a fully documented, open-source codebase (a minimal behavior-cloning sketch follows this list).
  • An annotated dataset of autonomous driving demonstrations (with controlled noise and suboptimal trajectories) collected in the CARLA simulator and in the real world.
  • A behavior planning module validated within the CARLA driving simulator.
  • A deployable prototype on a real vehicle/ADAS system demonstrating the behavior planner’s performance in unstructured environments.
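
For context, behavior cloning is one of the simplest LfD baselines. The sketch below regresses actions onto synthetic "expert" labels standing in for CARLA driving logs; the state and action dimensions are assumptions for illustration.

```python
# A minimal behavior-cloning sketch. The synthetic "demonstrations" stand in
# for CARLA driving logs; dimensions are assumed for illustration.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2            # e.g. ego features -> [steer, throttle]
demos_s = torch.randn(512, state_dim)   # placeholder demonstration states
demos_a = torch.tanh(demos_s @ torch.randn(state_dim, action_dim))  # fake expert

policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(200):                 # regress actions onto expert labels
    loss = nn.functional.mse_loss(policy(demos_s), demos_a)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final imitation loss: {loss.item():.4f}")
```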

AI-Driven Smart Material-Actuated (SMA) Soft-Robotic Gripper with Tactile Sensor Array for Precise Gripping Control

  • A smart-material-actuated robotic gripper
  • A vision-based tactile sensing module that can be attached to gripper end effectors for delicate operations.
  • An integrated prototype system combining both robotic gripper and tactile sensor array system.
  • AI/ML algorithms for the tactile sensor array and for gripping control at both low and high levels (a minimal low-level control sketch follows this list).
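
As a minimal illustration of low-level grip control driven by tactile feedback, the sketch below raises the grip force proportionally to a simulated slip signal. The slip model and the gain are assumptions, not measurements from the project's sensor array.

```python
# A minimal sketch of a low-level tactile grip controller: increase grip force
# when the tactile array reports slip. The slip signal is simulated.
import numpy as np

rng = np.random.default_rng(1)
force, k_slip = 0.5, 0.8          # initial grip force (N) and gain (assumed)

for t in range(10):
    # Simulated tactile reading: slip magnitude shrinks as grip force grows.
    slip = max(0.0, 1.0 - force + 0.05 * rng.standard_normal())
    force += k_slip * slip         # proportional correction toward no-slip
    print(f"t={t}: slip={slip:.3f}  grip force={force:.3f} N")
```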

Need more info? Visit our Contact page or reach out to the respective project leads for collaboration opportunities.