Our Projects

Real results. Real impact. From AI-powered platforms to enterprise-grade solutions.


HMK Interiors

Full-stack interior design website with an admin dashboard. Built using Next.js, Prisma, and Postgres.


Brainwave Academy

Learn smarter with Brainwave Academy’s AI-powered courses. Built using Next.js, Vapi, Prisma, and Postgres.


AirWhite | Flight Reservation

Real-world flight reservation system with a smooth UX. Built using Tailwind CSS and Shadcn UI.


Verve AI Interviewer

Create mock interviews and practice them using AI. Built with Next.js, Tailwind CSS, and Vapi AI.

AI Projects

Pushing the frontiers of artificial intelligence with production-grade systems and research-driven innovation.


Full-Stack Workspace Agent

Engineered a stateful AI agent using LangGraph with both parallel and sequential execution paths, enabling complex workflow orchestration. Integrated Human-in-the-Loop confirmation flows to ensure safe and controlled automation of Google Workspace operations. Developed a Vue 3 frontend with a RESTful FastAPI backend, real-time SSE streaming (<400ms latency), OAuth2 authentication with token caching, and Redis-backed rate limiting.
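The Human-in-the-Loop idea above can be sketched independently of LangGraph. Below is a minimal, hypothetical illustration (all names invented): destructive Workspace-style actions are gated behind an explicit approval callback, while read-only actions pass through untouched.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class PendingAction:
    """A tool call the agent wants to execute."""
    name: str
    run: Callable[[], str]
    requires_confirmation: bool  # True for destructive operations


@dataclass
class Agent:
    log: list = field(default_factory=list)

    def execute(self, action: PendingAction, approve: Callable[[str], bool]) -> str:
        # Write-style operations (e.g. deleting a calendar event) only run
        # after the human approval callback returns True.
        if action.requires_confirmation and not approve(action.name):
            self.log.append(f"rejected: {action.name}")
            return "rejected"
        self.log.append(f"ran: {action.name}")
        return action.run()


agent = Agent()
read = PendingAction("list_events", lambda: "3 events", requires_confirmation=False)
write = PendingAction("delete_event", lambda: "deleted", requires_confirmation=True)

print(agent.execute(read, approve=lambda _: False))   # read path ignores approval
print(agent.execute(write, approve=lambda _: False))  # write path is blocked
print(agent.execute(write, approve=lambda _: True))   # write path proceeds
```

In the real system this gate would sit on a LangGraph edge, with the approval arriving over the SSE channel rather than a local callback.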

Modular RAG Pipeline

Architected a 4-stage modular RAG pipeline (Pre-retrieval → Indexing → Retrieval → Post-retrieval) using LlamaIndex, Qdrant, and FalkorDB, fully configurable via a YAML registry for rapid experimentation. Benchmarked on the HotpotQA dataset, achieving a 0.72 faithfulness score (RAGAS). Integrated LangFuse for full observability and evaluation tracking.
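The registry-driven design can be sketched as follows. This is a toy stand-in, not the actual pipeline: the real components (LlamaIndex, Qdrant, FalkorDB) are replaced with stub functions, and the config dict plays the role of the parsed YAML that selects one implementation per stage.

```python
DOCS = ["qdrant stores vectors", "falkordb stores graphs"]

# Registry: each stage maps component names to interchangeable implementations.
REGISTRY = {
    "pre_retrieval": {"rewrite": lambda q: q.strip().lower()},
    "indexing": {"in_memory": lambda docs: docs},
    "retrieval": {"keyword": lambda q, index: [d for d in index if q in d]},
    "post_retrieval": {"top_k": lambda hits: hits[:1]},
}

# Config as it would arrive from a YAML file: one component chosen per stage.
CONFIG = {
    "pre_retrieval": "rewrite",
    "indexing": "in_memory",
    "retrieval": "keyword",
    "post_retrieval": "top_k",
}


def run(query: str) -> list[str]:
    """Chain the four stages, resolving each through the registry."""
    q = REGISTRY["pre_retrieval"][CONFIG["pre_retrieval"]](query)
    index = REGISTRY["indexing"][CONFIG["indexing"]](DOCS)
    hits = REGISTRY["retrieval"][CONFIG["retrieval"]](q, index)
    return REGISTRY["post_retrieval"][CONFIG["post_retrieval"]](hits)


print(run("  Vectors "))  # → ['qdrant stores vectors']
```

Swapping an implementation is a one-line config change, which is what makes this layout convenient for benchmarking variants against each other.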

Fine-Tuned Llama 3 via QLoRA

Performed Supervised Fine-Tuning (SFT) on Llama 3.1-8B using QLoRA. Reduced VRAM consumption by ~70% on RTX A4000 while boosting training efficiency. Structured raw clinical datasets into standardized chat templates for evidence-based mental health responses. Exported adapters to GGUF format for Ollama deployment (~74 tokens/sec offline inference).
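A back-of-the-envelope calculation shows why 4-bit quantization produces savings of this magnitude. The sketch below covers weight memory only (it ignores optimizer state, activations, and the LoRA adapters, which is why the measured end-to-end figure lands nearer ~70% than the raw 75%):

```python
def weight_gb(params_billions: float, bits: int) -> float:
    """Memory for model weights alone, in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits / 8 / 1e9


fp16 = weight_gb(8.0, 16)  # Llama 3.1-8B weights in fp16
nf4 = weight_gb(8.0, 4)    # same weights quantized to 4-bit NF4
saving = 1 - nf4 / fp16

print(f"{fp16:.0f} GB -> {nf4:.0f} GB ({saving:.0%} less)")  # 16 GB -> 4 GB (75% less)
```

At 4 GB of weights plus adapter and optimizer overhead, the model fits comfortably in the 16 GB of an RTX A4000, which full fp16 fine-tuning would not.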