Product Manager turned Builder and GenAI enthusiast. Master’s degree in Information Technology Management from the University of Texas at Dallas. Previously at Microsoft and Deloitte.
View my LinkedIn Profile
This project demonstrates the design and implementation of a second-brain-style AI assistant: an end-to-end GenAI system spanning data, feature, training, and inference pipelines, LLM fine-tuning, agentic RAG, and real-time inference.
View code on GitHub | View my Blog |
This project demonstrates how to build an agentic AI app using LangGraph, Groq, Pydantic, and Streamlit, implementing three workflow patterns for different use cases. Inspired by Anthropic’s research on “Building Effective Agents”, it showcases how to design and deploy AI workflows efficiently.
✅ Learning Path Generator (Orchestrator-Synthesizer) – Creates a custom learning roadmap based on your skills and goals.
✅ Peer Code Reviewer (Parallelized Workflow) – Reviews your code snippet for security, readability, and best practices.
✅ Blog Generator (Evaluator-Optimizer) – Generates a high-quality blog and evaluates it against predefined criteria.
View code on GitHub | Live app hosted on HF Spaces | View my Blog | Watch my YouTube tutorial |
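The evaluator-optimizer pattern behind the Blog Generator can be sketched in a few lines of plain Python. This is an illustrative stand-in, not the project's LangGraph implementation: the `generate` and `evaluate` functions below are hypothetical placeholders for the LLM generator and evaluator nodes.

```python
# Evaluator-optimizer loop: a generator drafts content, an evaluator
# scores it against criteria and returns feedback, and the draft is
# revised until it passes or a retry budget runs out.
# The two functions are stubs standing in for LLM calls.

def generate(topic: str, feedback: str = "") -> str:
    # Placeholder for an LLM call that drafts (or revises) a blog post.
    draft = f"Blog about {topic}."
    if feedback:
        draft += f" Revised to address: {feedback}"
    return draft

def evaluate(draft: str) -> tuple[bool, str]:
    # Placeholder evaluator: here "passing" just means the draft was revised once.
    if "Revised" in draft:
        return True, ""
    return False, "add more depth"

def evaluator_optimizer(topic: str, max_rounds: int = 3) -> str:
    feedback = ""
    for _ in range(max_rounds):
        draft = generate(topic, feedback)
        ok, feedback = evaluate(draft)
        if ok:
            return draft
    return draft  # return the last draft if the budget is exhausted
```

In the real app, LangGraph wires these two steps as graph nodes with a conditional edge looping back from the evaluator to the generator.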
This project demonstrates a multi-agent system built with the AGNO library (previously called Phidata), integrating web search, GitHub code search, and GIF retrieval. The system features a Streamlit web app and a local AGNO playground for testing agent behaviors.
View code on GitHub | View my Blog | Watch my YouTube tutorial |
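The core idea of dispatching a query to a specialized agent can be sketched without the AGNO library at all. This toy router is an assumption-laden illustration: the agent functions are stubs, and a real system would let an LLM (or the framework's team abstraction) do the routing rather than keyword matching.

```python
# Toy multi-agent router: picks an "agent" (a stand-in function) based
# on keywords in the query, mirroring how a team of specialized agents
# divides work between web search, code search, and GIF retrieval.

def web_search_agent(query: str) -> str:
    return f"web results for: {query}"

def github_agent(query: str) -> str:
    return f"GitHub code matching: {query}"

def gif_agent(query: str) -> str:
    return f"GIF for: {query}"

ROUTES = [
    (("repo", "code", "github"), github_agent),
    (("gif", "meme"), gif_agent),
]

def route(query: str) -> str:
    lowered = query.lower()
    for keywords, agent in ROUTES:
        if any(k in lowered for k in keywords):
            return agent(query)
    return web_search_agent(query)  # fall back to the default agent
```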
This project showcases how to set up and deploy an AWS Bedrock Agent capable of web scraping URLs provided by users. It utilizes AWS Lambda for backend processing, Anthropic Claude 3.5 Sonnet as the underlying model, and Streamlit for the user interface. The workflow covers agent configuration, schema creation, and real-time testing, resulting in a user-friendly web scraping application.
View code on GitHub | Watch my YouTube tutorial | View my Blog |
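A Lambda behind a Bedrock agent action group receives the agent's parameters in the event and must answer in a specific response envelope. The sketch below reflects that contract as I understand it (verify field names against the current AWS docs); the `fetch` argument is my addition so the scraping step can be stubbed out, and is not part of the Lambda signature.

```python
import urllib.request

# Sketch of a Lambda handler for a Bedrock agent action group that
# scrapes a user-provided URL. Event/response shapes follow the
# action-group contract; `fetch` is injectable for testing.

def default_fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def lambda_handler(event, context=None, fetch=default_fetch):
    # Bedrock passes action parameters as a list of {"name": ..., "value": ...}.
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    page_text = fetch(params["url"])
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": event.get("apiPath"),
            "httpMethod": event.get("httpMethod", "GET"),
            "httpStatusCode": 200,
            "responseBody": {
                # Truncate so the payload stays within the agent's limits.
                "application/json": {"body": page_text[:2000]}
            },
        },
    }
```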
This project utilizes the NASA NEO dataset to predict whether near-Earth objects are hazardous. With 338,199 observations and 9 features, I applied supervised learning techniques, preprocessing the data to optimize inputs. Random Forest emerged as the best-performing algorithm, achieving an accuracy of 91% in classifying hazardous objects.
This project creates an ETL pipeline using Apache Airflow to extract data from NASA’s Astronomy Picture of the Day (APOD) API. The extracted data is transformed and loaded into a PostgreSQL database for further analysis. Airflow orchestrates the workflow, managing task dependencies and scheduling. Docker is used to run Airflow and Postgres in isolated containers. The pipeline automates the daily extraction, transformation, and loading of astronomy-related data.
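The extract-transform-load steps can be shown as plain functions. This is a simplified stand-in, not the project's Airflow DAG: in production these would be Airflow tasks writing to Postgres, while here sqlite3 plays the database and the APOD API call is injectable so it can be stubbed.

```python
import json
import sqlite3

# Plain-function sketch of the APOD ETL steps.

def extract(fetch) -> dict:
    # In production, fetch() would call NASA's APOD API and return JSON text.
    return json.loads(fetch())

def transform(record: dict) -> tuple:
    # Keep only the fields we store, with safe defaults for missing keys.
    return (record.get("date"), record.get("title"), record.get("url"))

def load(conn: sqlite3.Connection, row: tuple) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS apod (date TEXT PRIMARY KEY, title TEXT, url TEXT)"
    )
    conn.execute("INSERT OR REPLACE INTO apod VALUES (?, ?, ?)", row)
    conn.commit()

def run_pipeline(conn: sqlite3.Connection, fetch) -> None:
    load(conn, transform(extract(fetch)))
```

In the Airflow version, each function maps to a task and the DAG's schedule handles the daily run.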
This project develops a Photo Critique App using Streamlit and Google’s Gemini-1.5-Flash-8B model. It involves setting up a virtual environment, securely storing API keys, and configuring the app to interact with the Gemini model for generating critiques based on uploaded images. The Streamlit interface allows users to run and test different versions of the app seamlessly.
View code on GitHub | Watch my YouTube tutorial | View Blog on Medium |
This project demonstrates how I designed and implemented a Cloud Connections Game as part of the AWS Game Builder Challenge Hackathon 2024, inspired by the NYT Connections game. It features a backend that manages the game logic, dynamic word selection, and validation. Built with Python, the game supports scalable data integration through S3 for loading word categories dynamically. The workflow includes category creation, word randomization, and player interaction, resulting in a fun gaming experience, powered by Streamlit.
View code on GitHub | View Blog on DevPost |
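The board-building and guess-validation logic at the heart of the game can be sketched in a few lines. The category data below is illustrative only (the real app loads categories dynamically from S3), and the function names are my own.

```python
import random

# Toy version of the core game loop: build a shuffled 16-word board
# from four categories and validate a guess of four words.

CATEGORIES = {
    "Compute": ["EC2", "Lambda", "ECS", "Fargate"],
    "Storage": ["S3", "EBS", "EFS", "Glacier"],
    "Database": ["RDS", "DynamoDB", "Aurora", "Redshift"],
    "Networking": ["VPC", "Route53", "CloudFront", "ELB"],
}

def make_board(categories: dict, seed=None) -> list:
    # Flatten all category words and shuffle them into one board.
    words = [w for group in categories.values() for w in group]
    random.Random(seed).shuffle(words)
    return words

def check_guess(categories: dict, guess: list):
    # A guess is correct only if all four words belong to one category.
    for name, words in categories.items():
        if set(guess) == set(words):
            return name
    return None
```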
This application provides an interactive Q&A platform for users to ask questions about the Fuji X-S20 camera, leveraging the power of Gemini Flash API and Retrieval Augmented Generation (RAG) to deliver real-time, concise and accurate answers.
View code on GitHub | View Blog on Medium |
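The retrieval half of RAG can be illustrated without any external services. This sketch uses simple word overlap to rank manual chunks against a question, whereas the real pipeline would use embeddings and then stuff the top chunk into the Gemini prompt (the LLM call is omitted here); the sample chunks are made up for illustration.

```python
# Minimal retrieval sketch for RAG: score each document chunk by word
# overlap with the question and return the top-k chunks that would be
# injected into the LLM prompt as context.

def score(question: str, chunk: str) -> int:
    q_words = set(question.lower().split())
    return len(q_words & set(chunk.lower().split()))

def retrieve(question: str, chunks: list, k: int = 1) -> list:
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:k]

# Illustrative chunks (not actual manual text).
chunks = [
    "The X-S20 battery lasts around 750 frames per charge.",
    "Film simulation modes include Provia and Velvia.",
]
```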
This project is an end-to-end machine learning application that predicts student performance based on various input features. It utilizes Flask for the web interface and Scikit-Learn for model development, along with a complete CI/CD workflow using GitHub Actions, Docker, and AWS ECR. The application encompasses data ingestion, transformation, and a prediction pipeline, resulting in an effective model for forecasting student grades.
This project leverages AWS SageMaker to classify mobile price ranges using a dataset with multiple features like battery power, RAM, and more. The data is split for training and testing, and a Random Forest model is trained to predict price categories. Finally, the model is deployed via an endpoint for real-time inference, showcasing end-to-end machine learning, all performed with AWS SageMaker.
This project utilizes AWS Lambda and the Stability AI Stable Diffusion model from AWS Bedrock to generate animated movie posters based on user-provided text prompts. The generated images are stored in an S3 bucket, and a pre-signed URL is returned for easy access. The implementation includes setting up an API Gateway for user interaction and testing with Postman for seamless functionality.
Generated image for the prompt “Unicorn in the style of Dr. Seuss”
This project utilizes the Wine Quality dataset to predict quality with an ElasticNet regression model, with performance evaluated using RMSE, MAE, and R² metrics. Experiment tracking and model artifacts are managed with MLflow on a remote AWS server, with S3 used for secure and scalable storage.
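For concreteness, the three evaluation metrics used above can be written out in plain Python (the numbers below are toy values, not the project's results).

```python
import math

# RMSE, MAE, and R² written out from their definitions.

def rmse(y_true, y_pred):
    # Root mean squared error: sqrt of the average squared residual.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    # Mean absolute error: average absolute residual.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    # R²: 1 minus residual sum of squares over total sum of squares.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Toy example values.
y_true = [5.0, 6.0, 7.0, 4.0]
y_pred = [5.2, 5.8, 7.1, 4.3]
```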
My active cloud certifications include: