Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Understanding Interpretability!
Published:
Cosmos by Carl Sagan!
Published:
Life often feels like a never-ending to-do list. We rush through days, weighed down by deadlines, obligations, and the noise of modern living. But what if the antidote to this chaos isn't found in productivity hacks or escapism, but in something far older and grander? Cosmos is Sagan's love letter to curiosity, a reminder that we're all made of stardust, and a guide to finding awe in the ordinary.
Understanding Quantization!
Published:
Massive models contain billions of parameters. These models are incredibly powerful, but their size comes with significant challenges. This is where quantization steps in as a crucial optimization technique.
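The core idea behind the quantization post's teaser can be shown in a few lines. This is a minimal sketch of symmetric 8-bit quantization (illustrative only, not any particular library's API): map floating-point weights onto the int8 range and back, trading a small reconstruction error for a 4x smaller representation.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric quantization: map the largest-magnitude weight to +/-127.
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 codes.
    return q.astype(np.float32) * scale

w = np.linspace(-1.0, 1.0, 5).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(np.abs(w - w_hat).max())  # small reconstruction error
```

Each float32 weight (4 bytes) is stored as a single int8 byte plus one shared scale, which is why quantization shrinks multi-billion-parameter models so dramatically.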
Understanding LoRA!
Published:
We live in an era of massive AI models. Think Llama or Stable Diffusion, models trained on vast amounts of data, possessing incredible general capabilities. But often, we want to adapt these powerhouses for specific needs: making a language model better at writing legal documents, generating medical reports, or even just mimicking a particular artistic style for image generation.
Understanding CLIP!
Published:
In recent years, the integration of computer vision and natural language processing has led to amazing advancements. One such innovation is OpenAI's CLIP (Contrastive Language-Image Pre-training), which combines visual and textual understanding to tackle a variety of tasks without needing task-specific training. Here's a comprehensive yet simplified exploration of CLIP's inner workings, features, and transformative potential.
Understanding ResNets!
Published:
Deep Residual Networks (ResNets) are a deep learning architecture designed to address the challenges of training very deep neural networks. The architecture is pivotal in advancing image recognition and other tasks by enabling deeper networks to achieve better performance without succumbing to optimization difficulties. Here's a breakdown of ResNets.
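The mechanism the ResNet post covers, the residual (skip) connection, fits in one line: a block outputs x + F(x), so its layers only learn a correction to the identity. A minimal NumPy sketch (F here is a stand-in for the block's conv layers):

```python
import numpy as np

def residual_block(x, f):
    # Skip connection: add the block's transformation to its input,
    # so gradients can flow through the identity path unimpeded.
    return x + f(x)

x = np.ones(3)
out = residual_block(x, lambda v: 0.1 * v)  # toy F in place of conv layers
print(out)  # [1.1 1.1 1.1]
```

Because the identity path is always present, stacking many such blocks cannot make the network worse than a shallower one, which is what lets ResNets go very deep without the optimization difficulties mentioned above.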
Evolution and Magic of Attention!
Published:
TLDR;
- Transformers use attention to process entire sequences simultaneously, bypassing the limitations of sequential RNNs.
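The claim above, that attention processes the whole sequence at once, can be sketched as scaled dot-product attention (a minimal NumPy sketch; the function name and shapes are illustrative): every query is compared against every key in a single matrix product, with no sequential recurrence.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # One matrix product scores all query-key pairs simultaneously,
    # which is what removes the RNN's step-by-step bottleneck.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns each row of scores into weights over positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))  # 4 tokens, embedding dim 8
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (4, 8)
```

In a real Transformer, Q, K, and V are learned linear projections of the token embeddings and the computation is repeated across multiple heads, but the parallel all-pairs comparison shown here is the core of it.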
portfolio
Vision Transformers (ViTs)
And what I learned while implementing them!
VGG16 and Transfer Learning
How I used them to classify medical images!
CLIP Implementation
Overview of CLIP to understand what's going on under the hood!
publications
Paper Title Number 1
Published in Journal 1, 2009
This paper is about the number 1. The number 2 is left for future work.
Recommended citation: Your Name, You. (2009). "Paper Title Number 1." Journal 1. 1(1).
Download Paper | Download Slides
Paper Title Number 2
Published in Journal 1, 2010
This paper is about the number 2. The number 3 is left for future work.
Recommended citation: Your Name, You. (2010). "Paper Title Number 2." Journal 1. 1(2).
Download Paper | Download Slides
Paper Title Number 3
Published in Journal 1, 2015
This paper is about the number 3. The number 4 is left for future work.
Recommended citation: Your Name, You. (2015). "Paper Title Number 3." Journal 1. 1(3).
Download Paper | Download Slides
Paper Title Number 4
Published in GitHub Journal of Bugs, 2024
This paper is about fixing template issue #693.
Recommended citation: Your Name, You. (2024). "Paper Title Number 4." GitHub Journal of Bugs. 1(4).
Download Paper
talks
Talk 1 on Relevant Topic in Your Field
Published:
This is a description of your talk, which is a markdown file that can be all markdown-ified like any other post. Yay markdown!
Conference Proceeding talk 3 on Relevant Topic in Your Field
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
teaching
2015 Spring Teaching 2
1900
Direct Preference Optimization (DPO)
2024
A deep dive into DPO and its advantages over traditional RLHF