Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

Cosmos by Carl Sagan!

2 minute read

Published:

Life often feels like a never-ending to-do list. We rush through days, weighed down by deadlines, obligations, and the noise of modern living. But what if the antidote to this chaos isn’t found in productivity hacks or escapism, but in something far older and grander? Cosmos is Sagan’s love letter to curiosity, a reminder that we’re all made of stardust, and a guide to finding awe in the ordinary.

Understanding Quantization!

5 minute read

Published:

Massive models contain billions of parameters. These models are incredibly powerful, but their size brings significant challenges in memory, latency, and deployment cost. This is where quantization steps in as a crucial optimization technique.
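As a hedged sketch of the core idea (not the post's implementation), here is symmetric int8 weight quantization in plain NumPy; the helper names are made up for this example:

```python
import numpy as np

# Illustrative sketch only; function names are assumptions, not from the post.
def quantize_int8(weights: np.ndarray):
    """Symmetric int8 quantization: map float weights onto [-127, 127]."""
    scale = np.abs(weights).max() / 127.0   # one float scale per tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, scale)).max())
```

Storing int8 values plus a single float scale per tensor cuts weight memory roughly 4x versus float32, at the cost of a small rounding error.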

Understanding LoRA!

5 minute read

Published:

We live in an era of massive AI models. Think of Llama or Stable Diffusion, models trained on vast amounts of data and possessing incredible general capabilities. But often, we want to adapt these powerhouses for specific needs: making a language model better at writing legal documents, generating medical reports, or even just mimicking a particular artistic style for image generation.
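To make the adaptation idea concrete, here is a minimal sketch of the low-rank update at the heart of LoRA (the class name, shapes, and hyperparameter defaults are illustrative assumptions, not the post's code):

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA-adapted linear layer: y = x @ W + (alpha / r) * x @ A @ B.

    The pretrained weight W stays frozen; only the small factors A and B train.
    Names and defaults here are illustrative, not taken from the post.
    """
    def __init__(self, W: np.ndarray, r: int = 4, alpha: float = 8.0):
        d_in, d_out = W.shape
        self.W = W                                # frozen pretrained weight
        self.A = np.random.randn(d_in, r) * 0.01  # small random init
        self.B = np.zeros((r, d_out))             # zero init: update starts as a no-op
        self.scale = alpha / r

    def __call__(self, x: np.ndarray) -> np.ndarray:
        return x @ self.W + self.scale * (x @ self.A @ self.B)

layer = LoRALinear(np.random.randn(16, 16))
y = layer(np.random.randn(2, 16))  # identical to the frozen layer until B is trained
```

Because B starts at zero, the adapted layer initially behaves exactly like the pretrained one, and training touches only the tiny A and B factors.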

Understanding CLIP!

8 minute read

Published:

In recent years, the integration of computer vision and natural language processing has led to amazing advancements. One such innovation is OpenAI’s CLIP (Contrastive Language–Image Pre-training), which combines visual and textual understanding to tackle a variety of tasks without needing task-specific training. Here’s a comprehensive yet simplified exploration of CLIP’s inner workings, features, and transformative potential.
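As a rough sketch of the contrastive scoring at CLIP's core, assuming image and text embeddings have already been computed by the two encoders (the function name and temperature value are illustrative):

```python
import numpy as np

# Illustrative sketch; assumes precomputed embeddings from the two encoders.
def clip_scores(image_emb: np.ndarray, text_emb: np.ndarray, temperature: float = 0.07):
    """Cosine similarity between L2-normalized image and text embeddings."""
    img = image_emb / np.linalg.norm(image_emb, axis=-1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)
    logits = img @ txt.T / temperature        # (n_images, n_texts)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # per-image softmax over candidate texts

probs = clip_scores(np.random.randn(2, 512), np.random.randn(3, 512))
print(probs.shape)  # (2, 3): each row is a distribution over the three captions
```

This is how CLIP does zero-shot classification: class names become candidate captions, and the highest-scoring caption wins, with no task-specific training.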

Understanding ResNets!

4 minute read

Published:

Deep Residual Networks (ResNets) are a landmark development in deep learning, designed to address the challenges of training very deep neural networks. The architecture has been pivotal in advancing image recognition and other tasks by enabling deeper networks to achieve better performance without succumbing to optimization difficulties. Here’s a breakdown of ResNets.
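Below is a minimal NumPy sketch of the residual connection that gives ResNets their name (a simplified two-layer block with fully connected weights; real ResNets use convolutions and batch normalization):

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

# Simplified sketch: real residual blocks use convolutions and batch norm.
def residual_block(x: np.ndarray, W1: np.ndarray, W2: np.ndarray) -> np.ndarray:
    """Core ResNet idea: output = relu(F(x) + x), with an identity shortcut."""
    out = relu(x @ W1)    # first layer of the residual function F
    out = out @ W2        # second layer, no activation before the addition
    return relu(out + x)  # add the identity shortcut, then activate

x = np.random.randn(2, 8)
y = residual_block(x, np.random.randn(8, 8), np.random.randn(8, 8))
```

The identity shortcut gives gradients a direct path through the network, which is why stacking more blocks does not make optimization harder the way plain deep networks do.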

Evolution and Magic of Attention!

5 minute read

Published:

TL;DR:

  • Transformers use attention to process entire sequences simultaneously, bypassing the limitations of sequential RNNs.
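For the curious, a minimal NumPy sketch of scaled dot-product self-attention, the operation that lets every token attend to the whole sequence in parallel (shapes and names are illustrative):

```python
import numpy as np

# Illustrative sketch of scaled dot-product attention.
def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # pairwise query-key similarity
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # softmax over keys, per query
    return w @ V                        # weighted sum of values

seq = np.random.randn(5, 16)    # 5 tokens, 16-dimensional embeddings
out = attention(seq, seq, seq)  # self-attention: all 5 tokens processed at once
```

Unlike an RNN, nothing here iterates token by token; the whole sequence is handled in a few matrix multiplications.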

Portfolio

Publications

Paper Title Number 4

Published in GitHub Journal of Bugs, 2024

This paper is about fixing template issue #693.

Recommended citation: Your Name, You. (2024). "Paper Title Number 4." GitHub Journal of Bugs. 1(3).
Download Paper

Talks

Teaching