
Commit 8203375

committed: add 2025 assignment 3
1 parent 72e3b14 commit 8203375

File tree

1 file changed (+40 -27 lines)

assignments/2025/assignment3.md

Lines changed: 40 additions & 27 deletions
@@ -5,68 +5,81 @@ mathjax: true
 permalink: /assignments2025/assignment3/
 ---
 
-<span style="color:red">This assignment is due on **Tuesday, May 28 2024** at 11:59pm PST.</span>
+<span style="color:red">This assignment is due on **Friday, May 30 2025** at 11:59pm PST.</span>
 
-Starter code containing Colab notebooks can be [downloaded here]({{site.hw_3_colab}}).
+Starter code containing Colab notebooks can
+be [downloaded here](https://drive.google.com/file/d/1m4eU68YJOqsX842otWS0z8hEaBB8c3EH/view?usp=sharing).
 
 - [Setup](#setup)
 - [Goals](#goals)
-- [Q1: Image Captioning with Vanilla RNNs](#q1-image-captioning-with-vanilla-rnns)
-- [Q2: Image Captioning with Transformers](#q2-image-captioning-with-transformers)
-- [Q3: Generative Adversarial Networks](#q3-generative-adversarial-networks)
-- [Q4: Self-Supervised Learning for Image Classification](#q4-self-supervised-learning-for-image-classification)
-- [Extra Credit: Image Captioning with LSTMs](#extra-credit-image-captioning-with-lstms-5-points)
+- [Q1: Image Captioning with Transformers](#q1-image-captioning-with-transformers)
+- [Q2: Self-Supervised Learning for Image Classification](#q2-self-supervised-learning-for-image-classification)
+- [Q3: Denoising Diffusion Probabilistic Models](#q3-denoising-diffusion-probabilistic-models)
+- [Q4: CLIP and DINO](#q4-clip-and-dino)
 - [Submitting your work](#submitting-your-work)
 
 ### Setup
 
-Please familiarize yourself with the [recommended workflow]({{site.baseurl}}/setup-instructions/#working-remotely-on-google-colaboratory) before starting the assignment. You should also watch the Colab walkthrough tutorial below.
+Please familiarize yourself with
+the [recommended workflow]({{site.baseurl}}/setup-instructions/#working-remotely-on-google-colaboratory) before starting
+the assignment. You should also watch the Colab walkthrough tutorial below.
 
 <iframe style="display: block; margin: auto;" width="560" height="315" src="https://www.youtube.com/embed/DsGd2e9JNH4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
 
-**Note**. Ensure you are periodically saving your notebook (`File -> Save`) so that you don't lose your progress if you step away from the assignment and the Colab VM disconnects.
+**Note**. Ensure you are periodically saving your notebook (`File -> Save`) so that you don't lose your progress if you
+step away from the assignment and the Colab VM disconnects.
 
-While we don't officially support local development, we've added a <b>requirements.txt</b> file that you can use to setup a virtual env.
+While we don't officially support local development, we've added a <b>requirements.txt</b> file that you can use to
+set up a virtual env.
 
-Once you have completed all Colab notebooks **except `collect_submission.ipynb`**, proceed to the [submission instructions](#submitting-your-work).
+Once you have completed all Colab notebooks **except `collect_submission.ipynb`**, proceed to
+the [submission instructions](#submitting-your-work).
 
 ### Goals
 
-In this assignment, you will implement language networks and apply them to image captioning on the COCO dataset. Then you will train a Generative Adversarial Network to generate images that look like a training dataset. Finally, you will be introduced to self-supervised learning to automatically learn the visual representations of an unlabeled dataset.
+In this assignment, you will implement language networks and apply them to image captioning on the COCO dataset. Then
+you will be introduced to self-supervised learning to automatically learn the visual representations of an unlabeled
+dataset. Next, you will implement diffusion models (DDPMs) and apply them to image generation. Finally, you will explore
+CLIP and DINO, two self-supervised learning methods that leverage large amounts of unlabeled data to learn visual
+representations.
 
 The goals of this assignment are as follows:
 
-- Understand and implement RNN and Transformer networks. Combine them with CNN networks for image captioning.
-- Understand how to train and implement a Generative Adversarial Network (GAN) to produce images that resemble samples from a dataset.
+- Understand and implement Transformer networks. Combine them with CNN networks for image captioning.
 - Understand how to leverage self-supervised learning techniques to help with image classification tasks.
+- Implement and understand diffusion models (DDPMs) and apply them to image generation.
+- Implement and understand CLIP and DINO, two self-supervised learning methods that leverage large amounts of unlabeled
+data to learn visual representations.
 
 **You will use PyTorch for the majority of this homework.**
 
-### Q1: Image Captioning with Vanilla RNNs
+### Q1: Image Captioning with Transformers
 
-The notebook `RNN_Captioning.ipynb` will walk you through the implementation of vanilla recurrent neural networks and apply them to image captioning on COCO.
+The notebook `Transformer_Captioning.ipynb` will walk you through the implementation of a Transformer model and apply it
+to image captioning on COCO.
 
-### Q2: Image Captioning with Transformers
+### Q2: Self-Supervised Learning for Image Classification
 
-The notebook `Transformer_Captioning.ipynb` will walk you through the implementation of a Transformer model and apply it to image captioning on COCO.
+In the notebook `Self_Supervised_Learning.ipynb`, you will learn how to leverage self-supervised pretraining to obtain
+better performance on image classification tasks. **When first opening the notebook, go
+to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
 
-### Q3: Generative Adversarial Networks
+### Q3: Denoising Diffusion Probabilistic Models
 
-In the notebook `Generative_Adversarial_Networks.ipynb` you will learn how to generate images that match a training dataset and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data. **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
+In the notebook `DDPM.ipynb`, you will implement a Denoising Diffusion Probabilistic Model
+(DDPM) and apply it to image generation.
 
-### Q4: Self-Supervised Learning for Image Classification
+### Q4: CLIP and DINO
 
-In the notebook `Self_Supervised_Learning.ipynb`, you will learn how to leverage self-supervised pretraining to obtain better performance on image classification tasks. **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
-
-### Extra Credit: Image Captioning with LSTMs
-
-The notebook `LSTM_Captioning.ipynb` will walk you through the implementation of Long-Short Term Memory (LSTM) RNNs and apply them to image captioning on COCO.
+In the notebook `CLIP_DINO.ipynb`, you will implement CLIP and DINO, two self-supervised learning methods that leverage
+large amounts of unlabeled data to learn visual representations.
 
 ### Submitting your work
 
 **Important**. Please make sure that the submitted notebooks have been run and the cell outputs are visible.
 
-Once you have completed all notebooks and filled out the necessary code, you need to follow the below instructions to submit your work:
+Once you have completed all notebooks and filled out the necessary code, you need to follow the below instructions to
+submit your work:
 
 **1.** Open `collect_submission.ipynb` in Colab and execute the notebook cells.
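For the self-supervised pretraining theme in Q2, the following is a minimal sketch of a SimCLR-style contrastive (NT-Xent) loss in PyTorch. It assumes two augmented views per image and projection-head outputs of shape `(N, D)`; the function name and batch layout are illustrative, not the notebook's actual API.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss over a batch of N images.

    z1, z2: (N, D) projection-head outputs for two augmented views of the
    same images. Each view's positive is its counterpart; the remaining
    2N - 2 views in the batch act as negatives.
    """
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)   # (2N, D)
    sim = z @ z.t() / temperature                          # cosine similarities
    sim.fill_diagonal_(float("-inf"))                      # drop self-similarity
    # Row i's positive sits at i + N (first half) or i - N (second half).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```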

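For Q3, a sketch of the DDPM training objective: sample a timestep, corrupt a clean image with the closed-form forward process `q(x_t | x_0)`, and regress the network's noise prediction onto the true noise. The linear beta schedule and the `model(x_t, t)` signature are assumptions for illustration, not the notebook's interface.

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative products, one per timestep

def ddpm_loss(model, x0):
    """Simple epsilon-prediction DDPM loss for a batch of clean images x0."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    eps = torch.randn_like(x0)
    a_bar = alpha_bars.to(x0.device)[t].view(b, 1, 1, 1)
    # Closed-form forward process: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    return F.mse_loss(model(x_t, t), eps)
```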
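For Q4, a sketch of the symmetric contrastive objective at the core of CLIP: matched image and caption embeddings sit on the diagonal of an N x N similarity matrix and are classified against all other pairings in the batch (DINO's self-distillation objective is not shown). The encoders, feature shapes, and temperature are assumed for illustration.

```python
import torch
import torch.nn.functional as F

def clip_loss(image_features, text_features, temperature=0.07):
    """Symmetric InfoNCE loss over N paired image/text embeddings."""
    img = F.normalize(image_features, dim=-1)              # (N, D)
    txt = F.normalize(text_features, dim=-1)               # (N, D)
    logits = img @ txt.t() / temperature                   # (N, N) similarities
    targets = torch.arange(img.shape[0], device=img.device)
    loss_i2t = F.cross_entropy(logits, targets)            # image -> text
    loss_t2i = F.cross_entropy(logits.t(), targets)        # text -> image
    return 0.5 * (loss_i2t + loss_t2i)
```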