I like to have fun.
I like to go out on Fridays! I like to go out on the weekends!
For a long time, I have denied myself this truth. Whether it was due to school work, no plans, or just not feeling it, I have spent many a Friday night or weekend wondering what to do. But from now on, I will be honest with myself and try to actively plan for fun on Friday nights.
[Read More]
Learning Chinese
Nearly a decade and a half after I stopped attending Chinese Saturday school, I am making a conscious and concerted effort to learn the *belle* language.
As an analytical thinker, I am always interested in seeing the big picture quickly and in identifying high-information patterns that can be applied easily and repeatedly, both within and across domains. When you learn Chinese, though, the traditional method is to learn character by character, through rote memorization.
[Read More]
Expectation Ambiguity
So it’s been a while since I’ve updated this blog. The short (and complete) story is that I’ve been rushing final projects for school. There might be a new article on that (later).
Now that I am reviewing some of my SOCMLx notes (still need to scribe some stuff up), I want to revisit a topic that has always troubled me. Specifically:
$$ E[X] \quad \text{vs} \quad E[XY] $$
In both of these expectations, what does the expectation refer to?
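One way to make the question concrete (my own notational sketch, not necessarily how the full post resolves it) is to subscript each expectation with the distribution it is taken over:

$$ E_{x \sim p(x)}[X] = \sum_x x \, p(x) \qquad \text{vs} \qquad E_{(x,y) \sim p(x,y)}[XY] = \sum_{x,y} x y \, p(x,y) $$

Written this way, the first expectation is over the marginal of $X$ and the second over the joint of $X$ and $Y$, which is one place the ambiguity can hide.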
[Read More]
The Gap
The Gap is what lies between what you learn in your DS and Algos course and how you actually implement it in Python/Leetcode.
Graphs are a big one.
But even bigger are hashmaps.
In general, when we have $h(k_1) = h(k_2)$ we cannot conclude $k_1 = k_2$. But in Python we can (kind of).
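A quick illustration of the first half (using a well-known CPython quirk, not anything from the post itself): two distinct keys can share a hash, and Python's dict only treats them as the same key if they also compare equal.

```python
# In CPython, hash(-1) is reserved internally, so hash(-1) == hash(-2) == -2.
k1, k2 = -1, -2
print(hash(k1) == hash(k2))   # True  -- the hashes collide
print(k1 == k2)               # False -- but the keys are different

# dict resolves the collision by falling back to __eq__ after the hash lookup,
# so both keys coexist as separate entries.
d = {k1: "minus one", k2: "minus two"}
print(d[k1], d[k2])           # minus one minus two
```

(For small ints, CPython's hash is essentially the identity, with -1 as the famous exception, which may be the "kind of" the post alludes to.)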
RL example
Here are my thoughts on trying to provide an example of RL.
- Policy gradient
- “Standing derivative”, weighted by the return
- Instead, we should do Q-learning (see the sketch below)
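For concreteness, here is a minimal tabular Q-learning sketch of my own; it assumes a Gymnasium-style environment with discrete observation and action spaces (reset returns `(obs, info)`, step returns a 5-tuple), and none of the names come from the post.

```python
import numpy as np

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    # Q-table over (state, action) pairs, assuming discrete spaces.
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for _ in range(episodes):
        s, _ = env.reset()
        done = False
        while not done:
            # epsilon-greedy exploration
            a = env.action_space.sample() if np.random.rand() < eps else int(np.argmax(Q[s]))
            s_next, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated
            # one-step TD update toward r + gamma * max_a' Q(s', a')
            target = r + gamma * np.max(Q[s_next]) * (not terminated)
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```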
RL Curriculum
Finally, the pieces of RL are starting to really crystallize in my head. It only took until the end of CSC2547, and my first semester of grad school, to get it!
Here is an ordering I found useful for studying:
- http://www.cs.toronto.edu/~rgrosse/courses/csc411_f18/ to lay the framework for machine learning.
- http://www.cs.toronto.edu/~rgrosse/courses/csc421_2019/ to lay the framework for deep learning. Policy gradient and REINFORCE are reinforced here ;)
- http://karpathy.github.io/2016/05/31/rl/ to connect RL to supervised learning, and the practicalities of training it.
- https://lilianweng.
[Read More]
Active Learning
A thorny and universal open problem that continues to plague deep learning is the scarcity of labelled data. Unlabelled data can be transformed into labelled data, but the procedure is costly, labour-intensive, time-consuming, and not scalable. Active learning thus deals with picking *good* data points from the unlabelled dataset to label. This discrete choice of unlabelled data points is called a *query*, and there are several heuristics for selecting the best query, including max entropy, diversity sampling, and others.
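As a concrete illustration of the max-entropy heuristic, here is a small sketch of my own (not the post's implementation; `probs`, `k`, and the example numbers are all made up):

```python
import numpy as np

def max_entropy_query(probs, k):
    # probs: model's predicted class probabilities over the unlabelled pool,
    #        shape (num_unlabelled, num_classes); k: number of points to query.
    eps = 1e-12                                    # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(-entropy)[:k]                # indices of the k most uncertain points

# Example: query the 2 most uncertain of 4 pool points.
pool_probs = np.array([[0.90, 0.10],
                       [0.50, 0.50],
                       [0.60, 0.40],
                       [0.99, 0.01]])
print(max_entropy_query(pool_probs, k=2))          # [1 2]
```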
[Read More]
Cuda Guide
Step 1:
Check if CUDA is available, and whether the user wants to enable it.
Save the result into a config variable, e.g. `args.device`.
Step 2: Write device-agnostic code.
Use `torch.tensor((1, 2), device=args.device)`
Use `x = model().to(device=args.device)`
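Putting both steps together, a minimal sketch (assuming an argparse-style `args` and a toy `nn.Linear` model, neither of which is from the original post):

```python
import argparse
import torch
import torch.nn as nn

# Step 1: decide on a device once and stash it in the config.
parser = argparse.ArgumentParser()
parser.add_argument("--cuda", action="store_true", help="enable CUDA if available")
args = parser.parse_args()
args.device = torch.device("cuda" if args.cuda and torch.cuda.is_available() else "cpu")

# Step 2: device-agnostic code -- everything is created on args.device.
x = torch.tensor((1, 2), device=args.device)
model = nn.Linear(2, 1).to(device=args.device)
y = model(x.float())
```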
That’s it!
Blog Post of the Day
This is an exceptional blog post!
https://thegradient.pub/frontiers-of-generalization-in-natural-language-processing/
Hacker's Guide to Multidimensional Arrays
Let’s say we have an array of shape 3 x 1 x 6 x 8. Then we can quickly make some assertions and inferences:
1. The “smallest”-level elements of this MD array are size 8.
When we do broadcasting, we are just doing expand_dims operations, and then working on two huge cubes that are the same size!
When we do unsqueezing, we can get a real flavour of how Torch/numpy actually stores the values in the underlying memory addresses.
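A quick NumPy sketch of both points (the 3 x 1 x 6 x 8 shape is from the post; the second array and all variable names are made up for illustration):

```python
import numpy as np

# Broadcasting as implicit expand_dims: shapes are aligned from the right,
# and size-1 axes are (virtually) expanded until both operands match.
a = np.arange(3 * 1 * 6 * 8).reshape(3, 1, 6, 8)
b = np.arange(5 * 1 * 8).reshape(5, 1, 8)
# a: (3, 1, 6, 8) -> (3, 5, 6, 8)
# b:    (5, 1, 8) -> (3, 5, 6, 8)
print((a + b).shape)              # (3, 5, 6, 8)

# Unsqueezing / expand_dims never copies data: it hands back a new view of
# the same flat buffer, visible only in the shape and strides.
v = np.arange(6)
u = np.expand_dims(v, axis=0)     # shape (1, 6)
print(v.shape, v.strides)         # (6,)   (8,)     -- on a 64-bit int build
print(u.shape, u.strides)         # (1, 6) (48, 8)
print(np.shares_memory(v, u))     # True
```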
[Read More]