Amazon Interview


Addicted to interviewing.

Just finished the Amazon interview. It was a shadowing interview, so there was a bonus interviewer!

Surprisingly, it was not a coding challenge! Instead they asked (quite a few) questions about my machine learning knowledge.

Things I did well: Answered the first LP question nicely (overall): "Tell me about a time when you went outside your scope to improve something."

Should have mentioned I got an award for that: the Overseas Taiwan Youth Outstanding Award. We need to MOTIVATE what the start of the problem was.

Things I could have improved upon:

- Answered the second LP question in a way that was not super good or the correct way. It was about describing a time when we switched direction. The question is intentionally vague, and you should just take it as an opportunity to run with it! Essentially, I think they wanted me to discuss whether there was an A-ha! moment that led me to discovering something. **You ALWAYS want to talk about the results!**
- The STAR method actually works. You MUST motivate why you did something.
- (It kind of doesn't matter exactly what the question is; always talk about leadership, projects, and results.)
- Just end your answer decisively. Don't kind of taper off. If you want to say more, you can say more! But 99% of the time, say less.

One thing I want to take away is that it is ALWAYS acceptable to start off by saying something. It is an open conversation and dialogue: you can just keep talking, pivot, and take natural segues. Having said that, there are “better” initializations that will make the entire process much more pleasant.

Technical portion:

1. Difference between GRU, LSTM, and RNN
2. How does a vanilla RNN work?
3. How would a POS tagger work?
   1. CRF question
4. How does attention work?
5. Boosting and bagging
6. Regularization: L1 vs. L2
7. Regularization in a neural network
   1. Dropout, BatchNorm. Turn off dropout at inference time!
8. Logistic regression vs. SVM. The training process is also different: no longer MSE, but cross entropy (https://ml-cheatsheet.readthedocs.io/en/latest/logistic_regression.html#cost-function). Would have been great if they dug more into this; a quick sketch is below.
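Since I would have liked to dig deeper into the logistic regression cost function, here is a minimal NumPy sketch of what I mean: binary cross-entropy and its gradient, plugged into a plain gradient-descent loop. The synthetic data, learning rate, and step count are made up purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(y, p, eps=1e-12):
    # Binary cross-entropy: -mean(y*log(p) + (1-y)*log(1-p))
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Made-up synthetic data: 200 points, 2 features, roughly linearly separable labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)
    # The gradient of cross-entropy w.r.t. the logits is just (p - y),
    # which is why it pairs so cleanly with the sigmoid (unlike MSE).
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

print("final loss:", cross_entropy(y, sigmoid(X @ w + b)))
```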

Variance: does averaging reduce it by 1/n or by 1/sqrt(n)? (The variance of the mean of n i.i.d. samples drops as 1/n, so its standard deviation drops as 1/sqrt(n); quick check below.)
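A tiny NumPy check of that 1/sqrt(n) scaling (the sigma and sample sizes here are arbitrary); this is also the intuition behind bagging's variance reduction:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0  # std of each individual draw

for n in (10, 100, 1000):
    # Empirical std of the sample mean, estimated over 10,000 repeated experiments.
    means = rng.normal(0.0, sigma, size=(10_000, n)).mean(axis=1)
    print(f"n={n:4d}  empirical std of mean = {means.std():.3f}  sigma/sqrt(n) = {sigma/np.sqrt(n):.3f}")
```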

What I liked about the interview:

Learnt some stuff (CRFs!), and I was impressed by the depth of questioning in the machine learning section. Liked how he asked more about the L1/L2 regularization question, and I was able to actually dig in and get at the innards!
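For the record, the "innards" I mean are roughly this: the L2 penalty's gradient is proportional to the weight, so it shrinks everything but rarely to exactly zero, while the L1 penalty pulls with constant magnitude, which is what drives small weights to exactly zero (sparsity). A toy sketch of one update under each penalty; the weights, lambda, and learning rate are made up:

```python
import numpy as np

w = np.array([0.05, -0.8, 2.0])  # made-up weights
lam, lr = 0.1, 0.5

# L2: gradient of (lam/2) * ||w||^2 is lam * w  ->  proportional shrinkage.
w_l2 = w - lr * lam * w

# L1: handled with a proximal / soft-thresholding step, since |w| is not
# differentiable at 0; the constant-size pull zeroes out small weights.
w_l1 = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

print("L2 step:", w_l2)   # every weight shrunk by 5%
print("L1 step:", w_l1)   # the 0.05 weight lands exactly at 0
```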

Also good follow-up questions:

1. What you do on your projects.
2. Basic research/publishing.
3. What is the team composition like?
4. Lab 126: it was just a regular old startup that was subsumed into Amazon (acqui-hired).

Final word: Overall 7.5/10. There were some hiccups with the interview, especially the LP portion. But a) I was not expecting that (the interview was remixed, meaning I am doing another stage 1 interview), and b) I got the L1 ball question correct! (TBH, even though it was the correct answer, it wasn't really relevant; i.e. it didn't matter, it was just a minor victory point.)