This may be slightly OT, but my layperson perspective is that, for a pre-eminent STEM university, MIT always seems to lag behind its rivals in deep learning: the latest ML advances were spearheaded at Stanford/Silicon Valley outfits, the Montreal/Toronto academic communities, Caltech, and to some extent in the UK, whereas MIT's background should have made it one of the leaders in the field. Am I overlooking something, or am I just grossly misinformed? Did MIT culturally stay married for a while to the Minsky school of deterministic AI vs. the Big Data statistical approach?
Source/Disclaimer: I'm a graduate student in the Boston area who works in Computer Engineering. I'm certain I'm overlooking some important info that others should add.
You're right in saying that Stanford, UCB, UW, and other schools have had a much stronger track record of producing deep learning models that have revolutionized certain fields (image classification, segmentation, NLP, etc.). I think MIT has hired some seriously talented researchers who have chosen to invest their time in different avenues of research that align more closely with the "engineering" side of deep learning.
For example, Eyeriss[1] kickstarted the AI accelerator race. Halide[2] is a DSL+runtime used as the basis in a lot of deep learning compilation tools, like Tensor Comprehensions[3]. I don't think you're grossly misinformed, I just think that while other schools have invested in the theory, MIT is betting on a level of abstraction that's a bit lower.
Citations tend to be a better indicator of impact. There is some interesting analysis here [1]. MIT has the most citations, and they've also published the most papers by far. If you consider citations per paper, MIT falls in the middle of the pack; Toronto is an outlier with a very high number of citations per paper.
NeurIPS is a big conference covering lots of topics, though. All of this says relatively little about MIT's true impact on deep learning in the last eight years.
Looks neat. Personally bummed that it goes with Tensorflow, though I guess that may be related to the course being sponsored in part by Google. Pretty much all the latest research is being published in Pytorch and even OpenAI switched to Pytorch recently.
Both PyTorch and Tensorflow are used extensively. If you look at last year's most popular models - GPT-2 and StyleGAN - they are both Tensorflow. When I check on Scholar, there are many, many results for both Tensorflow and PyTorch from 2019. The most recent stuff I've been working on (because that's what it was published in) has been in Jax.
PyTorch is one of the biggest frameworks but it's unreasonable to suggest it is the only one that matters.
GPT-2 was developed by OpenAI, and they recently switched to Pytorch themselves for similar reasons [1]. I didn't claim that all research that matters is happening in Pytorch, but it is very true that most non-Google research is [2]. Having used both, Pytorch has been a far better experience for me, and lately most of the reference implementations I care about are released in Pytorch, which makes it very easy to experiment with them and pull them into my own code base if needed.
Also, for an intro class, Pytorch would have been a better choice imo, because it has a simpler and cleaner API. In my experience, it was a lot easier to get up and running with Pytorch compared to Tensorflow.
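To make the "up and running" point concrete, here's roughly what a first PyTorch training script looks like - a minimal sketch with made-up layer sizes, arbitrary optimizer settings, and random data, not anything from the course:

    # Tiny fully-connected net trained on random data, just to show
    # how little boilerplate a basic PyTorch training loop needs.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(64, 10)           # 64 fake samples, 10 features each
    y = torch.randint(0, 2, (64,))    # fake binary labels

    for epoch in range(5):
        optimizer.zero_grad()         # clear gradients from the previous step
        loss = loss_fn(model(x), y)   # forward pass + loss
        loss.backward()               # backprop
        optimizer.step()              # update weights
        print(epoch, loss.item())

The eager, define-by-run style is the whole appeal for beginners: you can drop a print or a debugger breakpoint anywhere in the loop and see real tensors, which was much harder in graph-mode Tensorflow 1.x.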
https://github.com/melling/MathAndScienceNotes/blob/master/m...
Direct link to 2018-2019: http://www.youtube.com/watch?v=5v1JnYv_yWs&list=PLtBw6njQRU-...
2017: https://www.youtube.com/playlist?list=PLkkuNyzb8LmxFutYuPA7B...