Yeah, a bunch. I worked at Amazon, but I'm sure it's similar everywhere else. Basically, the scientists were way more familiar with notebooks, so they'd code their models there, but when it came time to deploy, we needed a proper Python package that we could store in git, build, test, run in a container, integrate with data engineering tools, and later deploy on some internal tools and AWS SageMaker. So we'd usually end up converting the notebook to a Python package once it was ready, which worked OK, but you could tell the scientists were more comfortable in notebooks.
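To make that concrete, here's a minimal sketch of the kind of refactor I mean (the function and values are hypothetical, just for illustration): the body of a notebook cell becomes an importable, documented function, which is what makes it possible to unit-test, package, and run in a container.

```python
# Hypothetical example: logic that lived in a notebook cell,
# refactored into a function so it can live in a package and be tested in CI.

def normalize(values):
    """Scale a sequence of numbers to the 0-1 range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Avoid division by zero when all values are identical.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

if __name__ == "__main__":
    print(normalize([2, 4, 6]))  # → [0.0, 0.5, 1.0]
```

Nothing fancy, but once code is in this shape it slots straight into git, pytest, and a build pipeline, whereas the same logic buried in a cell doesn't.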
Funnily enough, there were a bunch of internal MLOps-type frameworks there (at least 4) that tried to let scientists deploy to production without engineers, but they all failed or semi-failed. I've heard Netflix made it work, and I follow MLflow, so I'd be curious what sticks here.
I don't work in the space anymore, but it was interesting; it could definitely use more standardized tooling.
That totally resonates with me! I spent six years working as a data scientist, and notebooks make it a lot simpler to explore and interact with data, so I completely understand why my data science peers stick with them.
Having said that, the challenge now is to hit a sweet spot between keeping Jupyter's interactive experience and providing features that help data scientists write modular code. That's where most frameworks fail, so we want to keep our eyes open and get feedback from both scientists and engineers to develop something that works for everyone.