This is a follow-on from my investigation into using AWS SageMaker as a cloud-based replacement for my current approach to generating text from a machine learning model. Up until this point I've been running torch-rnn on a local server. You can follow part 1 and part 2 of my progress so far.
In summary, here’s what I learned this week:
- Some Python modules are platform-specific, because they include compiled binaries built for the OS they're installed on. That means if you install a module on macOS, you can't just zip it up as a dependency in a .zip deployment package for a Lambda, which runs on Linux, as the compiled code won't be compatible (see the first sketch after this list)
- The maximum size for an AWS Lambda deployment package uploaded directly is 50MB (zipped). Zipping up what I've built so far (only a minimal script, but relying on a number of modules) I've got a 500MB zip file. Clearly that's far too large to deploy as a Lambda (there's a quick size check sketched below)
- Following some suggestions here, there are Python frameworks (such as Zappa) that help build and deploy Python-based AWS Lambdas and address some of these module and packaging issues (see the Zappa example below). Clearly I've got some learning to do to get this to work 🙂
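To illustrate the first point, here's a minimal sketch of how you can spot platform-specific dependencies in a package directory. The `lambda_package` directory name is just a placeholder for wherever you've pip-installed your modules, and the exact suffixes printed will vary with your Python version and OS:

```python
import importlib.machinery
import pathlib

# Compiled extension modules carry an OS-specific filename suffix,
# which is why binaries built on macOS won't load on Lambda's Linux
# runtime. On macOS this prints something like
# ['.cpython-36m-darwin.so', '.abi3.so', '.so'].
print(importlib.machinery.EXTENSION_SUFFIXES)

# Scan a local dependency directory (placeholder name) for compiled
# extensions; any hits were built for the OS they were installed on.
deps_dir = pathlib.Path("lambda_package")
for path in deps_dir.rglob("*.so"):
    print(f"platform-specific binary: {path}")
```

A common workaround I've seen suggested is to pip-install the dependencies inside an Amazon Linux Docker container, so the compiled binaries match the environment Lambda actually runs on.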
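And a quick sanity check for the second point. The filename here is a placeholder for whatever your packaging step produces:

```python
import os

# Direct .zip uploads to Lambda are capped at 50MB; larger packages
# have to go via S3, and even then the unzipped size is limited.
LAMBDA_DIRECT_UPLOAD_LIMIT_MB = 50

size_mb = os.path.getsize("deployment.zip") / (1024 * 1024)  # placeholder path
status = "within the limit" if size_mb <= LAMBDA_DIRECT_UPLOAD_LIMIT_MB else "too large"
print(f"deployment.zip is {size_mb:.0f}MB: {status}")
```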
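Finally, on Zappa: I haven't got this working yet, but based on its documentation the configuration lives in a `zappa_settings.json` file along these lines. Every value below is a placeholder rather than my actual setup, and `app_function` points at your app's entry point (for web apps, a WSGI application object):

```json
{
    "dev": {
        "project_name": "text-generator",
        "runtime": "python3.6",
        "aws_region": "us-east-1",
        "s3_bucket": "my-zappa-deployments",
        "app_function": "my_app.app"
    }
}
```

From what I've read, Zappa then handles building the package with Linux-compatible dependencies where it can, and uploads it via S3, which should sidestep both of the problems above.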