I decided I wanted to run the exercises from the fast.ai MOOC on my local computer. The recommended approach is to run everything on a t2 AWS instance to make sure everything is set up correctly and then run the final product on a p2 AWS instance. While it still makes sense to run your final job on something with a lot of GPU horsepower (like an AWS p2 instance or Floydhub), I didn't particularly feel like using an AWS instance with no GPU just to set up the job. Hence the quest to set up my home machine - a MacBook Pro running macOS Sierra with an NVIDIA GeForce GT 750M 2048 MB card - for this purpose.
- Install the Xcode 8.2 command line tools from Apple's site here: http://adcdownload.apple.com/Developer_Tools/Command_Line_Tools_macOS_10.12_for_Xcode_8.2/Command_Line_Tools_macOS_10.12_for_Xcode_8.2.dmg. (Note: as of this writing, CUDA 8.0 requires Xcode 8.2 on macOS Sierra. Other versions of Xcode lead to an nvcc error - nvcc is NVIDIA's CUDA compiler.)
- Make the Xcode command line tools the default: sudo xcode-select --switch /Library/Developer/CommandLineTools/
- Install CUDA 8.0 from here: https://developer.nvidia.com/cuda-downloads. Make sure you check that your particular graphics card is supported; there's a support matrix on the CUDA site.
- Install cuDNN 5.1 for CUDA 8.0 (https://developer.nvidia.com/compute/machine-learning/cudnn/secure/v5.1/prod_20161129/8.0/cudnn-8.0-osx-x64-v5.1-tgz). (Note: although higher versions of cuDNN are available, keras 1.2.2 seems to only work with cuDNN 5.x)
- Install anaconda using the anaconda installer from https://www.continuum.io/downloads.
- Install theano (pip install theano)
- Install keras v1.2.2 (pip install keras==1.2.2)
- Install kaggle-cli (pip install kaggle-cli)
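
The steps above can be sketched as a single shell session. Treat the paths and file names as assumptions based on the installers' defaults (in particular the CUDA prefix /usr/local/cuda and the exact cuDNN tarball name), and note that the final configuration step - pointing theano at the GPU and keras at the theano backend - isn't listed above but is typically needed before keras will use the GPU; the exact config keys are my best recollection, not gospel.

```shell
# Sketch of the setup steps above. Paths and tarball names are
# assumptions based on the installers' defaults -- adjust as needed.

# Point the toolchain at the Xcode 8.2 command line tools
sudo xcode-select --switch /Library/Developer/CommandLineTools/

# After running the CUDA 8.0 installer, expose the toolkit
export PATH=/usr/local/cuda/bin:$PATH
export DYLD_LIBRARY_PATH=/usr/local/cuda/lib:$DYLD_LIBRARY_PATH

# Unpack cuDNN 5.1 into the CUDA tree (tarball name may differ)
tar xzf cudnn-8.0-osx-x64-v5.1.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp cuda/lib/libcudnn* /usr/local/cuda/lib/

# Python packages (using anaconda's pip)
pip install theano
pip install keras==1.2.2
pip install kaggle-cli

# Tell theano to use the GPU (assumed ~/.theanorc keys)
cat > ~/.theanorc <<'EOF'
[global]
device = gpu
floatX = float32
EOF

# Tell keras 1.x to use the theano backend and theano-style
# dimension ordering (assumed ~/.keras/keras.json contents)
mkdir -p ~/.keras
cat > ~/.keras/keras.json <<'EOF'
{
    "backend": "theano",
    "image_dim_ordering": "th",
    "floatX": "float32",
    "epsilon": 1e-07
}
EOF
```
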
The list above is from memory and judicious use of the bash history command, so I cannot guarantee I haven't missed anything. If you happen to get stuck at some point, try looking at the script here http://files.fast.ai/files/install-gpu.sh to figure out if you can spot the missing piece. The steps below are based on that script.