Lesson 2 focused on publishing the model (putting things in production).
I approached reworking this lesson in the following way:
Note: This is the Official Thread in the Forums.
After watching the video, I revisited my old bear detector, which I created about a year ago (on the previous version of the course / based on the book). Since I had already done the training last year, I did not repeat it; instead, I first wanted to publish my old model using HuggingFace Spaces. I still remember quite vividly that publishing was quite difficult last year: I failed with Voila and finally succeeded with Heroku. Additionally, the Heroku free services will be retired later this year.
So here it is, the New Bear Detector on HuggingFaces.
All the files are located in the bear_detector subfolder, most notably the notebook which I used to generate the content that I uploaded to HuggingFace.
So I restarted the video at the point where we publish on HuggingFace, jumping into the Gradio + HuggingFace Spaces Tutorial.
In addition to the video lecture, the following things turned out to be important.
The installation of the following packages was necessary on my local machine:
pip install gradio
mamba install -c fastchan nbdev
This piece of code produces warnings:
image = gr.inputs.Image(shape=(192,192))
label = gr.outputs.Label()
Instead, the image and the label need to be instantiated like this:
image = gr.components.Image(shape=(192,192))
label = gr.components.Label()
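Putting the pieces together, the body of the app looks roughly like this. This is a sketch: the file name export.pkl and the category names are the ones from my notebook, and classify_image / format_preds are just illustrative names.

```python
# Turn fastai's predicted probabilities into the {label: probability}
# dict that gr.components.Label expects as output.
def format_preds(categories, probs):
    return dict(zip(categories, map(float, probs)))

# Sketch of the rest of the app (assumes export.pkl and these categories):
#   from fastai.vision.all import load_learner
#   import gradio as gr
#
#   learn = load_learner('export.pkl')
#   categories = ('black', 'grizzly', 'teddy')
#
#   def classify_image(img):
#       pred, idx, probs = learn.predict(img)
#       return format_preds(categories, probs)
#
#   image = gr.components.Image(shape=(192, 192))
#   label = gr.components.Label()
#   gr.Interface(fn=classify_image, inputs=image, outputs=label).launch()
```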
Exporting the app.py file no longer works the way it is shown in the video.
This piece of code does not work anymore:
from nbdev.export import notebook2script
notebook2script('app.ipynb')
Instead, this is the way to export the code (as suggested here):
import nbdev
nbdev.export.nb_export('app.ipynb', 'app')
print('Export successful')
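For the export to pick anything up, the notebook cells need nbdev directives. As a sketch (the module name app matches the export call above; cell boundaries are indicated by comments, since a notebook cannot be shown directly here):

```python
# --- first notebook cell: name the module nb_export should write (app.py) ---
#|default_exp app

# --- every cell that should end up in app.py starts with the export directive ---
#|export
categories = ('black', 'grizzly', 'teddy')

# Cells without #|export (e.g. training or experimentation cells)
# stay in the notebook and are not copied into app.py.
```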
Note: Additionally, I noticed that the notebook should reside in a folder called nbs - not sure if that is still required, I will test next time.
You also need to create a requirements.txt file with the following content:
fastai
scikit-image
First I cloned my repo:
git clone https://huggingface.co/spaces/chrwittm/bear-detector
The first attempt to upload my .pkl file was not successful, because it is a binary file:
remote: -------------------------------------------------------------------------
remote: Your push was rejected because it contains binary files.
remote: Please use https://git-lfs.github.com/ to store binary files.
remote: See also: https://hf.co/docs/hub/repositories-getting-started#terminal
remote: -------------------------------------------------------------------------
remote: Offending files:
remote: - export.pkl (ref: refs/heads/main)
This kind of upload needs to be done with Git Large File Storage (LFS). To install it on Ubuntu, the following steps are necessary, as described here:
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
Afterwards, the files can be added and pushed to git:
git lfs track "*.pkl"
git add .gitattributes
git add export.pkl
git add app.py
git add blackbear.jpg
git add teddybear.jpg
git add grizzly.jpg
git commit -m "uploaded app"
git push
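Before pushing again, it is worth checking that the tracking rule actually landed in .gitattributes. The sketch below writes the expected rule by hand just to show its shape; in the real repo, git lfs track "*.pkl" creates this line for you:

```shell
# This is the rule that `git lfs track "*.pkl"` adds to .gitattributes;
# if it is missing, the push gets rejected again with the binary-files error.
printf '%s\n' '*.pkl filter=lfs diff=lfs merge=lfs -text' > .gitattributes
grep -q 'filter=lfs' .gitattributes && echo "pkl files go through LFS"
```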
Compared to the previous method with Voila / Heroku, the overall workflow was a lot easier.
Restarting the video at 1:02:13, it looks pretty straightforward, but the first topic to solve is GitHub Pages:
Fastpages is actually deprecated by now, and the new Fast.AI recommendation is Quarto. To keep things simple, I decided to go with plain GitHub Pages for now. I activated GitHub Pages for my FastAI2022 repo and created a hello-world.html, and it works :)
Once that was done, I copied the tiny-pets example, and made some slight adjustments:
Here it is: The Bear Detector on GitHub Pages
The purpose of this exercise was a closed-loop repeat:
Learnings:
Screenshots of the final results: