Recently, I have gotten a lot of requests from faculty members and students who are interested in exploring applications of deep learning but have little or no programming experience. I decided to develop a few tools to both introduce them to the concepts of deep learning and allow them to practice deep learning on text and images via a command line interface. A fair number of products have been developed in this space recently. One team has developed Prodigy, an interface for working with text data that includes a CLI for text classification. More recently, a team of developers at Uber released Ludwig, a general CLI for various types of image recognition and text classification, built on top of TensorFlow. While these tools are great, they're still relatively complex (or expensive). I developed this internal tool as an easier inroad for my co-workers at Duke.

EasyDL CLI comes in two flavors: one for image recognition (EasyDL Image CLI) and one for text classification (EasyDL Text CLI).

EasyDL Image CLI

On the image side, the user can input folders of training, validation, and test images via the command line. The application then fine-tunes a ResNet on the provided images. I included several techniques to improve accuracy, including image preprocessing and augmentation. During the training step, users are prompted to select the training and test folders, as well as the number of epochs to train for and the batch size. The number of images and classes is displayed, and the model begins training. A progress bar tracks the model's progress, and the loss and accuracy are displayed. After the model is built, the user can spin up an API from the command line to serve the model. This CLI was developed using Click. The image recognition models are built with Keras, and the API is constructed using Flask and Pillow.
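To make the fine-tuning step concrete, here is a minimal sketch of how a frozen ResNet50 backbone with a new classification head might be trained in Keras. This is an illustration, not the actual EasyDL implementation; the folder layout, augmentation settings, epochs, and batch size are all assumptions.

```python
# Sketch of fine-tuning a ResNet50 on user-supplied image folders.
# Assumed layout: one subfolder per class under each directory.
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

def build_finetune_model(num_classes, pretrained=True):
    """Frozen ResNet50 backbone with a new softmax classification head."""
    base = ResNet50(weights="imagenet" if pretrained else None,
                    include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = False  # train only the new head at first
    x = GlobalAveragePooling2D()(base.output)
    outputs = Dense(num_classes, activation="softmax")(x)
    model = Model(inputs=base.input, outputs=outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    train_dir, val_dir = "data/train", "data/valid"  # illustrative paths
    batch_size, epochs = 32, 5                       # illustrative values

    # Augment only the training split; both splits get ResNet preprocessing.
    train_gen = ImageDataGenerator(preprocessing_function=preprocess_input,
                                   horizontal_flip=True, rotation_range=15,
                                   zoom_range=0.1)
    val_gen = ImageDataGenerator(preprocessing_function=preprocess_input)
    train_flow = train_gen.flow_from_directory(train_dir, target_size=(224, 224),
                                               batch_size=batch_size)
    val_flow = val_gen.flow_from_directory(val_dir, target_size=(224, 224),
                                           batch_size=batch_size)

    model = build_finetune_model(train_flow.num_classes)
    model.fit(train_flow, epochs=epochs, validation_data=val_flow)
```

Freezing the backbone keeps training fast on small corpora; unfreezing the top few ResNet blocks for a second, lower-learning-rate pass is a common follow-up.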

Additionally, the CLI comes with another set of commands for PCA analysis. PCA is helpful for clustering images in a corpus or creating a pseudo-search engine to query for similar images. Users can run the 'closest_train' command to train a PCA model and 'closest_predict' to query it for similar images. I included this feature to allow further analysis of large corpora where the user may want to organize images without inspecting each one individually.
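The idea behind 'closest_train' / 'closest_predict' can be sketched as PCA projection followed by a nearest-neighbor lookup. The sketch below uses scikit-learn and assumes the images have already been turned into one feature vector each (e.g. flattened pixels or CNN embeddings); the function names mirror the CLI commands but are not the actual EasyDL code.

```python
# PCA-based image similarity: project feature vectors into a low-dimensional
# space at "train" time, then answer queries by nearest neighbor in that space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def closest_train(features, n_components=50):
    """Fit PCA on the corpus features and index the projected vectors."""
    pca = PCA(n_components=min(n_components, *features.shape))
    projected = pca.fit_transform(features)
    index = NearestNeighbors(n_neighbors=1).fit(projected)
    return pca, index

def closest_predict(pca, index, query):
    """Return the corpus row index of the image most similar to `query`."""
    projected = pca.transform(query.reshape(1, -1))
    _, neighbors = index.kneighbors(projected)
    return int(neighbors[0, 0])
```

Because distances are computed in the reduced space, queries stay cheap even for large corpora, at the cost of whatever variance the dropped components carried.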

After training, the CLI spins up an API that serves the model for inference. As someone interested in applied machine learning techniques, I've struggled to find great resources on serving models in production settings, and I understand that it's a sticking point in the development pipeline. I use Flask to spin up the API and serve the model. Moving forward, I would like to migrate much of this work to TensorFlow and develop the API with TensorFlow Serving. This work is still in progress, but I hope to round out both the image and text CLIs by the end of the semester. My hope is that the project will serve as a starting point for deep learning applications. TODOs for the image CLI are to allow further hyperparameter tuning and to include an object detection model like YOLO or Single Shot Detector (SSD).
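A minimal version of the Flask-plus-Pillow serving step might look like the following. The endpoint path, form field name, input size, and model path are illustrative assumptions, not the actual EasyDL API.

```python
# Serve a trained image classifier over HTTP: decode the uploaded file with
# Pillow, resize and scale it, then return the model's prediction as JSON.
import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image

def create_app(model):
    """Wrap a trained Keras-style model (anything with .predict) in a Flask API."""
    app = Flask(__name__)

    @app.route("/predict", methods=["POST"])
    def predict():
        raw = request.files["image"].read()
        image = Image.open(io.BytesIO(raw)).convert("RGB").resize((224, 224))
        array = np.asarray(image, dtype="float32")[None] / 255.0  # batch of 1
        probs = model.predict(array)[0]
        return jsonify({"class": int(np.argmax(probs)),
                        "probabilities": [float(p) for p in probs]})

    return app

if __name__ == "__main__":
    # Assumes a model saved by the training step; the path is illustrative.
    from tensorflow.keras.models import load_model
    create_app(load_model("model.h5")).run(host="0.0.0.0", port=5000)
```

Taking the model as an argument (rather than loading it at import time) keeps the app factory easy to test and makes it trivial to swap in a different trained model later.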