Hexabot NLU Engine
The Hexabot NLU (Natural Language Understanding) engine is a Python-based project that provides tools for building, training, and evaluating machine learning models for natural language tasks such as intent detection and language recognition. It also includes a REST API for inference, built using FastAPI.
Directory Structure
/run.py: The CLI tool that provides commands for training, evaluating, and managing models.
/models: Contains the different model definitions and logic for training, testing, and evaluation.
/data: Placeholder for datasets used during training and evaluation.
/experiments: Placeholder for stored models generated during training.
/data_loaders: Classes that define the way to load datasets to be used by the different models.
/main.py: The FastAPI-based REST API used for inference, exposing endpoints for real-time predictions.
Setup
The only dependencies are Python 3.11.6, virtualenv, and TensorFlow. Start developing your new model on top of this workflow by cloning this repository:
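As a sketch of the initial setup (the repository URL below is a placeholder, not the actual location):

```shell
# Placeholder URL: substitute the actual repository location.
git clone <repository-url> hexabot-nlu
cd hexabot-nlu
# Create and activate a virtualenv for Python 3.11.6.
virtualenv venv --python=python3.11
source venv/bin/activate
```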
Directory structure
data: gitignore'd; place datasets here.
experiments: gitignore'd; trained models are written here.
data_loaders: write your data loaders here.
models: write your models here.
Usage
Check models/mlp.py and data_loaders/mnist.py for fully working examples.
You should run source env.sh on each new shell session. This activates the virtualenv and creates a convenient alias for run.py:
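The contents of env.sh are roughly of this shape (a sketch assuming a standard virtualenv layout and alias name; check the actual file in the repository):

```shell
# Assumed contents, not verbatim from the repository:
source venv/bin/activate        # activate the project virtualenv
alias run='python run.py'       # convenience alias for the CLI
```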
Most routines involve running a command like this:

Examples:

where the model and data_loader args are the module names (i.e., the file names without the .py extension). Such a command would run the Keras model's fit method, but it could be any custom method as long as it accepts a data loader instance as an argument.
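As an illustrative assumption about the CLI shape (argument names and order may differ in the actual run.py):

```shell
# Hypothetical invocation: method, save_dir, model, data_loader, hyperparameters.
run fit myexperiment mlp mnist batch_size=32 learning_rate=0.01
```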
If save_dir already has a model:
Only the first two arguments are required and the data loader may be changed, but respecifying the model is not allowed; the existing model will always be used.
Hyperparameter values specified on the command line WILL override previously used ones (for this run only, not on disk).
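For example, resuming from an existing save_dir with an overridden hyperparameter might look like this (hypothetical CLI shape and names):

```shell
# The model argument is omitted because save_dir already contains a model;
# learning_rate overrides the stored value for this run only.
run fit myexperiment mnist learning_rate=0.001
```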
tfbp.Model
Models follow much the same rules as Keras models, with slight differences: the constructor's arguments should not be overridden (since the boilerplate code handles instantiation), and the save and restore methods don't need any arguments.
You can also write your own training loops à la PyTorch by overriding the fit method, or write a custom method that you can invoke via run.py simply by adding the @tfbp.runnable decorator. Examples of both are available in models/mlp.py.
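To illustrate the contract, here is a minimal, self-contained sketch of how a runnable decorator and a dispatching base class could fit together. This is a simplified stand-in for illustration, not the actual tfbp code:

```python
# Simplified stand-in for the @tfbp.runnable pattern; the real tfbp
# module in this repository may differ in details.

def runnable(method):
    """Mark a model method as invocable from run.py."""
    method._runnable = True
    return method

class Model:
    """Simplified base: run.py instantiates the model and dispatches
    to any method marked with @runnable."""
    def run(self, method_name, data_loader):
        method = getattr(self, method_name)
        if not getattr(method, "_runnable", False):
            raise ValueError(f"{method_name} is not marked @runnable")
        return method(data_loader)

class MLP(Model):
    @runnable
    def fit(self, data_loader):
        # A custom training loop would go here; the data loader
        # supplies the batches. We fake a trivial computation.
        return sum(x * 2 for x in data_loader.batches())

class ToyLoader:
    def batches(self):
        return [1, 2, 3]

print(MLP().run("fit", ToyLoader()))  # → 12
```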
tfbp.DataLoader
Since model methods invoked by run.py receive a data loader instance, you may name your data loader methods whatever you wish and call them in your model code. A good practice is to make the data loader handle anything that is specific to a particular dataset, which allows the model to be as general as possible.
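The pattern can be sketched as follows; the class and method names below are illustrative assumptions, not the real tfbp.DataLoader API:

```python
class MNISTLoader:
    """Illustrative loader: all dataset-specific details (paths, shapes,
    normalization) live here, so the model stays dataset-agnostic.
    Names and methods are assumptions, not the actual tfbp API."""
    def __init__(self, batch_size=32):
        self.batch_size = batch_size

    def train_batches(self):
        # A real loader would yield (images, labels) batches read from
        # disk; here we fake two small batches for illustration.
        yield ([0.0] * self.batch_size, [0] * self.batch_size)
        yield ([1.0] * self.batch_size, [1] * self.batch_size)

# A model can consume any loader exposing the same method names:
n_batches = sum(1 for _ in MNISTLoader(batch_size=4).train_batches())
print(n_batches)  # → 2
```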
API
The API is built using FastAPI: https://fastapi.tiangolo.com/
Run the dev server standalone with:
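For example, assuming the FastAPI app object in main.py is named app:

```shell
# --reload restarts the server on code changes (development only).
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```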
Run the project with Docker:
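A typical Docker workflow, assuming a Dockerfile at the project root (image name and port are illustrative):

```shell
docker build -t hexabot-nlu .
docker run -p 8000:8000 hexabot-nlu
```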
Pushing models to HuggingFace
Please refer to the official HF documentation on how to host models: https://huggingface.co/docs/hub/en/repositories-getting-started
Note that large files should be tracked with git-lfs, which you can initialize with:
and if your files are larger than 5GB, you'll also need to run:
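Per the Hugging Face Hub documentation, the two commands referenced above are:

```shell
# Initialize git-lfs tracking for the repository.
git lfs install
# Required only when pushing files larger than 5GB.
huggingface-cli lfs-enable-largefiles .
```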