Gearing up for the Raytheon AVC Challenge
github.com/Xplorer07/Bison_Vision (public repository)
Autonomous UAV-UGV Collaboration System. A distributed ROS 2 architecture for "Operation Touchdown" (Raytheon AVC 2026). Features GPS-denied navigation (VIO), neuromorphic edge AI (BrainChip Akida) for obstacle mapping, and precision autonomous landing on a moving ground vehicle.
Bison Vision Training
A clean, professional YOLOv2 training pipeline targeting BrainChip Akida1000 neuromorphic processors, containerized with Docker for NVIDIA GPU acceleration.
Purpose
This repository provides a complete training pipeline for YOLOv2 object detection models optimized for deployment on BrainChip Akida1000 hardware. The pipeline includes training, evaluation, and conversion to Akida-compatible formats, all within a reproducible Docker environment.
Training Pipeline
Dataset → Training → Evaluation → Akida Conversion
- Dataset Preparation: Prepare your dataset in the expected format (see docs/training_guide.md)
- Training: Train YOLOv2 model on NVIDIA GPUs with configurable hyperparameters
- Evaluation: Validate model performance on test sets
- Akida Conversion: Convert trained model to Akida-compatible format (.fbz) for deployment
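In practice, everything after dataset preparation is driven by a single script, with hyperparameters read from the YAML config in configs/. A minimal end-to-end run inside the container might look like the sketch below; the dataset path is illustrative.
# Illustrative end-to-end run inside the container (dataset path is an example).
# One invocation covers training, evaluation, and conversion to the Akida .fbz
# format; hyperparameters live in configs/yolov2_akida.yaml.
python scripts/train_yolov2_akida.py --data-dir /datasets/your_dataset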
Repository Structure
bison-vision-training/
├── README.md # This file
├── .gitignore # Git ignore rules
├── Dockerfile # Docker environment definition
├── scripts/
│ └── train_yolov2_akida.py # Main training + evaluation + conversion script
├── configs/
│ └── yolov2_akida.yaml # Training configuration (classes, hyperparameters)
├── docs/
│ └── training_guide.md # Detailed training documentation
└── docker/
└── README.md # Docker build and usage instructions
Expected Local Directories
This repository expects the following directories to exist on your local machine (outside the repository):
- ~/datasets/: Training and validation datasets
- ~/experiments/: Training outputs, checkpoints, logs, and converted models
These directories will be mounted into the Docker container during training.
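If these directories do not already exist, create them on the host before the first run; any other locations work as long as the mounts in the docker run command below are adjusted to match.
# Create the default host directories (adjust paths if you use different locations)
mkdir -p ~/datasets ~/experiments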
Quick Start
Prerequisites
- Docker installed on your system
- NVIDIA GPU with CUDA support
- NVIDIA Container Toolkit installed (see NVIDIA's installation guide)
- Dataset in COCO format (default location: ~/datasets; adjust the mount path as needed)
- Directory on the host for training outputs and converted models (default location: ~/experiments; adjust the mount path as needed)
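Before loading the image, it is worth confirming that Docker can see the GPU through the NVIDIA Container Toolkit. The CUDA image tag below is only an example; any CUDA image available locally will do.
# Check the driver on the host
nvidia-smi
# Check GPU passthrough into a container (example CUDA image tag)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi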
Load Docker Image
# Load the training image
docker load -i yolov2-training.tar
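After loading, the image should appear in the local image list (the image name is assumed to match the tag used in the run command below):
# Confirm the image was loaded
docker image ls yolov2-training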
Run Container
# Run with GPU support
docker run --gpus all -it \
-v ~/datasets:/datasets \
-v ~/experiments:/experiments \
-v $(pwd):/workspace \
yolov2-training:latest bash
# Inside container, verify setup
python -c "import torch; print(f'PyTorch: {torch.__version__}')"
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
Run Training
Once inside the container, run training with your dataset:
# Basic usage - provide your dataset directory
python scripts/train_yolov2_akida.py --data-dir /datasets/your_dataset
# With absolute path
python scripts/train_yolov2_akida.py --data-dir /home/user/datasets/coco
# With relative path (if mounted differently)
python scripts/train_yolov2_akida.py --data-dir ../my_dataset
All data and outputs must be stored in external directories (e.g., ~/datasets, ~/experiments) and mounted into the container at runtime.
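For longer runs it is convenient to keep a log alongside the other outputs in the mounted experiments directory; the log path below is only a suggestion.
# Example: keep a training log next to the other outputs (log path is illustrative)
python scripts/train_yolov2_akida.py --data-dir /datasets/your_dataset 2>&1 | tee /experiments/train.log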
Documentation
- Training Guide: Detailed explanation of dataset format, training process, evaluation metrics, and Akida conversion
- Docker Guide: Instructions for building, loading, and running the Docker container
Hardware Requirements
- Training: NVIDIA GPU with CUDA support (recommended: 8GB+ VRAM)
- Deployment: BrainChip Akida1000 neuromorphic processor