Correct paths and fix error #453

Open · wants to merge 22 commits into base: master
42 changes: 42 additions & 0 deletions .codecov.yml
@@ -0,0 +1,42 @@
# see https://github.com/codecov/support/wiki/Codecov-Yaml
codecov:
  notify:
    require_ci_to_pass: yes

coverage:
  precision: 0  # 2 = xx.xx%, 0 = xx%
  round: nearest  # how coverage is rounded: down/up/nearest
  range: 40...100  # custom range of coverage colors from red -> yellow -> green
  status:
    # https://codecov.readme.io/v1.0/docs/commit-status
    project:
      default:
        against: auto
        target: 90%  # specify the target coverage for each commit status
        threshold: 20%  # allow this much decrease on project
        # https://github.com/codecov/support/wiki/Filtering-Branches
        # branches: master
        if_ci_failed: error
    # https://github.com/codecov/support/wiki/Patch-Status
    patch:
      default:
        against: auto
        target: 40%  # specify the target "X%" coverage to hit
        # threshold: 50%  # allow this much decrease on patch
    changes: false

parsers:
  gcov:
    branch_detection:
      conditional: true
      loop: true
      macro: false
      method: false
  javascript:
    enable_partials: false

comment:
  layout: header, diff
  require_changes: false
  behavior: default  # update if exists else create new
  # branches: *
7 changes: 7 additions & 0 deletions .gitignore
@@ -106,3 +106,10 @@ ENV/

# mypy
.mypy_cache/

# PyCharm IDE
.idea/

# Temporary
temp/
temporary/
24 changes: 24 additions & 0 deletions .travis.yml
@@ -0,0 +1,24 @@
# this file is *not* meant to cover or endorse the use of travis, but rather to
# help confirm pull requests to this project.

language: python

matrix:
  include:
    # - python: 2.7
    #   env: TOXENV=py27
    - python: 3.5
      env: TOXENV=py35
    - python: 3.6
      env: TOXENV=py36
    - python: 3.7
      env: TOXENV=py37

install:
  - pip install -r requirements.txt
  - pip install tox codecov
  - pip list

script: tox

after_success: codecov
28 changes: 28 additions & 0 deletions MANIFEST.in
@@ -0,0 +1,28 @@
# Manifest syntax https://docs.python.org/2/distutils/sourcedist.html
graft wheelhouse

recursive-exclude __pycache__ *.pyc *.pyo *.orig

# Include the README
include *.md

# Include the license file
include LICENSE

# Include the data files
recursive-include model_data *.txt *.cfg *.yaml *.csv *.jpg

# Include scripts
recursive-include scripts *.py

# Include the Requirements
include requirements.txt

# Exclude build configs
exclude *.yml

prune .git
prune venv
prune temp
prune results
prune test*
118 changes: 55 additions & 63 deletions README.md
@@ -1,99 +1,91 @@
# keras-yolo3

[![Build Status](https://travis-ci.org/Borda/keras-yolo3.svg?branch=master)](https://travis-ci.org/Borda/keras-yolo3)
[![Build status](https://ci.appveyor.com/api/projects/status/24m00vife2wae7k0/branch/master?svg=true)](https://ci.appveyor.com/project/Borda/keras-yolo3/branch/master)
[![CircleCI](https://circleci.com/gh/Borda/keras-yolo3.svg?style=svg)](https://circleci.com/gh/Borda/keras-yolo3)
[![codecov](https://codecov.io/gh/Borda/keras-yolo3/branch/master/graph/badge.svg)](https://codecov.io/gh/Borda/keras-yolo3)
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/e03dbbb0f0fd48baa70f637456f1fe36)](https://www.codacy.com/project/Borda/keras-yolo3/dashboard?utm_source=github.com&utm_medium=referral&utm_content=Borda/keras-yolo3&utm_campaign=Badge_Grade_Dashboard)
[![CodeFactor](https://www.codefactor.io/repository/github/borda/keras-yolo3/badge)](https://www.codefactor.io/repository/github/borda/keras-yolo3)
[![license](https://img.shields.io/github/license/mashape/apistatus.svg)](LICENSE)

## Introduction

A [Keras](https://keras.io/) implementation of YOLOv3 ([Tensorflow backend](https://www.tensorflow.org/)) inspired by [allanzelener/YAD2K](https://github.com/allanzelener/YAD2K).

---

## Quick Start

For more models and configurations, please see the [YOLO website](http://pjreddie.com/darknet/yolo/) and the [darknet](https://github.com/pjreddie/darknet/tree/master/cfg) repository.

1. Download YOLOv3 weights from the [YOLO website](http://pjreddie.com/darknet/yolo/).
```bash
wget -O ./model_data/yolo.weights \
https://pjreddie.com/media/files/yolov3.weights \
--progress=bar:force:noscroll
```
Alternatively, you can download the lighter `yolov3-tiny.weights`.
2. Convert the Darknet YOLO model to a Keras model.
```bash
python3 scripts/convert_weights.py \
--config_path ./model_data/yolo.cfg \
--weights_path ./model_data/yolo.weights \
--output_path ./model_data/yolo.h5
```
3. Run YOLO detection.
```bash
python3 scripts/predict.py \
--path_weights ./model_data/yolo.h5 \
--path_anchors ./model_data/yolo_anchors.csv \
--path_classes ./model_data/coco_classes.txt \
--path_output ./results \
--path_image ./model_data/bike-car-dog.jpg \
--path_video person.mp4
```
The procedure for other model variants, such as Tiny YOLOv3, is the same; just specify the model path and anchor path with `--path_weights <model_file>` and `--path_anchors <anchor_file>`.
4. Multi-GPU usage: use `--gpu_num N` to run on N GPUs; the value is passed to the Keras [multi_gpu_model()](https://keras.io/utils/#multi_gpu_model), as sketched below.
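For illustration, here is a minimal sketch of what `multi_gpu_model()` does with a Keras model. The tiny `Sequential` model is a stand-in, not this repository's YOLO body, and at least two GPUs are assumed to be available:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

# A trivial stand-in model; in this repo the YOLOv3 body is built instead.
model = Sequential([Dense(10, input_shape=(20,))])

# Replicate the model across 2 GPUs: each incoming batch is split into
# sub-batches that run in parallel, and the outputs are merged on the CPU.
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(optimizer='adam', loss='mse')
```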

---

## Training

For training you can use the [VOC dataset](http://host.robots.ox.ac.uk/pascal/VOC/), the [COCO dataset](https://cocodataset.org) or your own...

1. Generate your own annotation file and class names file.
* One row for one image;
* Row format: `image_file_path box1 box2 ... boxN`;
* Box format: `x_min,y_min,x_max,y_max,class_id` (no space).
* For dataset conversion, run one of the following scripts:
  * `scripts/annotation_voc.py`
  * `scripts/annotation_coco.py`
  * `scripts/annotation_csv.py`
Here is an example (a minimal parser sketch follows this list):
```text
path/to/img1.jpg 50,100,150,200,0 30,50,200,120,3
path/to/img2.jpg 120,300,250,600,2
...
```

2. Make sure you have run `python scripts/convert_weights.py <...>`.
   The file `model_data/yolo_weights.h5` is used to load pre-trained weights.
3. Modify `train.py` and start training: `python train.py`.
   Use your trained weights or checkpoint weights with the command line option `--model model_file` when using `yolo_interactive.py`.
   Remember to modify the class path or anchor path with `--classes class_file` and `--anchors anchor_file`.
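To make the row and box format concrete, here is a minimal stand-alone parser for such annotation lines. It is an illustration only, not a function from this repository, and it assumes image paths contain no spaces:

```python
def parse_annotation_line(line):
    """Split one annotation row into the image path and its boxes,
    where each box token is 'x_min,y_min,x_max,y_max,class_id'."""
    tokens = line.strip().split()
    image_path, box_tokens = tokens[0], tokens[1:]
    boxes = [tuple(int(v) for v in tok.split(',')) for tok in box_tokens]
    return image_path, boxes

path, boxes = parse_annotation_line('path/to/img1.jpg 50,100,150,200,0 30,50,200,120,3')
assert path == 'path/to/img1.jpg'
assert boxes == [(50, 100, 150, 200, 0), (30, 50, 200, 120, 3)]
```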

If you want to use original pre-trained weights for YOLOv3:
1. `wget https://pjreddie.com/media/files/darknet53.conv.74`
2. rename it to `darknet53.weights`
3. `python convert.py -w darknet53.cfg darknet53.weights model_data/darknet53_weights.h5`
4. use `model_data/darknet53_weights.h5` in `train.py`

---

## Some issues to know

1. The test environment is Python 3.5.2, Keras 2.1.5 and TensorFlow 1.6.0.
2. Default anchors are used. If you use your own anchors, probably some changes are needed.
3. The inference result is not exactly the same as Darknet's, but the difference is small.
4. The speed is slower than Darknet. Replacing PIL with opencv may help a little.
5. Always load pre-trained weights and freeze layers in the first stage of training. Or try Darknet training. It's OK if there is a mismatch warning.
6. The training strategy is for reference only. Adjust it according to your dataset and your goal, and add further strategies if needed.
7. To speed up training with frozen layers, `train_bottleneck.py` can be used. It first computes the bottleneck features of the frozen model and then trains only the last layers, which makes training on a CPU possible in a reasonable time. See this [post](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html) for more information on bottleneck features.
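For intuition, here is a toy sketch of the bottleneck idea from the last point; the layers and shapes are placeholders, not the repo's `train_bottleneck.py`. The frozen part runs once, its outputs are cached, and only the head is fitted on the cached features:

```python
import numpy as np
from keras.layers import Dense, Input
from keras.models import Model, Sequential

# Toy stand-ins: `frozen` plays the frozen YOLO body, `head` the last layers.
inp = Input(shape=(32,))
frozen = Model(inp, Dense(16, trainable=False)(inp))

x_train = np.random.rand(256, 32)
y_train = np.random.rand(256, 1)

# 1) Run the frozen part once and cache its outputs (the bottleneck features).
bottleneck = frozen.predict(x_train, batch_size=32)

# 2) Train only the head on the cached features, so the expensive frozen
#    forward pass is not repeated every epoch.
head = Sequential([Dense(1, input_shape=(16,))])
head.compile(optimizer='adam', loss='mse')
head.fit(bottleneck, y_train, epochs=2, batch_size=32)
```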
48 changes: 48 additions & 0 deletions appveyor.yml
@@ -0,0 +1,48 @@
# https://www.appveyor.com/docs/appveyor-yml/
environment:

  # SDK v7.0 MSVC Express 2008's SetEnv.cmd script will fail if the
  # /E:ON and /V:ON options are not enabled in the batch script interpreter
  # See: http://stackoverflow.com/a/13751649/163740
  CMD_IN_ENV: "cmd /E:ON /V:ON /C obvci_appveyor_python_build_env.cmd"

  matrix:

    # Pre-installed Python versions, which Appveyor may upgrade to
    # a later point release.
    # See: http://www.appveyor.com/docs/installed-software#python

    - PYTHON: "C:\\Python35-x64"
      PYTHON_VERSION: "3.5.x"
      PYTHON_ARCH: "64"
      TOXENV: "py35"

    - PYTHON: "C:\\Python36-x64"
      PYTHON_VERSION: "3.6.x"
      PYTHON_ARCH: "64"
      TOXENV: "py36"

    - PYTHON: "C:\\Python37-x64"
      PYTHON_VERSION: "3.7.x"
      PYTHON_ARCH: "64"
      TOXENV: "py37"

build: off

install:
  # If there is a newer build queued for the same PR, cancel this one.
  # The AppVeyor 'rollout builds' option is supposed to serve the same
  # purpose but it is problematic because it tends to cancel builds pushed
  # directly to master instead of just PR builds (or the converse).
  # credits: JuliaLang developers.
  - SET PATH=%PYTHON%;%PYTHON%\Scripts;%path%
  - python --version
  - pip --version
  - pip install -r requirements.txt
  - pip install tox codecov
  - pip list
  - dir

test_script: tox

on_success: codecov
78 changes: 78 additions & 0 deletions circle.yml
@@ -0,0 +1,78 @@
version: 2.0

jobs:
  Py2:
    docker:
      - image: circleci/python:2.7
    steps: &steps
      - checkout

      - run:
          name: Install Packages
          command: |
            sudo apt-get update
            sudo apt-get install pkg-config python-dev python-tk

      - run:
          name: Install PyPI dependencies
          command: |
            pip install -r requirements.txt --user
            sudo pip install coverage pytest pytest-cov codecov
            python --version ; pip --version ; pwd ; ls

      - run:
          name: Testing
          command: |
            coverage run --source yolo3 -m py.test yolo3 scripts -v --doctest-modules --junitxml=test-reports/pytest_junit.xml
            coverage report && coverage xml -o test-reports/coverage.xml
            codecov

      - run:
          name: Sample Detection
          command: |
            export DISPLAY=""
            # download and convert weights
            wget -O ./model_data/tiny-yolo.weights https://pjreddie.com/media/files/yolov3-tiny.weights --progress=bar:force:noscroll
            python ./scripts/convert_weights.py --config_path ./model_data/tiny-yolo.cfg --weights_path ./model_data/tiny-yolo.weights --output_path ./model_data/tiny-yolo.h5
            mkdir ./results
            # download sample image and video
            wget -O ./results/dog.jpg https://raw.githubusercontent.com/pjreddie/darknet/master/data/dog.jpg
            wget -O ./results/volleyball.mp4 https://d2v9y0dukr6mq2.cloudfront.net/video/preview/UnK3Qzg/crowds-of-poeple-hot-summer-day-at-wasaga-beach-ontario-canada-during-heatwave_n2t3d8trl__SB_PM.mp4
            # run sample detections
            python ./scripts/predict.py -w ./model_data/tiny-yolo.h5 -a ./model_data/tiny-yolo_anchors.csv --model_image_size 416 416 -c ./model_data/coco_classes.txt -o ./results -i ./results/dog.jpg -v ./results/volleyball.mp4
            ls -l results/*

      - run:
          name: Sample Training
          command: |
            export DISPLAY=""
            # download the dataset
            wget -O ./model_data/VOCtrainval_2007.tar http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
            tar xopf ./model_data/VOCtrainval_2007.tar --directory ./model_data/
            # prepare dataset for usage
            python ./scripts/annotation_voc.py --path_dataset ./model_data/VOCdevkit --classes bicycle car person --sets 2007,train 2007,val --path_output ./model_data
            # prepare a very small training config (2 body epochs and 1 fine-tuning epoch)
            printf "image-size: [416, 416]\nbatch-size:\n body: 4\n fine: 4\nepochs:\n body: 2\n fine: 1\nvalid-split: 0.2\ngenerator:\n augument: false\n nb_threads: 1" > ./model_data/train_tiny-yolo_test.yaml
            # cut the dataset size
            python -c "lines = open('model_data/VOC_2007_val.txt', 'r').readlines(); open('model_data/VOC_2007_val.txt', 'w').writelines(lines[:250])"
            # start the training
            python ./scripts/train.py --path_dataset ./model_data/VOC_2007_val.txt --path_weights ./model_data/tiny-yolo.h5 --path_anchors ./model_data/tiny-yolo_anchors.csv --path_classes ./model_data/voc_classes.txt --path_output ./model_data --path_config ./model_data/train_tiny-yolo_test.yaml
            # use the trained model
            python ./scripts/predict.py -w ./model_data/tiny-yolo_trained_final.h5 -a ./model_data/tiny-yolo_anchors.csv --model_image_size 416 416 -c ./model_data/voc_classes.txt -o ./results -i ./model_data/bike-car-dog.jpg
            ls -l results/*

      - store_test_results:
          path: test-reports
      - store_artifacts:
          path: test-reports

  Py3:
    docker:
      - image: circleci/python:3.6
    steps: *steps

workflows:
  version: 2
  build:
    jobs:
      - Py3