# AI-basketball-analysis
**Repository Path**: asdkh/AI-basketball-analysis
## Basic Information
- **Project Name**: AI-basketball-analysis
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-01-19
- **Last Updated**: 2021-01-19
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
> 🏀 Analyze basketball shots and shooting pose with machine learning!
This is an artificial intelligence application built on the concept of **object detection**. It analyzes basketball shots by digging into the data collected from object detection. You can get the results by simply uploading files to the web app, or by submitting a **POST request** to the API. Please check the [features](#features) below. There are more features coming up! Feel free to follow.
All the data for the shooting pose analysis is calculated by implementing [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose). Please note that this implementation is for noncommercial research use only. Please read the [LICENSE](https://github.com/chonyy/AI-basketball-analysis/blob/master/LICENSE), which is exactly the same as [CMU's OpenPose License](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/LICENSE).
If you are interested in the concept of human pose estimation, I have written a [research paper **summary**](https://towardsdatascience.com/openpose-research-paper-summary-realtime-multi-person-2d-pose-estimation-3563a4d7e66) of OpenPose. Check it out!
## Getting Started
These instructions will get you a copy of the project up and running on your local machine.
### Get a copy
Get a copy of this project by simply running the git clone command.
``` bash
git clone https://github.com/chonyy/AI-basketball-analysis.git
```
### Prerequisites
Before running the project, we have to install all the dependencies from `requirements.txt`.
``` bash
pip install -r requirements.txt
```
Please note that you need a GPU with proper CUDA setup to run the video analysis, since a CUDA device is required to run OpenPose.
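A quick way to verify that TensorFlow can see a CUDA-capable GPU is shown below. This check assumes TensorFlow 2.x and is a suggestion for troubleshooting, not part of the repository's own setup.
``` python
# Optional sanity check: list CUDA-capable GPUs visible to TensorFlow 2.x.
# An empty list means the video/pose analysis will not be able to run.
import tensorflow as tf

print(tf.config.list_physical_devices("GPU"))
```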
### Hosting
Lastly, host the project on your local machine with a single command.
``` bash
python app.py
```
#### Alternatives
##### Google Colab
[Open in Google Colab](https://colab.research.google.com/github/hardik0/AI-basketball-analysis-on-google-colab/blob/master/AI_basketball_analysis_google_colab.ipynb)
Thanks to [hardik0](https://github.com/hardik0/AI-basketball-analysis-on-google-colab). Now we can play around with it without a GPU machine!
##### Heroku
This project is also hosted on [Heroku](https://ai-basketball-analysis.herokuapp.com/). However, the heavy computation of TensorFlow may cause a timeout error and crash the app (especially for video analysis). Therefore, hosting the project on your local machine is preferable.
Please note that the shooting pose analysis won't be running on the Heroku hosted website, since a CUDA device is required to run OpenPose.
## Project Structure
## Features
This project has three main features: [shot analysis](#shot-analysis), [shot detection](#shot-detection), and [detection API](#detection-api).
### Shot and Pose analysis
#### Shot counting
Counts shooting attempts, missed shots, and scored shots from the input video.
Detection keypoints in different colors have different meanings, as listed below:
* **Blue:** Detected basketball in normal status
* **Purple:** Undetermined shot
* **Green:** Shot went in
* **Red:** Miss
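A minimal sketch of how such a color legend could be applied when drawing detections with OpenCV is shown below; the status names, `STATUS_COLORS` mapping, and `draw_detection` helper are illustrative assumptions, not the project's actual identifiers.
``` python
# Hypothetical mapping from shot status to OpenCV drawing colors (BGR order),
# mirroring the legend above; names are illustrative only.
import cv2

STATUS_COLORS = {
    "normal": (255, 0, 0),          # blue: detected basketball in normal status
    "undetermined": (255, 0, 255),  # purple: undetermined shot
    "score": (0, 255, 0),           # green: shot went in
    "miss": (0, 0, 255),            # red: missed shot
}

def draw_detection(frame, bbox, status):
    """Draw a bounding box around the ball, colored by shot status."""
    x_min, y_min, x_max, y_max = bbox
    cv2.rectangle(frame, (x_min, y_min), (x_max, y_max),
                  STATUS_COLORS[status], thickness=2)
    return frame
```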
#### Pose analysis
[OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) is used to calculate the angles of the elbow and knee during shooting.
Release angle and release time are calculated from the data collected in the shot analysis and pose analysis. Please note that there will be a relatively large **error** in the release time, since it is calculated as the total time the ball stays in the shooter's hand.
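A minimal sketch of how a joint angle such as the elbow or knee could be derived from three OpenPose keypoints is shown below; the `joint_angle` helper and the sample coordinates are assumptions for illustration, not the repository's actual code.
``` python
# Compute the angle at a middle keypoint (e.g. the elbow) from three 2D points.
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) formed by segments b->a and b->c,
    e.g. shoulder-elbow-wrist for the elbow angle."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: shoulder, elbow, wrist pixel coordinates from one frame
elbow_angle = joint_angle((120, 80), (150, 130), (200, 140))
print(elbow_angle)
```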
### Shot detection
Detections are shown on the image, with the confidence and coordinates of each detection listed below it.
### Detection API
Get a JSON response by submitting a **POST** request to `/detection_json` with `image` as the key and the input image as the value.
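For example, a client call might look like the following, assuming the app is running locally on Flask's default port 5000; the port and file name are assumptions.
``` python
# Send an image to the /detection_json endpoint and print the JSON response.
import requests

with open("shot.jpg", "rb") as f:
    response = requests.post(
        "http://127.0.0.1:5000/detection_json",
        files={"image": f},   # "image" is the expected form key
    )

print(response.json())
```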
## Detection model
The object detection model is trained with the [Faster R-CNN model architecture](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models), starting from weights pretrained on the COCO dataset. I took the configuration from the model zoo and trained it on my own dataset.
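As a rough sketch, inference with an exported Faster R-CNN detector could look like the following; the model path and output keys follow TensorFlow Object Detection API conventions and are assumptions, not this repository's actual code.
``` python
# Load an exported TF Object Detection API model and run one inference pass.
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")

image = np.zeros((480, 640, 3), dtype=np.uint8)           # stand-in frame
input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]
detections = detect_fn(input_tensor)

boxes = detections["detection_boxes"][0].numpy()           # normalized [ymin, xmin, ymax, xmax]
scores = detections["detection_scores"][0].numpy()         # confidence per box
print(boxes[scores > 0.5])
```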
## Future plans
1. Host it on Azure Web App Service.
2. Improve efficiency so that it can run on web app services.