# corenet

**Repository Path**: mirrors/corenet

## Basic Information

- **Project Name**: corenet
- **Description**: CoreNet is a deep neural network toolkit that allows researchers and engineers to train standard and novel small and large-scale models for a variety of tasks, including foundation models (e.g., CLIP and LLMs), object classification, object detection, and semantic segmentation
- **Primary Language**: Python
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: https://www.oschina.net/p/corenet
- **GVP Project**: No

## Statistics

- **Stars**: 1
- **Forks**: 2
- **Created**: 2024-04-24
- **Last Updated**: 2026-02-14

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# CoreNet: A library for training deep neural networks

CoreNet is a deep neural network toolkit that allows researchers and engineers to train standard and novel small and large-scale models for a variety of tasks, including foundation models (e.g., CLIP and LLMs), object classification, object detection, and semantic segmentation.

## Table of contents

* [What's new?](#whats-new)
* [Research efforts at Apple using CoreNet](#research-efforts-at-apple-using-corenet)
* [Installation](#installation)
* [Directory Structure](#directory-structure)
* [Maintainers](#maintainers)
* [Contributing to CoreNet](#contributing-to-corenet)
* [License](#license)
* [Relationship with CVNets](#relationship-with-cvnets)
* [Citation](#citation)

## What's new?

* ***October 2024***: Version 0.1.1 of the CoreNet library includes
  * [KV Prediction](./projects/kv-prediction/)

## Research efforts at Apple using CoreNet

Below is a list of publications from Apple that use CoreNet. Training and evaluation recipes, as well as links to pre-trained models, can be found inside the [projects](./projects/) folder. Please refer to it for further details.
* [KV Prediction for Improved Time to First Token](https://arxiv.org/abs/2410.08391)
* [OpenELM: An Efficient Language Model Family with Open Training and Inference Framework](https://arxiv.org/abs/2404.14619)
* [CatLIP: CLIP-level Visual Recognition Accuracy with 2.7x Faster Pre-training on Web-scale Image-Text Data](https://arxiv.org/abs/2404.15653)
* [Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement](https://arxiv.org/abs/2303.08983)
* [CLIP meets Model Zoo Experts: Pseudo-Supervision for Visual Enhancement](https://arxiv.org/abs/2310.14108)
* [FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization](https://arxiv.org/abs/2303.14189)
* [Bytes Are All You Need: Transformers Operating Directly on File Bytes](https://arxiv.org/abs/2306.00238)
* [MobileOne: An Improved One millisecond Mobile Backbone](https://arxiv.org/abs/2206.04040)
* [RangeAugment: Efficient Online Augmentation with Range Learning](https://arxiv.org/abs/2212.10553)
* [Separable Self-attention for Mobile Vision Transformers (MobileViTv2)](https://arxiv.org/abs/2206.02680)
* [CVNets: High performance library for Computer Vision, ACM MM'22](https://arxiv.org/abs/2206.02002)
* [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer, ICLR'22](https://arxiv.org/abs/2110.02178)

## Installation

You will need Git LFS (instructions below) to run tests and Jupyter notebooks ([instructions](https://jupyter.org/install)) in this repository, and to contribute to it, so we recommend that you install and activate it first.

On Linux, we recommend using Python 3.10+ and PyTorch (version >= v2.1.0); on macOS, the system Python 3.9+ should be sufficient.

Note that the optional dependencies listed below are required if you'd like to make contributions and/or run tests.
For Linux (substitute `apt` for your package manager):

```bash
sudo apt install git-lfs

git clone git@github.com:apple/corenet.git
cd corenet
git lfs install
git lfs pull
# The following venv command is optional, but recommended. Alternatively, you can create and activate a conda environment.
python3 -m venv venv && source venv/bin/activate
python3 -m pip install --editable .
```

To install optional dependencies for audio and video processing:

```bash
sudo apt install libsox-dev ffmpeg
```

For macOS, assuming you use Homebrew:

```bash
brew install git-lfs

git clone git@github.com:apple/corenet.git
cd corenet
cd $(pwd -P)  # See the note below.
git lfs install
git lfs pull
# The following venv command is optional, but recommended. Alternatively, you can create and activate a conda environment.
python3 -m venv venv && source venv/bin/activate
python3 -m pip install --editable .
```

To install optional dependencies for audio and video processing:

```bash
brew install sox ffmpeg
```

Note that on macOS the file system is case insensitive, and case sensitivity can cause issues with Git. You should access the repository on disk as if the path were case sensitive, i.e., with the same capitalization as you see when you list the directories with `ls`. You can switch to such a path with the `cd $(pwd -P)` command.

## Directory Structure

This section provides quick access and a brief description for important CoreNet directories.
**Getting Started**

Working with the examples is an easy way to get started with CoreNet.

```
└── tutorials
    ├── train_a_new_model_on_a_new_dataset_from_scratch.ipynb
    ├── guide_slurm_and_multi_node_training.md
    ├── clip.ipynb
    ├── semantic_segmentation.ipynb
    └── object_detection.ipynb
```

**Training Recipes**

CoreNet provides reproducible training recipes, in addition to the pretrained model weights and checkpoints, for the publications listed in the `projects/` directory.

Publication project directories generally contain the following contents:

* `README.md` provides documentation, links to the pretrained weights, and citations.

```
└── projects
    ├── kv-prediction (*)
    ├── byteformer
    ├── catlip
    ├── clip
    ├── fastvit
    ├── mobilenet_v1
    ├── mobilenet_v2
    ├── mobilenet_v3
    ├── mobileone
    ├── mobilevit
    ├── mobilevit_v2
    ├── openelm
    ├── range_augment
    ├── resnet
    └── vit
```
**MLX Examples**

MLX examples demonstrate how to run CoreNet models efficiently on Apple Silicon. Please find further information in the `README.md` file within the corresponding example directory.

```
└── mlx_example
    ├── clip
    └── open_elm
```
**Model Implementations**

Models are organized by task (e.g., "classification"). You can find all model implementations for each task in the corresponding task folder. Each model class is decorated by a `@MODEL_REGISTRY.register(...)` decorator.

```
└── corenet
    └── modeling
        └── models
            ├── audio_classification
            ├── classification
            ├── detection
            ├── language_modeling
            ├── multi_modal_img_text
            └── segmentation
```
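As a rough illustration of the registration pattern described above, the sketch below implements a minimal task-aware registry with a decorator. The class name, method signatures, and the `tiny_net` example are simplified assumptions for illustration, not CoreNet's actual `MODEL_REGISTRY` API.

```python
# Minimal sketch of a task-aware model registry with a registration
# decorator. Illustrative only: names and signatures are assumptions,
# not CoreNet's actual MODEL_REGISTRY API.

class ModelRegistry:
    def __init__(self):
        # Maps (task, model name) -> model class.
        self._models = {}

    def register(self, name, type):
        def decorator(cls):
            key = (type, name)
            if key in self._models:
                raise KeyError(f"Model {key} is already registered")
            self._models[key] = cls
            return cls
        return decorator

    def get(self, name, type):
        return self._models[(type, name)]


MODEL_REGISTRY = ModelRegistry()


# Registering a class makes it addressable by name and task.
@MODEL_REGISTRY.register(name="tiny_net", type="classification")
class TinyNet:
    def __init__(self, num_classes=10):
        self.num_classes = num_classes


# A configuration can now refer to the model purely by name:
model_cls = MODEL_REGISTRY.get(name="tiny_net", type="classification")
model = model_cls(num_classes=5)
```

The benefit of this design is that adding a new model only requires dropping a decorated class into the task folder; no central list needs editing.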
**Datasets**

Similarly to the models, datasets are also categorized by task.

```
└── corenet
    └── data
        └── datasets
            ├── audio_classification
            ├── classification
            ├── detection
            ├── language_modeling
            ├── multi_modal_img_text
            └── segmentation
```
**Other key directories**

The remaining key directories implement classes corresponding to the names that are referenced in the YAML configurations.

```
└── corenet
    ├── loss_fn
    ├── metrics
    ├── optims
    │   └── scheduler
    ├── train_eval_pipelines
    ├── data
    │   ├── collate_fns
    │   ├── sampler
    │   ├── text_tokenizer
    │   ├── transforms
    │   └── video_reader
    └── modeling
        ├── layers
        ├── modules
        ├── neural_augmentor
        └── text_encoders
```
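To make the "names referenced in the YAML configurations" idea concrete, here is a hedged sketch of how a name in a parsed configuration can resolve to a registered class. The registry, decorator, and config keys below are invented for illustration and do not reflect CoreNet's actual schema.

```python
# Hypothetical sketch: resolving a class from a name referenced in a
# configuration. Registry and key names are invented for illustration;
# CoreNet's actual schema differs.

LOSS_REGISTRY = {}

def register_loss(name):
    def decorator(cls):
        LOSS_REGISTRY[name] = cls
        return cls
    return decorator

@register_loss("cross_entropy")
class CrossEntropyLoss:
    pass

# Stand-in for the parsed result of a YAML snippet such as:
#   loss:
#     category: cross_entropy
# (inlined as a dict here to keep the example dependency-free)
config = {"loss": {"category": "cross_entropy"}}

# Look up the registered class by the name in the configuration.
loss_cls = LOSS_REGISTRY[config["loss"]["category"]]
```

Because every directory above (losses, metrics, optimizers, schedulers, transforms, and so on) follows this name-to-class pattern, a single YAML file can describe a complete training pipeline.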