# linear-lognormal-attention-ms
**Repository Path**: ynahshan/linear-lognormal-attention-ms
## Basic Information
- **Project Name**: linear-lognormal-attention-ms
- **Description**: MindSpore code for Linear Log-Normal Attention
- **Primary Language**: Python
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-01-25
- **Last Updated**: 2024-02-22
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# Linear Log-Normal Attention with Unbiased Concentration
A MindSpore implementation of Linear Log-Normal Attention with Unbiased Concentration ([arXiv](https://arxiv.org/abs/2311.13541)).
## Install requirements
Install the prerequisites first:
- MindSpore - https://www.mindspore.cn/install/en
- MindFormers - https://gitee.com/mindspore/mindformers

Then install the remaining Python dependencies:
```bash
$ pip install -r requirements.txt
```
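A quick way to confirm the environment is ready (a minimal check, not part of the repository) is to import both packages and run MindSpore's built-in installation verification:

```python
# Sanity check: verify that MindSpore and MindFormers are installed and importable.
# This snippet is only an environment check, not part of the repository code.
import mindspore as ms
import mindformers  # noqa: F401  (import succeeds only if MindFormers is installed)

print(ms.__version__)
ms.run_check()  # MindSpore's built-in installation verification
```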
## Usage
```python
import mindspore as ms
import numpy as np
from lln.lln_attention import LLNAttention

np.random.seed(0)

batch_size = 1
seq_len = 512
num_heads = 2
hidden_size = 64 * num_heads  # 64 dimensions per head

# Build the Linear Log-Normal attention layer.
lln_attn = LLNAttention(batch_size=batch_size, src_seq_length=seq_len,
                        tgt_seq_length=seq_len, hidden_size=hidden_size,
                        num_heads=num_heads)

# Random query, key and value tensors of shape (batch_size, seq_len, hidden_size).
q = ms.Tensor(np.random.normal(size=(batch_size, seq_len, hidden_size)), dtype=ms.float32)
k = ms.Tensor(np.random.normal(size=(batch_size, seq_len, hidden_size)), dtype=ms.float32)
v = ms.Tensor(np.random.normal(size=(batch_size, seq_len, hidden_size)), dtype=ms.float32)

# The last argument is the attention mask; None applies no masking.
out = lln_attn(q, k, v, None)
```
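Assuming the layer follows the usual MindSpore attention convention of preserving the input layout, the output can be checked like this (an illustrative check, not part of the repository):

```python
# Expected output layout: (batch_size, seq_len, hidden_size), matching the inputs.
print(out.shape, out.dtype)
assert out.shape == (batch_size, seq_len, hidden_size)
```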
## Citations
```bibtex
@misc{nahshan2024linear,
      title={Linear Log-Normal Attention with Unbiased Concentration},
      author={Yury Nahshan and Joseph Kampeas and Emir Haleva},
      year={2024},
      eprint={2311.13541},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```