Meetily - AI-Powered Meeting Assistant




Open-source AI assistant for taking meeting notes

Website | Author | Discord Channel

An AI-Powered Meeting Assistant that captures live meeting audio, transcribes it in real-time, and generates summaries while ensuring user privacy. Perfect for teams who want to focus on discussions while automatically capturing and organizing meeting content without the need for external servers or complex infrastructure.

Meetily Demo
View full Demo Video


Why?

While there are many meeting transcription tools available, this solution stands out by offering:

  • Privacy First: All processing happens locally on your device
  • Cost Effective: Uses open-source AI models instead of expensive APIs
  • Flexible: Works offline, supports multiple meeting platforms
  • Customizable: Self-host and modify for your specific needs
  • Intelligent: Built-in knowledge graph for semantic search across meetings

Features

✅ Modern, responsive UI with real-time updates

✅ Real-time audio capture (microphone + system audio)

✅ Live transcription using Whisper.cpp

✅ Speaker diarization

✅ Local processing for privacy

✅ Packaged app for macOS

🚧 Export to Markdown/PDF

Note: We have a Rust-based implementation that explores better performance and native integration. It currently implements:

  • ✅ Real-time audio capture from both microphone and system audio
  • ✅ Live transcription using locally-running Whisper
  • ✅ Speaker diarization
  • ✅ Rich text editor for notes

We are currently working on:

  • 🚧 Export to Markdown/PDF
  • 🚧 Export to HTML

Release 0.0.2

A new release is available!

Please check out the release here.

What's New

  • Improved transcription quality
  • Frontend bug fixes and improvements
  • Better backend app build process
  • Improved documentation
  • New .dmg package

What's next?

  • Database connection to save meeting minutes
  • Improve summarization quality for smaller LLMs
  • Add download options for meeting transcripts
  • Add a download option for summaries

Known issues

  • Smaller LLMs can hallucinate, making summarization quality poor
  • The backend build process requires CMake, a C++ compiler, etc., making it harder to build
  • The backend build process requires Python 3.10 or newer
  • The frontend build process requires Node.js

LLM Integration

The backend supports multiple LLM providers through a unified interface. Current implementations include:

Supported Providers

  • Anthropic (Claude models)
  • Groq (Llama 3.2 90B, DeepSeek)
  • Ollama (Local models)

Configuration

Create a .env file with your API keys:

# Required for Anthropic
ANTHROPIC_API_KEY=your_key_here  

# Required for Groq 
GROQ_API_KEY=your_key_here
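
As an illustration of how these keys might drive provider selection with automatic fallback, here is a minimal sketch (not the actual backend code; it assumes the .env values have already been loaded into the environment, e.g. with python-dotenv):

# Sketch only: choose an LLM provider based on which API keys are configured,
# falling back to local Ollama models when no hosted key is present.
import os

def pick_provider() -> str:
    """Return the provider name the orchestrator would route requests to."""
    if os.getenv("ANTHROPIC_API_KEY"):
        return "anthropic"   # Claude models
    if os.getenv("GROQ_API_KEY"):
        return "groq"        # Llama 3.2 90B, DeepSeek
    return "ollama"          # local models, no API key required

if __name__ == "__main__":
    print(f"Selected LLM provider: {pick_provider()}")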

System Architecture

High-Level Architecture

Core Components

  1. Audio Capture Service

    • Real-time microphone/system audio capture
    • Audio preprocessing pipeline
    • Built with Rust (experimental) and Python
  2. Transcription Engine

    • Whisper.cpp for local transcription
    • Supports multiple model sizes (tiny to large)
    • GPU-accelerated processing
  3. LLM Orchestrator

    • Unified interface for multiple providers
    • Automatic fallback handling
    • Chunk processing with overlap (see the sketch after this list)
    • Model configuration
  4. Data Services

    • ChromaDB: Vector store for transcript embeddings
    • SQLite: Process tracking and metadata storage
  5. API Layer

    • FastAPI endpoints:
      • POST /upload
      • POST /process
      • GET /summary/{id}
      • DELETE /summary/{id}
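
The chunking step in the LLM Orchestrator can be illustrated with a short sketch (the chunk sizes and character-based splitting below are assumptions; the real orchestrator may chunk by tokens):

# Sketch of chunk processing with overlap: split a long transcript into
# windows that share a margin, so sentences cut at a chunk boundary also
# appear at the start of the next chunk. Sizes are illustrative defaults.
from typing import List

def chunk_transcript(text: str, chunk_size: int = 2000, overlap: int = 200) -> List[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks: List[str] = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks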

Deployment Architecture

  • Frontend: Tauri app + Next.js (packaged executables)
  • Backend: Python FastAPI:
    • Transcript workers
    • LLM inference
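
A minimal sketch of how the FastAPI backend might declare the endpoints listed under the API Layer above (handler bodies and response shapes are placeholders, not the actual implementation):

# Illustrative FastAPI stubs for the documented routes; real handlers would
# call the transcription workers, LLM orchestrator, and data services.
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    # Accept an audio or transcript file and return a tracking id.
    return {"id": "placeholder"}

@app.post("/process")
async def process(id: str):
    # Start transcription/summarization for a previously uploaded file.
    return {"id": id, "status": "queued"}

@app.get("/summary/{id}")
async def get_summary(id: str):
    # Return the generated summary for the given id.
    return {"id": id, "summary": "..."}

@app.delete("/summary/{id}")
async def delete_summary(id: str):
    # Delete the stored summary and related metadata.
    return {"id": id, "deleted": True}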

Prerequisites

  • Node.js 18+
  • Python 3.10+
  • FFmpeg
  • Rust 1.65+ (for experimental features)

Setup Instructions

1. Frontend Setup

Run packaged version

Go to the releases page and download the latest version.

Unzip the file and run the executable.

Grant the requested permissions for audio capture and microphone access (only the screen capture permission is required).

Dev run


# Navigate to frontend directory
cd frontend

# Give execute permissions to clean_build.sh
chmod +x clean_build.sh

# run clean_build.sh
./clean_build.sh

2. Backend Setup

# Create and activate virtual environment
python -m venv venv
source venv/bin/activate  # Windows: .\venv\Scripts\activate

# Navigate to backend directory
cd backend

# Install dependencies
pip install -r requirements.txt

# Start backend servers
./clean_start_backend.sh

Development Guidelines

  • Follow the established project structure
  • Write tests for new features
  • Document API changes
  • Use type hints in Python code
  • Follow ESLint configuration for JavaScript/TypeScript

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Submit a pull request

License

MIT License - Feel free to use this project for your own purposes.

Last updated: December 26, 2024

Star History

Star History Chart

