# OSTuneAgent

**Repository Path**: hubin95/ostune-agent

An automated OS performance tuning agent that uses an LLM to analyze system metrics and apply optimizations.

## Overview

OSTuneAgent is a Python-based intelligent agent system that automatically optimizes Linux system performance through:

1. **Automated Benchmarking**: Establishes baseline performance using custom benchmark commands
2. **Smart Metrics Collection**: Collects comprehensive system metrics (CPU, memory, I/O, network)
3. **LLM-Powered Analysis**: Uses language models to identify performance bottlenecks and suggest optimizations
4. **Auto-Generated Scripts**: Creates and applies system optimization scripts via the LLM
5. **Performance Verification**: Re-benchmarks to verify improvements against the baseline
6. **Iterative Optimization**: Continues until the 10% improvement target is reached

## Architecture

The agent system consists of the following components:

- **Main Agent** (`src/main_agent.py`): Orchestrates the optimization workflow
- **Hotspot Analyzer** (`src/agents/hotspot_analyzer.py`): Identifies performance bottlenecks
- **Hotspot Optimizer** (`src/agents/hotspot_optimizer.py`): Suggests and applies optimizations
- **Command Executor** (`src/agents/command_executor.py`): Executes shell commands with auto-retry
- **Performance Validator** (`src/agents/performance_validator.py`): Runs benchmarks and parses results

## Quick Start

```bash
# Install dependencies
pip install -r requirements.txt

# Configure LLM settings in config.json
# Set api_url, api_key, model, and max_tokens

# Run the agent
python3 main.py --context "Web server optimization" --benchmark "sysbench cpu run" --keywords "total time operations per second"
```

## Configuration

Edit `config.json` to configure the LLM:

```json
{
  "llm": {
    "api_url": "https://api.openai.com/v1/chat/completions",
    "api_key": "your-api-key-here",
    "model": "gpt-4",
    "max_tokens": 40000,
    "temperature": 0.7
  }
}
```

## Requirements

- Python 3.7+
- Linux-based OS
- Root or sudo privileges
- LLM API access (OpenAI-compatible)

## Project Structure

```
ostune-agent/
├── src/                          # Source code
│   ├── main_agent.py             # Main optimization workflow
│   ├── agents/                   # Agent implementations
│   │   ├── hotspot_analyzer.py
│   │   ├── hotspot_optimizer.py
│   │   ├── command_executor.py
│   │   └── performance_validator.py
│   └── utils/                    # Utility modules
│       ├── config.py
│       └── llm_client.py
├── tests/                        # Unit tests
├── config.json                   # Configuration file
├── requirements.txt              # Python dependencies
├── main.py                       # Entry point
└── README.md                     # This file
```

## Development

### Running Tests

```bash
# Run all tests
python3 -m unittest discover -s tests -v

# Run a specific test
python3 -m unittest tests.test_command_executor -v
```

### Code Style

Follows PEP 8 guidelines.
See [AGENTS.md](AGENTS.md) for detailed coding standards.

## Features

### Workflow Steps

1. **Receive User Input**: Context, benchmark command, performance keywords
2. **Establish Baseline**: Run the benchmark to get initial performance metrics
3. **Analyze Hotspots**: Use the LLM to identify performance bottlenecks
4. **Apply Optimizations**: Generate and execute optimization commands
5. **Verify Performance**: Re-run the benchmark and compare to the baseline
6. **Iterate**: Repeat until a 10% improvement or the maximum number of iterations is reached

### Supported Optimizations

The LLM can suggest various optimizations, including:

- CPU governor tuning
- I/O scheduler optimization
- Memory tuning (swappiness, hugepages)
- Disk performance tuning
- System parameter adjustments
- Custom optimizations based on context

## Example Usage

```bash
# Basic usage
python3 main.py \
  --context "Database server optimization" \
  --benchmark "sysbench oltp_read_write run" \
  --keywords "transactions per second"

# With a custom config
python3 main.py \
  --config custom_config.json \
  --context "Web application server" \
  --benchmark "ab -n 10000 http://localhost:8080/" \
  --keywords "Requests per second Time per request"
```

## Safety

- Scripts are generated by the LLM and should be reviewed before use
- All optimization attempts are logged
- Results are saved to `optimization_results.json`
- Test in non-production environments first

## License

[Specify your license]

## Contributing

Contributions are welcome! Please ensure tests pass and follow PEP 8 style guidelines.

## Disclaimer

This tool modifies system settings. Test in non-production environments first.
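As a concrete illustration of the verification and iteration steps, the 10% target amounts to a relative comparison of the benchmark metric against the baseline. The helper below is a minimal sketch (the function names are illustrative, not part of the project's API); note that some metrics improve upward (operations per second) while others improve downward (total time):

```python
def improvement(baseline: float, current: float, higher_is_better: bool = True) -> float:
    """Fractional improvement of `current` over `baseline` (0.10 == 10%)."""
    if baseline == 0:
        raise ValueError("baseline metric must be non-zero")
    # For "lower is better" metrics (e.g. total time), a drop counts as a gain.
    delta = (current - baseline) if higher_is_better else (baseline - current)
    return delta / abs(baseline)


def target_reached(baseline: float, current: float, target: float = 0.10,
                   higher_is_better: bool = True) -> bool:
    """True once the iterative optimization loop can stop."""
    return improvement(baseline, current, higher_is_better) >= target
```

For example, `target_reached(1000.0, 1120.0)` is true for a throughput metric (a 12% gain), while `target_reached(10.0, 9.5, higher_is_better=False)` is false because total time only dropped 5%.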
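For reference, the `llm` section of `config.json` maps directly onto an OpenAI-compatible chat completions request. The sketch below shows that mapping; the actual client lives in `src/utils/llm_client.py` and may differ in its details (the function name here is illustrative):

```python
def build_chat_request(config: dict, prompt: str):
    """Translate the `llm` section of config.json into the URL, headers,
    and JSON payload of an OpenAI-compatible chat completions call."""
    llm = config["llm"]
    headers = {
        "Authorization": f"Bearer {llm['api_key']}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": llm["model"],
        "max_tokens": llm["max_tokens"],
        "temperature": llm["temperature"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return llm["api_url"], headers, payload
```

The returned triple can then be sent with any HTTP client, e.g. `requests.post(url, headers=headers, json=payload)`.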