# ru

**Repository Path**: underdogs/ru

## Basic Information

- **Project Name**: ru
- **Description**: minio resumable upload command tools
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2026-02-05
- **Last Updated**: 2026-02-05

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# Resumable Upload Tool (ru)

A command-line tool for uploading directories to MinIO with resume capability.

## Key Features

- **Automatic Resume Detection**: The upload command automatically detects existing upload sessions for the same directory/bucket combination and resumes from where it left off
- **MinIO v7 Compatibility**: Fixed compatibility issues with the minio-go v7 client library
- **Simplified API**: Uses `PutObject` for all uploads; minio-go v7 handles multipart uploads automatically
- **Rate Limiting**: Built-in upload speed limiting for bandwidth-constrained environments
- **Concurrent Uploads**: Configurable worker pool for parallel file uploads
- **Checksum Verification**: Optional MD5 checksum verification for data integrity
- **Retry Logic**: Configurable retry attempts for failed uploads
- **State Persistence**: Upload progress saved to JSON state files for resumability

## Installation

```bash
go build -o ru .
```

## Usage

### Basic Upload

```bash
# Upload with credentials as flags
ru upload /path/to/files mybucket backup --access-key KEY --secret-key SECRET

# Upload with environment variables
export MINIO_ACCESS_KEY=your-access-key
export MINIO_SECRET_KEY=your-secret-key
ru upload /path/to/files mybucket backup --endpoint minio.example.com:9000 --ssl
```

### Advanced Options

```bash
# Upload with speed limiting and custom settings
ru upload /data mybucket backup \
  --access-key KEY --secret-key SECRET \
  --upload-limit 10 \
  --part-size 32 \
  --max-workers 2 \
  --retry-attempts 5
```

### Resume Functionality

The tool automatically detects and resumes interrupted uploads:

1. **Automatic Detection**: When you run the `upload` command, it automatically checks for existing sessions
2. **No Separate Command**: There is no separate `resume` command; just run `upload` again
3. **Smart Matching**: Sessions are matched on the local directory, bucket, and prefix combination

### Session Management

```bash
# List all upload sessions
ru list

# Show detailed status of a specific session
ru status .resumable_upload_upload_1234567890.json

# Clean completed sessions
ru clean
```

## Configuration Options

| Flag | Default | Description |
|------|---------|-------------|
| `--endpoint` | localhost:9000 | MinIO server endpoint |
| `--access-key` | | MinIO access key (or use the `MINIO_ACCESS_KEY` env var) |
| `--secret-key` | | MinIO secret key (or use the `MINIO_SECRET_KEY` env var) |
| `--ssl` | false | Use SSL/TLS connection |
| `--region` | us-east-1 | MinIO region |
| `--part-size` | 64 | Multipart upload part size in MB |
| `--max-workers` | 4 | Maximum concurrent uploads |
| `--upload-limit` | 0 | Upload speed limit in MB/s (0 = unlimited) |
| `--retry-attempts` | 3 | Number of retry attempts for failed uploads |
| `--checksum-verify` | true | Verify file checksums |

## Changes Made

### Fixed MinIO v7 Compatibility

- **Removed deprecated methods**: `NewMultipartUpload`, `PutObjectPart`, and `CompleteMultipartUpload` are not available in the minio-go v7 public API
- **Simplified approach**: Now uses `PutObject` for all files; minio-go v7 automatically handles multipart uploads for large files
- **Maintained functionality**: Rate limiting, retry logic, and progress tracking are all preserved

### Combined Upload and Resume

- **Automatic detection**: The upload command now automatically detects existing sessions
- **Smart resume**: Checks for matching directory/bucket/prefix combinations
- **Seamless experience**: No need to remember state file names or use a separate resume command

### Improved Error Handling

- **Better retry logic**: More robust retry mechanism with exponential backoff
- **Clearer error messages**: More descriptive error reporting
- **Graceful degradation**: Continues with other files if individual uploads fail
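Putting the `PutObject`-only approach and the backoff-based retry together, the following is a minimal sketch of how a single-file upload could look with minio-go v7. The function name `uploadWithRetry`, the hard-coded endpoint, and the fixed part size are illustrative assumptions, not the tool's actual source.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

// uploadWithRetry uploads one local file with PutObject, retrying failed
// attempts with exponential backoff. minio-go v7 switches to a multipart
// upload internally for large objects, so no explicit multipart calls are needed.
func uploadWithRetry(ctx context.Context, client *minio.Client, bucket, objectKey, filePath string, attempts int) error {
	backoff := time.Second
	var lastErr error
	for i := 0; i < attempts; i++ {
		f, err := os.Open(filePath) // re-open each attempt so the reader starts at offset 0
		if err != nil {
			return err
		}
		stat, err := f.Stat()
		if err != nil {
			f.Close()
			return err
		}
		_, err = client.PutObject(ctx, bucket, objectKey, f, stat.Size(), minio.PutObjectOptions{
			PartSize: 64 * 1024 * 1024, // would come from --part-size in the real tool
		})
		f.Close()
		if err == nil {
			return nil
		}
		lastErr = err
		time.Sleep(backoff) // exponential backoff between attempts
		backoff *= 2
	}
	return fmt.Errorf("upload of %s failed after %d attempts: %w", filePath, attempts, lastErr)
}

func main() {
	// Endpoint and credentials mirror the README flags; the values here are placeholders.
	client, err := minio.New("localhost:9000", &minio.Options{
		Creds:  credentials.NewStaticV4(os.Getenv("MINIO_ACCESS_KEY"), os.Getenv("MINIO_SECRET_KEY"), ""),
		Secure: false,
	})
	if err != nil {
		panic(err)
	}
	if err := uploadWithRetry(context.Background(), client, "mybucket", "backup/file1.txt", "/path/to/file1.txt", 3); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Because minio-go v7 decides internally whether to issue a single PUT or a multipart upload based on the object size and `PartSize`, the caller never needs the removed multipart methods.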
## State File Format

Upload progress is saved in JSON files named `.resumable_upload_<session_id>.json` (for example, `.resumable_upload_upload_1234567890.json`):

```json
{
  "session_id": "upload_1234567890",
  "local_dir": "/path/to/files",
  "bucket": "mybucket",
  "prefix": "backup",
  "files": {
    "/path/to/file1.txt": {
      "file_path": "/path/to/file1.txt",
      "object_key": "backup/file1.txt",
      "size": 1024,
      "status": "completed",
      "checksum": "d41d8cd98f00b204e9800998ecf8427e",
      "last_modified": "2026-02-05T20:00:00Z"
    }
  },
  "config": {
    "endpoint": "localhost:9000",
    "part_size": 67108864,
    "max_workers": 4
  }
}
```

A sketch of Go types that match this layout appears at the end of this README.

## Docker Usage

```bash
# Build Docker image
docker build -t resumable-upload .

# Run with Docker
docker run -v /local/path:/data resumable-upload \
  upload /data mybucket backup \
  --access-key KEY --secret-key SECRET \
  --endpoint host.docker.internal:9000
```

## Building

```bash
# Build for current platform
go build -o ru .

# Build with version information
go build -ldflags "-X main.Version=1.0.0 -X main.BuildTime=$(date -u '+%Y-%m-%d_%H:%M:%S') -X main.GitCommit=$(git rev-parse --short HEAD)" -o ru .

# Cross-compile for different platforms
GOOS=linux GOARCH=amd64 go build -o ru-linux-amd64 .
GOOS=windows GOARCH=amd64 go build -o ru-windows-amd64.exe .
GOOS=darwin GOARCH=amd64 go build -o ru-darwin-amd64 .
```
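As noted in the State File Format section, the Go types below would round-trip the example state file. This is a minimal sketch for scripting against the state files; the struct and field names are inferred from the example JSON and are not taken from the tool's actual source.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"time"
)

// FileState mirrors one entry under "files" in the example state file.
type FileState struct {
	FilePath     string    `json:"file_path"`
	ObjectKey    string    `json:"object_key"`
	Size         int64     `json:"size"`
	Status       string    `json:"status"` // e.g. "completed"
	Checksum     string    `json:"checksum"`
	LastModified time.Time `json:"last_modified"`
}

// UploadSession mirrors the top-level layout of a state file.
type UploadSession struct {
	SessionID string               `json:"session_id"`
	LocalDir  string               `json:"local_dir"`
	Bucket    string               `json:"bucket"`
	Prefix    string               `json:"prefix"`
	Files     map[string]FileState `json:"files"`
	Config    struct {
		Endpoint   string `json:"endpoint"`
		PartSize   int64  `json:"part_size"`
		MaxWorkers int    `json:"max_workers"`
	} `json:"config"`
}

func main() {
	// Read and decode a state file, then report how many files are done.
	data, err := os.ReadFile(".resumable_upload_upload_1234567890.json")
	if err != nil {
		panic(err)
	}
	var s UploadSession
	if err := json.Unmarshal(data, &s); err != nil {
		panic(err)
	}
	completed := 0
	for _, f := range s.Files {
		if f.Status == "completed" {
			completed++
		}
	}
	fmt.Printf("session %s: %d/%d files completed\n", s.SessionID, completed, len(s.Files))
}
```

A resume pass can skip any entry whose `status` is `completed` and re-upload the rest.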