# HMOSLiveStream
**Repository Path**: harmonyos_samples/HMOS_LiveStream
## Basic Information
- **Project Name**: HMOSLiveStream
- **Description**: A solution implementing the live streaming end and viewer end of media live streaming based on the HarmonyOS media subsystem.
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 17
- **Forks**: 12
- **Created**: 2025-09-08
- **Last Updated**: 2025-11-17
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# Media Live Streaming Feature Based on the HarmonyOS Media Subsystem
## Overview
This sample demonstrates how to implement both the live streaming end and the viewer end of media live streaming based
on the HarmonyOS media subsystem. It covers common features in live streaming scenarios, including audio and video
capture, audio and video playback, audio focus management, ROI, background music addition, and front/rear camera
switching, and it helps you develop applications for starting and watching live streams.
- The viewer end is simulated by playing video files. The main process is to play the video files recorded on the live
streaming end by using the AVPlayer.
- The main process of video recording on the live streaming end is as follows: camera capture, OpenGL rotation,
encoding, and muxing into an MP4 file.
- The live streaming process is simulated using distributed files. Two mobile phones are used to simulate the live
streaming end and the viewer end respectively. Both must be logged in to the same HUAWEI ID and have Wi-Fi and
Bluetooth enabled to complete distributed networking. The specific process is as follows: The live streaming end saves
the recorded video file to the sandbox and copies it to the distributed directory. The viewer end copies the video file
from the distributed directory to the sandbox and plays it using the AVPlayer.
- In the recording scenario, an OpenGL rendering pipeline is added between the camera and the encoder. By following
this process, you can add your own shaders, such as beauty and filter operators, to the live streaming scenario.
### Atomic Capability Specifications Supported for Playback
| Media | Muxing Format | Stream Format |
|-------|:--------------|:---------------------------------------------|
| Video | mp4 | Video stream: H.264/H.265; audio stream: AAC |
### Atomic Capability Specifications Supported for Recording
| Muxing Format | Video Codec Type | Audio Codec Type |
|---------------|------------------|------------------|
| mp4 | H.264/H.265 | AAC |
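For reference, the following is a minimal sketch of how an MP4 muxer matching the table above (H.264 video plus AAC
audio) might be configured with the AVCodec native APIs. The function name and the resolution, sample rate, and
channel values are illustrative placeholders, and error handling is omitted.

```cpp
#include <multimedia/player_framework/native_avcodec_base.h>
#include <multimedia/player_framework/native_avformat.h>
#include <multimedia/player_framework/native_avmuxer.h>

// Illustrative sketch: create an MP4 muxer with one H.264 track and one AAC
// track. The fd comes from the file created at the UI layer; all numeric
// parameters are placeholder values.
OH_AVMuxer *CreateMp4Muxer(int32_t fd, int32_t *videoTrack, int32_t *audioTrack)
{
    OH_AVMuxer *muxer = OH_AVMuxer_Create(fd, AV_OUTPUT_FORMAT_MPEG4);

    OH_AVFormat *video = OH_AVFormat_Create();
    OH_AVFormat_SetStringValue(video, OH_MD_KEY_CODEC_MIME, OH_AVCODEC_MIMETYPE_VIDEO_AVC); // or ..._HEVC for H.265
    OH_AVFormat_SetIntValue(video, OH_MD_KEY_WIDTH, 1280);
    OH_AVFormat_SetIntValue(video, OH_MD_KEY_HEIGHT, 720);
    OH_AVMuxer_AddTrack(muxer, videoTrack, video);
    OH_AVFormat_Destroy(video);

    OH_AVFormat *audio = OH_AVFormat_Create();
    OH_AVFormat_SetStringValue(audio, OH_MD_KEY_CODEC_MIME, OH_AVCODEC_MIMETYPE_AUDIO_AAC);
    OH_AVFormat_SetIntValue(audio, OH_MD_KEY_AUD_SAMPLE_RATE, 48000);
    OH_AVFormat_SetIntValue(audio, OH_MD_KEY_AUD_CHANNEL_COUNT, 2);
    OH_AVMuxer_AddTrack(muxer, audioTrack, audio);
    OH_AVFormat_Destroy(audio);

    OH_AVMuxer_Start(muxer); // ready to accept encoded samples
    return muxer;
}
```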
### Preview
| HomePage | Page of Live Streaming End | Page of Viewer End |
|----------|----------------------------|--------------------|
| *Screenshot omitted* | *Screenshot omitted* | *Screenshot omitted* |
## How to Use
1. When a dialog box is displayed asking whether to allow **HMOSLiveStream** to access the camera, tap **Allow**.
2. When a dialog box is displayed asking whether to allow **HMOSLiveStream** to access the microphone, tap **Allow**.
3. When a dialog box is displayed asking whether to allow **HMOSLiveStream** to discover and connect to nearby devices,
tap **Allow**.
### Starting Live Stream
1. Tap **Start live stream**.
2. Confirm that the recorded file can be saved to the distributed directory.
3. After recording is complete, tap the close button in the upper-right corner to end the live stream.
### Watching Live Stream
1. After the process of starting a live stream is complete, you can start watching a live stream.
2. Tap **Watch live stream**. The live stream page is displayed.
## Project Directory
```
├──entry/src/main/cpp // Native layer
│ ├──capbilities // Capability APIs and implementation
│ │ ├──render // APIs and implementation of the display module
│ │ │ ├──include // Display module APIs
│ │ │ │ ├──egl_render_context.h // EGL rendering context APIs
│ │ │ │ ├──render_thread.h // Rendering thread APIs
│ │ │ │ └──shader_program.h // APIs for encapsulating OpenGL ES shader programs
│ │ │ ├──render_thread.cpp // Rendering thread
│ │ │ ├──egl_render_context.cpp // EGL rendering context implementation
│ │ │ └──shader_program.cpp // OpenGL ES shader program encapsulation
│ │ └──codec // Audio/video capture codec
│ │ │ ├──include // APIs for audio/video capture codec
│ │ │ ├──AudioCapturer.cpp // Audio capture implementation
│ │ │ ├──AudioDecoder.cpp // Audio decoding implementation
│ │ │ ├──AudioEncoder.cpp // Audio encoding implementation
│ │ │ ├──AudioRender.cpp // Audio rendering implementation
│ │ │ ├──CodecCallback.cpp // Codec callback
│ │ │ ├──Demuxer.cpp // Demuxing implementation
│ │ │ ├──Muxer.cpp // Muxing implementation
│ │ │ └──VideoEncoder.cpp // Video encoding implementation
│ ├──common // Common modules
│ │ ├──dfx // Logs
│ │ ├──ApiCompatibility.h // API compatibility
│ │ └──SampleInfo.h // Common classes for functionality implementation
│ ├──player // Player APIs and implementation at the native layer
│ │ ├──include // Player APIs at the native layer
│ │ │ ├──Player.h // Player invocation APIs at the native layer
│ │ │ └──PlayerNative.h // Player entry APIs at the native layer
│ │ ├──Player.cpp // Player implementation at the native layer
│ │ └──PlayerNative.cpp // Player entry at the native layer
│ ├──recorder // Recorder APIs and implementation at the native layer
│ │ ├──include // Recorder APIs at the native layer
│ │ │ ├──Recorder.h // Recorder invocation APIs at the native layer
│ │ │ └──RecorderNative.h // Recorder entry APIs at the native layer
│ │ ├──Recorder.cpp // Recorder implementation at the native layer
│ │ └──RecorderNative.cpp // Recorder entry at the native layer
│ ├──types // APIs provided by the native layer
│ │ ├──libplayer // APIs provided by the player module to the UI layer
│ │ └──librecorder // APIs provided by the recorder module to the UI layer
│ └──CMakeLists.txt // Compilation entry
├──ets // UI layer
│ ├──common // Common modules
│ │ ├──utils // Common utilities
│ │ │ ├──BackgroundTaskManager.ets // Background task utility class
│ │ │ ├──CameraCheck.ets // File to check whether the camera parameters are supported
│ │ │ ├──DateTimeUtils.ets // Time conversion utility class
│ │ │ ├──ImageUtil.ets // Image processing utility class
│ │ │ └──Logger.ets // Log utilities
│ │ ├──GlobalConstants.ets // Global variable names
│ │ └──CommonConstants.ets // Common constants
│ ├──components // Component directories
│ │ └──SettingPopupDialog.ets // Settings pop-up dialog component
│ ├──controller // Controller
│ │ ├──BgmController.ets // Background music controller
│ │ ├──CameraController.ets // Camera controller
│ │ ├──DistributeFileManager.ets // Distributed file manager
│ │ ├──VideoPlayerController.ets // Local audio and video playback controller
│ │ └──VideoSessionController.ets // Audio session controller
│ ├──entryability // Application entry
│ │ └──EntryAbility.ets
│ ├──entrybackupability
│ │ └──EntryBackupAbility.ets
│ ├──model
│ │ ├──CameraDataModel.ets // Camera parameter data class
│ │ └──SettingPopupOptionItem.ets // Settings data class
│ ├──pages // Pages contained in the EntryAbility
│ │ ├──Index.ets // Home page
│ │ ├──StartLiveStream.ets // Page of live streaming end
│ │ └──WatchLiveStream.ets // Page of viewer end
│ └──view // Views used by the pages
│ ├──AvplayerView.ets // Audio/video playback using AVPlayer at the viewer end
│ ├──StartLiveDecorationView.ets // Data page at the live streaming end
│ ├──StartLiveRenderView.ets // Renderer at the live streaming end
│ └──WatchLiveDecorationView.ets // Data page at the viewer end
├──resources // Application resource files
└──module.json5 // Module configuration
```
## How to Implement
### Starting Live Stream
#### UI Layer
1. On the Index page at the UI layer, after a user taps **Start live stream** and confirms saving the recording file to
the distributed folder, a new file is created.
2. After the file is created, the FD of the created file and the recording parameters preset by the user are used to
call **initNative()** at the native layer for recording initialization. After the initialization is complete, the
native layer calls **OH_NativeWindow_GetSurfaceId** to obtain the **surfaceId** of **NativeWindow** and transfers the
**surfaceId** to the UI layer (see the sketch after this list).
3. After obtaining the **surfaceId** from the encoder, the UI layer constructs **cameraController** and **bgmController**,
and calls the page route to redirect to the **StartLiveStream** page.
4. When the **XComponent** of the **StartLiveRenderView** component on the **StartLiveStream** page is constructed, the
**.onLoad()** method is called. This method first obtains the **surfaceId** of the **XComponent**, and then calls
**createRecorder()** and **startNative()** of **cameraController**. These calls establish a producer-consumer model in
which the camera serves as the producer, and the surfaces of the **XComponent** and the encoder serve as the consumers.
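The **surfaceId** handoff in step 2 might look like the following sketch, assuming a surface-mode video encoder that
has already been configured. The helper function name is hypothetical, and error handling is omitted.

```cpp
#include <string>
#include <multimedia/player_framework/native_avcodec_videoencoder.h>
#include <native_window/external_window.h>

// Hypothetical helper: fetch the encoder's input surface and convert it to a
// surfaceId string that the UI layer can pass to the camera session.
std::string GetEncoderSurfaceId(OH_AVCodec *encoder)
{
    OHNativeWindow *window = nullptr;
    OH_VideoEncoder_GetSurface(encoder, &window); // surface-mode input of the encoder

    uint64_t surfaceId = 0;
    OH_NativeWindow_GetSurfaceId(window, &surfaceId); // requires API 12 or later

    return std::to_string(surfaceId); // returned to ArkTS via the NAPI bridge
}
```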
#### Native Layer Encoding
1. On the recording page, the encoder starts encoding the camera preview stream rendered at the UI layer.
2. Each time the encoder successfully encodes a frame, the callback function **OnNewOutputBuffer()** in
**CodecCallback.cpp** is invoked, and the AVCodec framework provides an **OH_AVBuffer**.
3. In the output callback, you need to manually store the frame buffer and index in the output queue and wake up the
output thread.
4. The output thread pops the frame information stored in the previous step from the queue as **bufferInfo**.
5. The output thread uses **bufferInfo** obtained in the previous step to call **WriteSample** to mux the frame into
the MP4 file.
6. The output thread calls **FreeOutputBuffer** to return the buffer of this frame to the AVCodec framework, achieving
buffer cycling. Steps 2-6 are sketched below.
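The following sketch illustrates steps 2-6 using the buffer-based variants of the calls named above
(**OH_AVMuxer_WriteSampleBuffer** and **OH_VideoEncoder_FreeOutputBuffer**). The queue and context types are
illustrative, not the sample's actual identifiers.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <multimedia/player_framework/native_avcodec_videoencoder.h>
#include <multimedia/player_framework/native_avmuxer.h>

// Illustrative per-frame record and shared state between the callback thread
// and the output thread.
struct EncodedFrame {
    uint32_t index;
    OH_AVBuffer *buffer;
};

struct EncoderContext {
    std::mutex mtx;
    std::condition_variable cv;
    std::queue<EncodedFrame> outputQueue;
    OH_AVMuxer *muxer = nullptr;
    int32_t videoTrackId = 0;
    bool running = true;
};

// Steps 2-3: invoked by the AVCodec framework for every encoded frame; store
// the buffer and index, then wake the output thread.
static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
{
    (void)codec;
    auto *ctx = static_cast<EncoderContext *>(userData);
    std::lock_guard<std::mutex> lock(ctx->mtx);
    ctx->outputQueue.push({index, buffer});
    ctx->cv.notify_one();
}

// Steps 4-6: output thread body; pop a frame, mux it, recycle the buffer.
static void OutputLoop(OH_AVCodec *encoder, EncoderContext *ctx)
{
    while (ctx->running) {
        std::unique_lock<std::mutex> lock(ctx->mtx);
        ctx->cv.wait(lock, [ctx] { return !ctx->outputQueue.empty() || !ctx->running; });
        if (!ctx->running) {
            break;
        }
        EncodedFrame frame = ctx->outputQueue.front();
        ctx->outputQueue.pop();
        lock.unlock();

        // Mux the encoded frame into the MP4 file, then return the buffer to
        // the AVCodec framework so it can be reused.
        OH_AVMuxer_WriteSampleBuffer(ctx->muxer, ctx->videoTrackId, frame.buffer);
        OH_VideoEncoder_FreeOutputBuffer(encoder, frame.index);
    }
}
```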
#### Native Layer Decoding
1. The working principles are as follows.
   - After the decoder is started, **OnNeedInputBuffer** is invoked each time the decoder needs a frame, and the
     AVCodec framework provides an **OH_AVBuffer**.
   - In the input callback, you need to manually store the frame buffer and index in the input queue and wake up the
     input thread.
   - The input thread pops the frame information stored in the previous step from the queue as **bufferInfo**.
   - The input thread uses **bufferInfo** obtained in the previous step to call **ReadSample** to demux the frame.
   - The input thread uses the demuxed **bufferInfo** to call **PushInputData** of the decoder. When the buffer is no
     longer needed, the input thread returns it to the framework, achieving buffer cycling.
   - After **PushInputData** is called, the decoder starts frame decoding. Each time a frame is decoded, the output
     callback function is invoked. You need to manually store the frame buffer and index in the output queue.
   - The output thread pops the frame information stored in the previous step from the queue as **bufferInfo**.
   - After calling **FreeOutputData**, the output thread displays the frame and releases the buffer. The released
     buffer is returned to the framework, achieving buffer cycling.
2. In the decoder configuration, the input parameter **OHNativeWindow** of **OH_VideoDecoder_SetSurface** is
**pluginWindow_** in **PluginManager**.
3. In the decoder configuration, **SetCallback** is used. The input and output callbacks in **CodecCallback.cpp** must
store the callback frame buffer and index in a user-defined container (defined in **SampleInfo.h**) for subsequent
operations.
4. The **Start()** of **Player.cpp** creates two threads for input and output; the input half of this loop is sketched
below.
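The input half of the decoding loop might be sketched as follows, using the buffer-based equivalents of **ReadSample**
and **PushInputData** (**OH_AVDemuxer_ReadSampleBuffer** and **OH_VideoDecoder_PushInputBuffer**). Again, the context
and queue names are illustrative, and error handling is omitted.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>
#include <multimedia/player_framework/native_avcodec_videodecoder.h>
#include <multimedia/player_framework/native_avdemuxer.h>

// Illustrative shared state between the input callback and the input thread.
struct DecoderContext {
    std::mutex mtx;
    std::condition_variable cv;
    std::queue<std::pair<uint32_t, OH_AVBuffer *>> inputQueue;
    OH_AVDemuxer *demuxer = nullptr;
    uint32_t videoTrackId = 0;
    bool running = true;
};

// Invoked by the framework whenever the decoder can accept a frame; store the
// empty buffer and its index, then wake the input thread.
static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
{
    (void)codec;
    auto *ctx = static_cast<DecoderContext *>(userData);
    std::lock_guard<std::mutex> lock(ctx->mtx);
    ctx->inputQueue.push({index, buffer});
    ctx->cv.notify_one();
}

// Input thread body: fill each queued buffer from the demuxer and hand it back
// to the decoder; the framework reclaims the buffer, completing the cycle.
static void InputLoop(OH_AVCodec *decoder, DecoderContext *ctx)
{
    while (ctx->running) {
        std::unique_lock<std::mutex> lock(ctx->mtx);
        ctx->cv.wait(lock, [ctx] { return !ctx->inputQueue.empty() || !ctx->running; });
        if (!ctx->running) {
            break;
        }
        auto [index, buffer] = ctx->inputQueue.front();
        ctx->inputQueue.pop();
        lock.unlock();

        OH_AVDemuxer_ReadSampleBuffer(ctx->demuxer, ctx->videoTrackId, buffer);
        OH_VideoDecoder_PushInputBuffer(decoder, index);
    }
}
```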
### Watching Live Stream
#### UI Layer
1. On the Index page at the UI layer, when a user taps **Watch live stream**, the tap event is triggered. The
application copies the video file in the distributed directory to the sandbox and selects the latest video file for
playback.
2. The video file information is transferred to **AvplayerView**, and the AVPlayer is used to play the video.
## Required Permissions
- **ohos.permission.CAMERA**: allows an application to use the camera.
- **ohos.permission.MICROPHONE**: allows an application to use the microphone.
- **ohos.permission.DISTRIBUTED_DATASYNC**: allows an application to synchronize data between devices, which is required for accessing distributed files.
- **ohos.permission.KEEP_BACKGROUND_RUNNING**: allows an application to run in the background.
## Dependencies
- N/A
## Constraints
1. This sample is supported only on Huawei phones running the standard system.
2. The HarmonyOS version must be HarmonyOS 5.0.5 Release or later.
3. The DevEco Studio version must be DevEco Studio 6.0.0 Release or later.
4. The HarmonyOS SDK version must be HarmonyOS 6.0.0 Release SDK or later.