The audio framework is used to implement audio-related features, including audio playback, audio recording, volume management, and device management.
Figure 1 Architecture of the audio framework
Sampling
Sampling is a process to obtain discrete-time signals by extracting samples from analog signals in a continuous time domain at a specific interval.
Sampling rate
Sampling rate is the number of samples extracted from a continuous signal per second to form a discrete signal. It is measured in Hz. Generally, the human hearing range is from 20 Hz to 20 kHz. Common audio sampling rates include 8 kHz, 11.025 kHz, 22.05 kHz, 16 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, 96 kHz, and 192 kHz.
Channel
Channels refer to different spatial positions where independent audio signals are recorded or played. The number of channels is the number of audio sources used during audio recording, or the number of speakers used for audio playback.
Audio frame
Audio data is streamed. To facilitate audio algorithm processing and transmission, the data captured over a unit of 2.5 to 60 milliseconds is, by convention, treated as one audio frame. This unit, called the sampling time, depends on the codec and the application's requirements.
PCM
Pulse code modulation (PCM) is a method used to digitally represent sampled analog signals. It converts continuous-time analog signals into discrete-time digital signal samples.
The structure of the repository directory is as follows:
/foundation/multimedia/audio_standard # Service code of the audio framework
├── frameworks # Framework code
│ ├── native # Internal native API implementation
│ └── js # External JS API implementation
│ └── napi # External native API implementation
├── interfaces # API code
│ ├── inner_api # Internal APIs
│ └── kits # External APIs
├── sa_profile # Service configuration profile
├── services # Service code
├── LICENSE # License file
└── bundle.json # Build file
You can use the APIs provided in the current repository to convert audio data into audible analog signals, play the audio signals using output devices, and manage playback tasks. The following describes how to use the AudioRenderer class to develop the audio playback feature:
Call Create() with the required stream type to create an AudioRenderer instance.
AudioStreamType streamType = STREAM_MUSIC; // Stream type example.
std::unique_ptr<AudioRenderer> audioRenderer = AudioRenderer::Create(streamType);
(Optional) Call the static APIs GetSupportedFormats(), GetSupportedChannels(), GetSupportedEncodingTypes(), and GetSupportedSamplingRates() to obtain the supported values of parameters.
Prepare the device and call SetParams() to set parameters.
AudioRendererParams rendererParams;
rendererParams.sampleFormat = SAMPLE_S16LE;
rendererParams.sampleRate = SAMPLE_RATE_44100;
rendererParams.channelCount = STEREO;
rendererParams.encodingType = ENCODING_PCM;
audioRenderer->SetParams(rendererParams);
(Optional) Call GetParams(rendererParams) to obtain the parameters set.
Call Start() to start an audio playback task.
Call GetBufferSize() to obtain the length of the buffer to be written.
audioRenderer->GetBufferSize(bufferLen);
Read the audio data from the source (such as an audio file) into a buffer, and call Write() to pass it to the stream. You can call Write() repeatedly to write rendering data.
bytesToWrite = fread(buffer, 1, bufferLen, wavFile);
while ((bytesWritten < bytesToWrite) && ((bytesToWrite - bytesWritten) > minBytes)) {
    int32_t written = audioRenderer->Write(buffer + bytesWritten, bytesToWrite - bytesWritten);
    if (written < 0) { // Write error: stop instead of looping forever.
        break;
    }
    bytesWritten += written;
}
Call Drain() to drain the playback stream, that is, to finish playing any data remaining in the buffer.
Call Stop() to stop the output.
After the playback task is complete, call Release() to release resources.
The preceding steps describe the basic development scenario of audio playback.
For details, see audio_renderer.h and audio_info.h.
You can use the APIs provided in the current repository to record audio via an input device, convert the audio into audio data, and manage recording tasks. The following describes how to use the AudioCapturer class to develop the audio recording feature:
Call Create() with the required stream type to create an AudioCapturer instance.
AudioStreamType streamType = STREAM_MUSIC;
std::unique_ptr<AudioCapturer> audioCapturer = AudioCapturer::Create(streamType);
(Optional) Call the static APIs GetSupportedFormats(), GetSupportedChannels(), GetSupportedEncodingTypes(), and GetSupportedSamplingRates() to obtain the supported values of parameters.
Prepare the device and call SetParams() to set parameters.
AudioCapturerParams capturerParams;
capturerParams.sampleFormat = SAMPLE_S16LE;
capturerParams.sampleRate = SAMPLE_RATE_44100;
capturerParams.channelCount = STEREO;
capturerParams.encodingType = ENCODING_PCM;
audioCapturer->SetParams(capturerParams);
(Optional) Call GetParams(capturerParams) to obtain the parameters set.
Call Start() to start an audio recording task.
Call GetBufferSize() to obtain the length of the buffer into which captured data is read.
audioCapturer->GetBufferSize(bufferLen);
Call Read() to read the captured audio data into a byte buffer. The application calls this API repeatedly to read data until recording is stopped.
// Set isBlockingRead = true for a blocking read, false for a non-blocking read.
while (numBuffersToCapture) {
    bytesRead = audioCapturer->Read(*buffer, bufferLen, isBlockingRead);
    if (bytesRead < 0) {
        break;
    } else if (bytesRead > 0) {
        fwrite(buffer, size, bytesRead, recFile); // Write the captured data into a file.
        numBuffersToCapture--;
    }
}
(Optional) Call Flush() to clear the recording stream buffer.
Call Stop() to stop recording.
After the recording task is complete, call Release() to release resources.
For details, see audio_capturer.h and audio_info.h.
You can use the APIs provided in audio_system_manager.h to control the volume and devices.
AudioSystemManager *audioSystemMgr = AudioSystemManager::GetInstance();
AudioVolumeType streamType = AudioVolumeType::STREAM_MUSIC;
int32_t maxVol = audioSystemMgr->GetMaxVolume(streamType);
int32_t minVol = audioSystemMgr->GetMinVolume(streamType);
int32_t result = audioSystemMgr->SetVolume(streamType, 10);
int32_t vol = audioSystemMgr->GetVolume(streamType);
result = audioSystemMgr->SetMute(streamType, true);
bool isMute = audioSystemMgr->IsStreamMute(streamType);
result = audioSystemMgr->SetRingerMode(RINGER_MODE_SILENT);
AudioRingerMode ringMode = audioSystemMgr->GetRingerMode();
result = audioSystemMgr->SetMicrophoneMute(true);
bool isMicMute = audioSystemMgr->IsMicrophoneMute();
DeviceFlag deviceFlag = OUTPUT_DEVICES_FLAG;
vector<sptr<AudioDeviceDescriptor>> audioDeviceDescriptors
= audioSystemMgr->GetDevices(deviceFlag);
sptr<AudioDeviceDescriptor> audioDeviceDescriptor = audioDeviceDescriptors[0];
cout << audioDeviceDescriptor->deviceType_;
cout << audioDeviceDescriptor->deviceRole_;
ActiveDeviceType deviceType = SPEAKER;
result = audioSystemMgr->SetDeviceActive(deviceType, true);
bool isDevActive = audioSystemMgr->IsDeviceActive(deviceType);
JS applications can subscribe to volume change events through the audio manager:
import audio from '@ohos.multimedia.audio';

const audioManager = audio.getAudioManager();
export default {
  onCreate() {
    audioManager.on('volumeChange', (volumeChange) => {
      console.info('volumeType = ' + volumeChange.volumeType);
      console.info('volume = ' + volumeChange.volume);
      console.info('updateUi = ' + volumeChange.updateUi);
    });
  }
}
Call SetAudioScene() to set the audio scene and GetAudioScene() to obtain the current one.
int32_t result = audioSystemMgr->SetAudioScene(AUDIO_SCENE_PHONE_CALL);
AudioScene audioScene = audioSystemMgr->GetAudioScene();
For details about the supported audio scenes, see the enumerated values of AudioScene defined in audio_info.h.
You can use the APIs provided in audio_stream_manager.h to implement stream management.
Call GetInstance() to obtain an AudioStreamManager instance.
AudioStreamManager *audioStreamMgr = AudioStreamManager::GetInstance();
Call RegisterAudioRendererEventListener() to register a listener for renderer state changes. A callback will be invoked when the renderer state changes. You can override OnRendererStateChange() in the AudioRendererStateChangeCallback class.
const int32_t clientPid = getpid(); // Example: listen for this process's streams.
class RendererStateChangeCallback : public AudioRendererStateChangeCallback {
public:
RendererStateChangeCallback() = default;
~RendererStateChangeCallback() = default;
void OnRendererStateChange(
const std::vector<std::unique_ptr<AudioRendererChangeInfo>> &audioRendererChangeInfos) override
{
cout<<"OnRendererStateChange entered"<<endl;
}
};
std::shared_ptr<AudioRendererStateChangeCallback> callback = std::make_shared<RendererStateChangeCallback>();
int32_t state = audioStreamMgr->RegisterAudioRendererEventListener(clientPid, callback);
int32_t result = audioStreamMgr->UnregisterAudioRendererEventListener(clientPid);
Call RegisterAudioCapturerEventListener() to register a listener for capturer state changes. A callback will be invoked when the capturer state changes. You can override OnCapturerStateChange() in the AudioCapturerStateChangeCallback class.
const int32_t clientPid = getpid(); // Example: listen for this process's streams.
class CapturerStateChangeCallback : public AudioCapturerStateChangeCallback {
public:
CapturerStateChangeCallback() = default;
~CapturerStateChangeCallback() = default;
void OnCapturerStateChange(
const std::vector<std::unique_ptr<AudioCapturerChangeInfo>> &audioCapturerChangeInfos) override
{
cout<<"OnCapturerStateChange entered"<<endl;
}
};
std::shared_ptr<AudioCapturerStateChangeCallback> callback = std::make_shared<CapturerStateChangeCallback>();
int32_t state = audioStreamMgr->RegisterAudioCapturerEventListener(clientPid, callback);
int32_t result = audioStreamMgr->UnregisterAudioCapturerEventListener(clientPid);
Call GetCurrentRendererChangeInfos() to obtain information about all running renderers, including the client UID, session ID, renderer information, renderer state, and output device details.
std::vector<std::unique_ptr<AudioRendererChangeInfo>> audioRendererChangeInfos;
int32_t currentRendererChangeInfo = audioStreamMgr->GetCurrentRendererChangeInfos(audioRendererChangeInfos);
Call GetCurrentCapturerChangeInfos() to obtain information about all running capturers, including the client UID, session ID, capturer information, capturer state, and input device details.
std::vector<std::unique_ptr<AudioCapturerChangeInfo>> audioCapturerChangeInfos;
int32_t currentCapturerChangeInfo = audioStreamMgr->GetCurrentCapturerChangeInfos(audioCapturerChangeInfos);
For details, see audioRendererChangeInfos and audioCapturerChangeInfos in audio_info.h.
Call IsAudioRendererLowLatencySupported() to check whether low latency is supported.
AudioStreamInfo audioStreamInfo; // Fill in the stream parameters to check.
bool isLatencySupport = audioStreamMgr->IsAudioRendererLowLatencySupported(audioStreamInfo);
JavaScript applications can call the audio management APIs to control the volume and devices.
For details, see js-apis-audio.md.
You can use the APIs provided in audio_bluetooth_manager.h to implement Bluetooth calls over synchronous connection-oriented (SCO) links.
Implement OnScoStateChanged() to listen for SCO link state changes.
// Callback invoked when the SCO link state of a remote Bluetooth device changes.
void OnScoStateChanged(const BluetoothRemoteDevice &device, int state);
(Optional) Call the static API RegisterBluetoothScoAgListener() to register a Bluetooth SCO listener, and call UnregisterBluetoothScoAgListener() to unregister the listener when it is no longer required.
The following lists the device types supported by the audio framework.
USB Type-C Headset
A digital headset that consists of its own digital-to-analog converter (DAC) and amplifier that functions as part of the headset.
Wired Headset
An analog headset that does not contain a built-in DAC. It can have a 3.5 mm jack or a USB-C connector without a DAC.
Bluetooth Headset
A Bluetooth headset that uses the Advanced Audio Distribution Profile (A2DP) for wireless audio transmission.
Internal Speaker and MIC
A device with a built-in speaker and microphone, which are used as default devices for playback and recording, respectively.