
How to manage audio playback through WASAPI


[This tutorial applies to Windows Vista and later versions only]

 

Starting from Windows Vista, Microsoft has rewritten the multimedia sub-system of the Windows operating system from the ground up; at the same time Microsoft introduced a new API, known as the Core Audio API, which allows interacting with the multimedia sub-system and with audio endpoint devices (sound cards).

 

The Core Audio APIs implemented in Windows Vista and higher versions are the following:

 

Multimedia Device (MMDevice) API. Clients use this API to enumerate the audio endpoint devices in the system.

DeviceTopology API. Clients use this API to directly access the topological features (for example, volume controls and multiplexers) that lie along the data paths inside hardware devices in audio adapters.

EndpointVolume API. Clients use this API to directly access the volume controls on audio endpoint devices. This API is primarily used by applications that manage exclusive-mode audio streams.

Windows Audio Session API (WASAPI). Clients use this API to create and manage audio streams to and from audio endpoint devices.

 

In general, WASAPI operates in two modes:

 

In exclusive mode (also called DMA mode), unmixed audio streams are rendered directly to the audio adapter: no other application's audio will play and signal processing has no effect. Exclusive mode is useful for applications that demand the least amount of intermediate processing of the audio data or that want to output compressed audio data, such as Dolby Digital, DTS or WMA Pro, over S/PDIF.

 

In shared mode, audio streams are rendered by the application and per-stream audio effects, known as Local Effects (LFX), such as per-session volume control, may optionally be applied. The streams are then mixed by the global audio engine, where a set of global audio effects (GFX) may be applied, and finally rendered on the audio device. Unlike in Windows XP and older versions, there is no longer a direct path from DirectSound to the audio drivers: DirectSound and MME are totally emulated on top of WASAPI working in shared mode, which results in pre-mixed PCM audio being sent to the driver in a single format (in terms of sample rate, bit depth and channel count). This format is configurable by the end user through the "Advanced" tab of the Sounds applet of the Control Panel, as seen in the picture below:

 

 

In order to enable the usage of WASAPI you must call the InitDriversType method with the nDriverType parameter set to DRIVER_TYPE_WASAPI. The call to the InitDriversType method is mandatory before calling the InitEditor method, the OutputDeviceGetCount and OutputDeviceGetDesc methods, and any other method belonging to the WASAPIMan class: if the InitDriversType method were called at a later time, it would report back an error. If for any reason you need to call it at a later time, you have to perform the following sequence of calls (sketched in the code example after the list):

 

1. ResetEngine method

2. InitDriversType method

3. Optionally, a new enumeration of output devices through the combination of the OutputDeviceGetCount and OutputDeviceGetDesc methods (or through the combination of the WASAPI.RenderDeviceGetCount and WASAPI.RenderDeviceGetDesc methods).

4. ResetControl method
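
 

As a reference, the fragment below sketches, in Visual C#.NET, both the initial activation of WASAPI and the re-initialization sequence described above. It assumes that the component has been added to the container form as an instance named audioSoundEditor1, that the DRIVER_TYPE_WASAPI value belongs to an enumeration named enumDriverTypes and that the involved methods simply return an error code: the exact signatures and enumeration names should be verified against the API reference.

// enable WASAPI before any other initialization (hypothetical signatures)
audioSoundEditor1.InitDriversType(enumDriverTypes.DRIVER_TYPE_WASAPI);
audioSoundEditor1.InitEditor();

// later, if for any reason the drivers type must be initialized again,
// follow the sequence described in the list above
audioSoundEditor1.ResetEngine();                                        // 1. reset the engine
audioSoundEditor1.InitDriversType(enumDriverTypes.DRIVER_TYPE_WASAPI);  // 2. re-initialize the drivers type
// 3. optionally enumerate the output devices again (see below)
audioSoundEditor1.ResetControl();                                       // 4. reset the control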

 

After initializing the usage of WASAPI through the InitDriversType method, Audio Sound Editor for .NET gives access to WASAPI itself through the WASAPIMan class accessible through the WASAPI property.

 

WASAPI can manage three different types of devices:

 

Render devices are playback devices where audio data flows from the application to the audio endpoint device, which renders the audio stream.
Capture devices are recording devices where audio data flows from the audio endpoint device, which captures the audio stream, to the application.
Loopback devices are recording devices that capture the mix of all of the audio streams being rendered by a specific render device, even if the audio streams are being played by a third-party multimedia application like Windows Media Player: each render device always has a corresponding loopback device.

 

Audio Sound Editor for .NET allows managing Render devices only, meaning that it can only be used for sound playback purposes: in case you should need to manage Capture devices and Loopback devices as well, for example for sound recording purposes or for direct playback of the sound exposed by a loopback device, you should consider using our Audio Dj Studio for .NET or Audio Sound Recorder for .NET components, which come with a more extensive coverage of WASAPI and Core Audio features.

 

Available WASAPI render devices can be enumerated through the WASAPI.RenderDeviceGetCount and WASAPI.RenderDeviceGetDesc methods; if you only need to enumerate output devices, you can also use the combination of the OutputDeviceGetCount and OutputDeviceGetDesc methods. In both cases, only devices reported as "Enabled" by the system will be listed: unplugged or disabled devices will not be enumerated.
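
 

The fragment below is a minimal enumeration sketch: it assumes that WASAPI.RenderDeviceGetCount returns the number of available render devices and that WASAPI.RenderDeviceGetDesc accepts a zero-based index and returns the friendly name of the device; the actual signatures may use reference parameters instead, so check the API reference.

// list the friendly names of the available WASAPI render devices (hypothetical signatures)
int count = audioSoundEditor1.WASAPI.RenderDeviceGetCount();
for (int index = 0; index < count; index++)
{
    string description = audioSoundEditor1.WASAPI.RenderDeviceGetDesc(index);
    Console.WriteLine("Device {0}: {1}", index, description);
}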

 

 

Important note

 

Differently from the usage of DirectSound drivers, when using WASAPI drivers there is no need to reset the engine when an audio device is added to or removed from the system: new calls to the WASAPI.RenderDeviceGetCount and WASAPI.RenderDeviceGetDesc methods will report the change.

 

 

Before using an output device for playback, starting the device itself is a mandatory operation: for exclusive mode you can use the WASAPI.RenderDeviceStartExclusive method, while for shared mode you can use the WASAPI.RenderDeviceStartShared method. In both cases the started device can be stopped through the WASAPI.RenderDeviceStop method. You can check whether a device has already been started through the WASAPI.RenderDeviceIsStarted method.
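
 

For instance, the lines below stop a render device only when it is reported as already started; they assume that the device is identified by a zero-based index and that WASAPI.RenderDeviceIsStarted returns a boolean value, which may differ from the actual signatures.

// stop the render device only if it was previously started (hypothetical signatures)
int deviceIndex = 0;
if (audioSoundEditor1.WASAPI.RenderDeviceIsStarted(deviceIndex))
    audioSoundEditor1.WASAPI.RenderDeviceStop(deviceIndex);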

 

For exclusive mode you need to start the device by specifying, inside the call to the WASAPI.RenderDeviceStartExclusive method, the playback format, represented by the frequency and the number of channels: you can verify whether a WASAPI device supports a specific format through the WASAPI.RenderDeviceIsFormatSupported method.
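
 

The sketch below starts a device in exclusive mode with a 44100 Hz stereo format after verifying that the format is supported; the parameter order and types of the two methods are assumptions, so check the API reference for the exact signatures.

// start device 0 in exclusive mode with a 44.1 kHz stereo format (hypothetical signatures)
int deviceIndex = 0;
int frequency = 44100;
int channels = 2;
if (audioSoundEditor1.WASAPI.RenderDeviceIsFormatSupported(deviceIndex, frequency, channels))
    audioSoundEditor1.WASAPI.RenderDeviceStartExclusive(deviceIndex, frequency, channels);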

 

For shared mode you directly rely on the playback format chosen in the Sound applet of the Windows Control Panel: you can retrieve the current format through the WASAPI.RenderDeviceSharedFormatGet method.
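
 

In shared mode no format needs to be passed because the shared format is defined by the system; the sketch below simply queries it for display purposes and then starts the device, assuming that WASAPI.RenderDeviceSharedFormatGet returns the frequency and the number of channels through reference parameters (the actual signature may differ).

// start device 0 in shared mode after displaying the system-defined shared format (hypothetical signatures)
int deviceIndex = 0;
int sharedFrequency = 0;
int sharedChannels = 0;
audioSoundEditor1.WASAPI.RenderDeviceSharedFormatGet(deviceIndex, ref sharedFrequency, ref sharedChannels);
Console.WriteLine("Shared format: {0} Hz, {1} channels", sharedFrequency, sharedChannels);
audioSoundEditor1.WASAPI.RenderDeviceStartShared(deviceIndex);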

 

WASAPI clients can individually control the volume level of each audio session. WASAPI applies the volume setting of a session uniformly to all of the streams in the session; you can modify the session volume through the WASAPI.RenderDeviceVolumeSet method and retrieve the current volume through the WASAPI.RenderDeviceVolumeGet method. In case you should need to get/set the master volume of a given WASAPI device, shared by all running processes, you should use the CoreAudioDevices.MasterVolumeGet and CoreAudioDevices.MasterVolumeSet methods exposed by the Audio Dj Studio for .NET and/or Audio Sound Recorder for .NET components.
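
 

As an example, the lines below set the session volume of a render device to half and read it back; they assume that the volume is expressed as a percentage in the range from 0 to 100 and that the device is identified by a zero-based index, which may not match the actual signatures.

// set the session volume of device 0 to 50% and read it back (hypothetical signatures)
int deviceIndex = 0;
audioSoundEditor1.WASAPI.RenderDeviceVolumeSet(deviceIndex, 50.0f);
float currentVolume = audioSoundEditor1.WASAPI.RenderDeviceVolumeGet(deviceIndex);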

 

 

Samples of usage of WASAPI in Visual C#.NET and Visual Basic.NET can be found inside the following projects installed with the product's setup package:

- WasapiPlayer