[Docs] Added missing code examples
sierpinskid committed Nov 21, 2023
1 parent 57552e1 commit 50dc4d7
102 changes: 82 additions & 20 deletions docusaurus/docs/Unity/02-tutorials/02-audio-room.mdx
Next, we'll create the **UI Manager** script to reference all UI elements and handle user interactions.
2. Open this script in your code editor and paste the following content:
```csharp
using System;
using System.Linq;
using TMPro;
using System.Collections.Generic;
using System.Threading.Tasks;
using StreamVideo.Core;
using StreamVideo.Core.StatefulModels;
using StreamVideo.Libs.Auth;
using UnityEngine;
using UnityEngine.UI;

public class AudioRoomsUI : MonoBehaviour
{
    // ... (class body collapsed in the diff view) ...

    private async void OnLeaveButtonClicked()
    {
        // ... (collapsed in the diff view)
    }
}
```

### Step 6 - Capture Microphone Input

Now, let's capture audio input from our microphone device and send it into the call.

```csharp
private void SetActiveMicrophone(int optionIndex)
{
    // ... (collapsed in the diff view)

    _microphoneAudioSource.clip
        = Microphone.Start(_activeMicrophoneDevice, true, 3, AudioSettings.outputSampleRate);
    _microphoneAudioSource.loop = true;
    _microphoneAudioSource.volume = 0; // Set volume to 0 so we don't hear back our microphone
    _microphoneAudioSource.Play();
}
```
```csharp
if (_activeMicrophoneDevice != null)
{
    // ... (collapsed in the diff view)
}
```
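When switching devices, it's also worth ending any previous recording first. A minimal sketch using Unity's `Microphone.End` and `Microphone.IsRecording` (an assumption on our part — the collapsed portion of the snippet may already handle this cleanup):

```csharp
// Sketch: stop capturing from the previously selected device, if any
if (_activeMicrophoneDevice != null && Microphone.IsRecording(_activeMicrophoneDevice))
{
    Microphone.End(_activeMicrophoneDevice);
}

// Then select the new device from Unity's list of microphones
_activeMicrophoneDevice = Microphone.devices[optionIndex];
```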

We use Unity's `Microphone.Start` method to capture the audio from the microphone device and stream it into the `_microphoneAudioSource` AudioSource component. We'll later set the `_microphoneAudioSource` component as an audio input source.

:::note

...

:::

The code above:
2. Passes the `AudioSource` to the `_audioRoomsManager.SetInputAudioSource` method. We'll create this method in the next step.
3. Calls the `SetActiveMicrophone(0);` to capture audio from the first microphone in the dropdown.
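Putting these steps together, the `InitMicrophone` method could look roughly like this. This is a sketch, not the exact tutorial code — item 1 of the list is collapsed in the diff view, so the dropdown-population part is an assumption (it uses `Microphone.devices` and the `System.Linq` `ToList` extension already imported at the top of the file):

```csharp
private void InitMicrophone()
{
    // (Assumed) populate the dropdown with the available microphone devices
    _microphoneDropdown.ClearOptions();
    _microphoneDropdown.AddOptions(Microphone.devices.ToList());

    // Pass the AudioSource to the audio rooms manager (step 2)
    _audioRoomsManager.SetInputAudioSource(_microphoneAudioSource);

    // Capture audio from the first microphone in the dropdown (step 3)
    SetActiveMicrophone(0);
}
```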

Next, add these lines to the end of the `Awake` method in the `AudioRoomsUI` class:
```csharp
// SetActiveMicrophone will be called when a microphone is picked in the dropdown menu
_microphoneDropdown.onValueChanged.AddListener(SetActiveMicrophone);

InitMicrophone();
```

Next, open the `AudioRoomsManager.cs` class in the code editor.

First, add this field anywhere in the class:
```csharp
private AudioSource _inputAudioSource;
```

Next, add this method:
```csharp
public void SetInputAudioSource(AudioSource audioSource)
{
_inputAudioSource = audioSource;

// If client is already created, update the audio input source
if (_client != null)
{
_client.SetAudioInputSource(_inputAudioSource);
}
}
```

And finally, replace the `Start` method with the following code:
```csharp
private async void Start()
{
// Create Client instance
_client = StreamVideoClient.CreateDefaultClient();

var credentials = new AuthCredentials(_apiKey, _userId, _userToken);

try
{
// Connect user to Stream server
await _client.ConnectUserAsync(credentials);
Debug.Log($"User `{_userId}` is connected to Stream server");

// highlight-start
// Set audio input source
if (_inputAudioSource != null)
{
_client.SetAudioInputSource(_inputAudioSource);
}
// highlight-end
}
catch (Exception e)
{
// Log potential issues that occurred while connecting
Debug.LogException(e);
}
}
```

What changed is the part highlighted above: it sets the **Audio Input Source** in case `SetInputAudioSource` was called before the client was created.

### Step 7 - Create UI scene objects

Go to the scene Hierarchy window and create the `Canvas` game object. One way to do this is by selecting `GameObject -> UI -> Canvas` from the top menu.
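If you prefer setting this up from code, a roughly equivalent bootstrap script could create the same object. This is a hypothetical alternative, not part of the tutorial — the `CanvasBootstrap` name is ours, and the components added mirror what the editor menu creates (note the menu also adds an `EventSystem`, omitted here):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical alternative: create the Canvas from a script instead of the editor menu
public class CanvasBootstrap : MonoBehaviour
{
    private void Awake()
    {
        var canvasGo = new GameObject("Canvas");

        var canvas = canvasGo.AddComponent<Canvas>();
        canvas.renderMode = RenderMode.ScreenSpaceOverlay;

        // These two components match what GameObject -> UI -> Canvas adds
        canvasGo.AddComponent<CanvasScaler>();
        canvasGo.AddComponent<GraphicRaycaster>();
    }
}
```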

Next, in the `OnDestroy` method (automatically called by Unity), we unsubscribe from the `TrackAdded` event:
```csharp
private void OnDestroy()
{
// It's a good practice to always unsubscribe from events
_participant.TrackAdded -= OnTrackAdded;
}
```

And lastly, we've defined two fields to keep the `AudioSource` and the participant references:
```csharp
// This AudioSource will play the audio received from the participant
private AudioSource _audioOutputAudioSource;

// Keep reference so we can unsubscribe from events in OnDestroy
private IStreamVideoCallParticipant _participant;
```

---
```csharp
private void OnParticipantLeft(string sessionId, string userid)
{
    // ... (collapsed in the diff view)

var audioCallParticipant = _callParticipantBySessionId[sessionId];

// Destroy the game object representing a participant
Destroy(audioCallParticipant.gameObject);

// Remove entry from the dictionary
_callParticipantBySessionId.Remove(sessionId);
}
```

```csharp
public async Task LeaveCallAsync()
{
    // ... (collapsed in the diff view)
_activeCall.ParticipantLeft -= OnParticipantLeft;

await _activeCall.LeaveAsync();

// Destroy all call participants objects
foreach (var participant in _callParticipantBySessionId.Values)
{
Destroy(participant.gameObject);
}

_callParticipantBySessionId.Clear();
}
```

We've extended the previous implementation by unsubscribing from the `ParticipantJoined` and `ParticipantLeft` events and by destroying all remaining participant objects when leaving the call.
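As a usage sketch, a UI handler could await this method and log failures. This wiring is hypothetical — it mirrors the `OnLeaveButtonClicked` handler referenced earlier, but the exact tutorial version may differ:

```csharp
// Hypothetical handler calling LeaveCallAsync from the leave button
private async void OnLeaveButtonClicked()
{
    try
    {
        await _audioRoomsManager.LeaveCallAsync();
    }
    catch (Exception e)
    {
        Debug.LogException(e);
    }
}
```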

### Step 9 - Test

We're now ready to test the app! To test it, we need two instances of the app. Each running instance of the app will use a microphone and a speaker. This may result in a permission conflict if multiple applications attempt to use the microphone and camera simultaneously. Therefore, a good way to test it is by using your PC and a smartphone. Depending on what type of device you have, you can follow the [Android](../04-platforms/02-android.mdx) or [iOS](../04-platforms/03-ios.mdx) sections to learn how to build the app for mobile platforms.

Once you launch the app on two separate devices, provide the same **call Id** on both devices to join the same audio call.

