Introduction

Imagine stepping into a virtual meeting room, only to be met with technical glitches that disrupt the flow of conversation. This is where the concept of a Pre-call Check comes into play. Much like a detailed pre-flight checklist for pilots, a Pre-call Check acts as a crucial preparatory phase, allowing developers to troubleshoot and optimize user settings before the main event.

In this article, we'll explore pre-call check integration in JavaScript and provide a comprehensive guide on how to effectively implement this feature, ensuring your users can engage without interruption.

Why is it Necessary?

Why invest time and effort into crafting a precall experience, you wonder? Well, picture this scenario: your users eagerly join a video call, only to encounter a myriad of technical difficulties—muted microphones, pixelated cameras, and laggy connections. Not exactly the smooth user experience you had in mind, right?

By integrating a robust precall process into your app, developers become the unsung heroes, preemptively addressing potential pitfalls and ensuring that users step into their video calls with confidence.

Step-by-Step Guide: Integrating the Precall Feature

Step 1: Check Permissions

  • Begin by ensuring that your application has the necessary permissions to access user devices such as cameras, microphones, and speakers.
  • Utilize the checkPermissions() method of the VideoSDK class to verify if permissions are granted.
const checkMediaPermission = async () => {
  //Each of these methods returns a Promise that resolves to a Map<string, boolean> object.
  const checkAudioPermission = await VideoSDK.checkPermissions("audio"); //For getting audio permission
  const checkVideoPermission = await VideoSDK.checkPermissions("video"); //For getting video permission
  const checkAudioVideoPermission = await VideoSDK.checkPermissions(
    "audio_video"
  ); //For getting both audio and video permissions
  // Output: Map object for both audio and video permission:
  /*
        Map(2)
        0 : {"audio" => true}
            key: "audio"
            value: true
        1 : {"video" => true}
            key: "video"
            value: true
    */
};
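
To act on this result, you might read the returned Map and request access only for the permissions that are still missing. The following is a minimal sketch; the ensurePermissions helper is our own naming and simply combines checkPermissions() with the requestPermission() call covered in Step 2.

//Minimal sketch (not part of the SDK): check both permissions and prompt only if something is missing.
const ensurePermissions = async () => {
  //The returned Map uses "audio" and "video" as keys.
  const permissions = await VideoSDK.checkPermissions("audio_video");
  if (!permissions.get("audio") || !permissions.get("video")) {
    //Prompt the user for access (see Step 2 for requestPermission()).
    await VideoSDK.requestPermission("audio_video");
  }
};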

When microphone and camera permissions are blocked, rendering the device lists is not possible.

Step 2: Request Permissions

If permissions are not granted, use the requestPermission() method of the VideoSDK class to prompt users to grant access to their devices.

const requestAudioVideoPermission = async () => {
  try {
    //Each of these methods returns a Promise that resolves to a Map<string, boolean> object.
    const requestAudioPermission = await VideoSDK.requestPermission("audio"); //For Requesting Audio Permission
    const requestVideoPermission = await VideoSDK.requestPermission("video"); //For Requesting Video Permission
    const requestAudioVideoPermission = await VideoSDK.requestPermission(
      "audio_video"
    ); //For Requesting Audio and Video Permissions
  } catch (ex) {
    console.log("Error in requestPermission ", ex);
  }
};

When permissions have not already been granted, the user is prompted to allow access.

Step 3: Render Device List

  • Once you have the necessary permissions, fetch and render the lists of available camera, microphone, and speaker devices using the getCameras(), getMicrophones(), and getPlaybackDevices() methods of the VideoSDK class, respectively.
  • Enable users to select their preferred devices from these lists (a rendering sketch follows the code below).
const getMediaDevices = async () => {
  try {
    //Method to get all available webcams.
    //It returns a Promise that is resolved with an array of CameraDeviceInfo objects describing the video input devices.
    let webcams = await VideoSDK.getCameras();
    //Method to get all available Microphones.
    //It returns a Promise that is resolved with an array of MicrophoneDeviceInfo objects describing the audio input devices.
    let mics = await VideoSDK.getMicrophones();
    //Method to get all available speakers.
    //It returns a Promise that is resolved with an array of PlaybackDeviceInfo objects describing the playback devices.
    let speakers = await VideoSDK.getPlaybackDevices();
  } catch (err) {
    console.log("Error in getting audio or video devices", err);
  }
};
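
To let users pick their preferred devices, the fetched lists can be rendered into simple dropdowns. Below is a minimal sketch; it assumes each device object exposes deviceId and label fields (as standard MediaDeviceInfo objects do) and that <select> elements with the hypothetical IDs "mic-select", "webcam-select", and "speaker-select" exist in your markup.

//Minimal sketch: populate a <select> element with the fetched devices.
const renderDeviceList = (devices, selectElementId) => {
  //Assumes a <select> element with this id exists in the DOM.
  const select = document.getElementById(selectElementId);
  select.innerHTML = "";
  devices.forEach((device) => {
    const option = document.createElement("option");
    //Assumes each device object exposes deviceId and label, like MediaDeviceInfo.
    option.value = device.deviceId;
    option.textContent = device.label || "Unnamed device";
    select.appendChild(option);
  });
};

//Example usage with the lists fetched above (hypothetical element IDs):
//renderDeviceList(mics, "mic-select");
//renderDeviceList(webcams, "webcam-select");
//renderDeviceList(speakers, "speaker-select");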

Once permissions are granted, the device lists can be displayed to the user.

Step 4: Handle Device Changes

  • Listen for the device-changed event of the VideoSDK class to dynamically re-render the device lists whenever devices are attached to or removed from the system.
  • Ensure that users can seamlessly interact with newly connected devices without disruptions.
//Re-fetch the camera, mic, and speaker devices whenever the device list changes.
const deviceChangeEventListener = async (devices) => {
  console.log("Device Changed", devices);
  //Re-render the device lists, e.g. by calling getMediaDevices() from Step 3.
  await getMediaDevices();
};

VideoSDK.on("device-changed", deviceChangeEventListener);

The device lists then update dynamically when devices are connected or disconnected.

Step 5: Create Media Tracks

  • Upon user selection of devices, create media tracks for the selected microphone and camera using the createMicrophoneAudioTrack() and createCameraVideoTrack() methods of the VideoSDK class.
  • Ensure that these tracks originate from the user-selected devices for accurate testing.
//For Getting Audio Tracks
const getMediaTracks = async () => {
  try {
    //Returns a MediaStream object, containing the Audio Stream from the selected Mic Device.
    const customAudioStream = await VideoSDK.createMicrophoneAudioTrack({
      // Here, selectedMicId should be the microphone id of the device selected by the user.
      microphoneId: selectedMicId,
    });
    //Retrieve the audio track that will be displayed to the user from the stream.
    const audioTracks = customAudioStream?.getAudioTracks();
    const audioTrack = audioTracks?.length ? audioTracks[0] : null;
  } catch (error) {
    console.log("Error in getting Audio Track", error);
  }

  //For Getting Video Tracks
  try {
    //Returns a MediaStream object, containing the Video Stream from the selected Webcam Device.
    const customVideoStream = await VideoSDK.createCameraVideoTrack({
      // Here, selectedWebcamId should be the webcam id of the device selected by the user.
      cameraId: selectedWebcamId,
      //encoderConfig is an optional resolution preset selected by the user; "h540p_w960p" is used as the default here.
      encoderConfig: encoderConfig ? encoderConfig : "h540p_w960p",
      optimizationMode: "motion",
      multiStream: false,
    });
    //Retrieve the video track that will be displayed to the user from the stream.
    const videoTracks = customVideoStream?.getVideoTracks();
    const videoTrack = videoTracks?.length ? videoTracks[0] : null;
  } catch (error) {
    console.log("Error in getting Video Track", error);
  }
};
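
To give users a live preview of their selected camera, the created video track can be attached to a <video> element. This is a minimal sketch; the "video-preview" element ID is our own assumption, not something provided by the SDK.

//Minimal sketch: preview the created video track in a <video> element.
//Assumes a <video id="video-preview" autoplay muted playsinline> element exists in the DOM.
const renderVideoPreview = (videoTrack) => {
  if (!videoTrack) return;
  const mediaStream = new MediaStream([videoTrack]);
  document.getElementById("video-preview").srcObject = mediaStream;
};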

With the necessary permissions in place, the created media tracks can be rendered as a local preview for the user.

Step 6: Testing Microphone

  • Testing the microphone device provides valuable insight into microphone quality and ensures users can optimize their audio setup for clear communication.
  • To facilitate this functionality, incorporate a recording feature that enables users to capture audio for a specified duration. After recording, users can play back the audio to evaluate microphone performance accurately.
  • For implementing this functionality, you can refer to the official MediaRecorder guide for comprehensive instructions and best practices; a minimal sketch follows below.
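
As an illustration, here is a minimal sketch built on the browser's getUserMedia and MediaRecorder APIs; the testMicrophone helper and its durationMs parameter are our own naming, not part of the VideoSDK API.

//Minimal sketch: record a short clip from the selected microphone and play it back
//so the user can judge audio quality.
const testMicrophone = async (selectedMicId, durationMs = 5000) => {
  try {
    //Capture audio from the user-selected microphone.
    const stream = await navigator.mediaDevices.getUserMedia({
      audio: { deviceId: selectedMicId },
    });
    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (event) => chunks.push(event.data);
    recorder.onstop = () => {
      //Stop capturing and play the recording back to the user.
      stream.getTracks().forEach((track) => track.stop());
      const blob = new Blob(chunks, { type: recorder.mimeType });
      const playback = new Audio(URL.createObjectURL(blob));
      playback.play();
    };
    recorder.start();
    //Stop recording after the specified duration.
    setTimeout(() => recorder.stop(), durationMs);
  } catch (error) {
    console.log("Error in testing Microphone", error);
  }
};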

Step 7: Testing Speakers

  • Testing the speaker device allows users to assess audio playback clarity and fidelity, enabling them to fine-tune settings for optimal sound quality in calls and meetings.
  • To facilitate effective speaker testing, integrate sound playback functionality into your application.
  • This functionality empowers users to play a predefined audio sample, providing a precise evaluation of their speaker output quality.
const testSpeakers = async () => {
  //Provide the path to a test sound of your choice.
  const test_sound_path = "test_sound_path";
  //Create an audio element from the test sound.
  const audio = new Audio(test_sound_path);
  try {
    //Set the sinkId of the audio to the speaker device Id selected by the user, then play it.
    await audio.setSinkId(selectedSpeakerDeviceId);
    await audio.play();
  } catch (error) {
    console.log("Error in testing Speakers", error);
  }
};

Step 8: Network Quality Assessment

  • Utilize the getNetworkStats() method of the VideoSDK class to provide users with insights into their network upload and download speeds.
  • Handle potential errors gracefully, such as offline status or poor connection, to maintain a smooth user experience.
const getNetworkStatistics = async () => {
  try {
    //The timeoutDuration is a set time, after which the method stops fetching stats and throws a timeout error.
    const options = { timeoutDuration: 45000 };
    //This method returns a Promise that resolves with an object, containing network speed statistics or rejects with an error message.
    const networkStats = await VideoSDK.getNetworkStats(options);
    const downloadSpeed = networkStats["downloadSpeed"];
    const uploadSpeed = networkStats["uploadSpeed"];
  } catch (ex) {
    console.log("Error in networkStats: ", ex);
  }
};
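
To surface these numbers on the precall screen, you could map the measured speeds to a simple quality label. The sketch below is only an illustration: the thresholds and the "network-stats" element ID are arbitrary assumptions, not SDK recommendations.

//Minimal sketch: display the measured speeds along with a rough quality label.
const renderNetworkStats = (downloadSpeed, uploadSpeed) => {
  //Classify the connection from the slower of the two measured speeds (thresholds are arbitrary).
  const slowest = Math.min(downloadSpeed, uploadSpeed);
  const quality = slowest > 5 ? "Good" : slowest > 1 ? "Average" : "Poor";
  //Assumes an element with id "network-stats" exists in the DOM.
  const statsElement = document.getElementById("network-stats");
  statsElement.textContent = `Download: ${downloadSpeed} | Upload: ${uploadSpeed} (${quality})`;
};
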
  • Display the measured upload and download speeds of the network to the user.
  • Show a clear error state when the user is offline.

Step 9: Ensuring Correct Device Selection in the Meeting

  • Ensure that all relevant states, such as microphone and camera status (on/off), and selected devices, are passed into the meeting from the precall screen.
  • This can be accomplished by passing these crucial states and media streams during the initialization of the meeting using the initMeeting method of the VideoSDK class.
  • This ensures consistency and allows users to seamlessly transition from the precall setup to the actual meeting while retaining their chosen settings.
const meeting = VideoSDK.initMeeting({
    ...
    //Status of the Microphone Device as selected by the user (On/Off).
    micEnabled: micEnable,
    //Status of Webcam Device as selected by the user (On/Off).
    webcamEnabled: webCamEnable,
    //customVideoStream has to be the Video Stream of the user's selected Webcam device as created in Step-5.
    customCameraVideoTrack: customVideoStream,
    //customAudioStream has to be the Audio Stream of the user's selected Microphone device as created in Step-5.
    customMicrophoneAudioTrack: customAudioStream,
});

Conclusion

By following these step-by-step instructions, you can integrate a pre-call check in JavaScript and equip your users with the tools they need to optimize their audio and video settings, ensuring smooth and effective communication. From checking permissions to assessing network quality, each step plays a vital role in preemptively addressing potential issues.

As developers, embracing this proactive approach not only enhances user satisfaction but also positions your application as a reliable platform for virtual interactions. With a well-executed Pre-call check, you empower your users to focus on what truly matters—connecting and collaborating effectively.