Users hear an echo when making a call using Skype for Business

Users hear an echo when making a call using Skype for Business either as a published app or published desktop with HDX RealTime Optimization Pack.

Let’s imagine you are User A and in a conversation with User B. Echo occurs when your voice is retransmitted back to you by User B. The effect is distracting and leads to a poor user experience, especially in multiparty calls.

SCENARIO

The most important takeaway here is that echo is not produced on the side where it is heard.

Example of a conversation with no acoustic echo cancellation (AEC):

  • User 1 places an audio call to User 2
  • User 1 is using a headset (a composite device) as the audio device.
  • User 2 is using separate devices for the speakerphone and microphone (non-composite devices), where the speakerphone does not have acoustic echo cancellation (AEC).
  • User 2 does not hear any echo on their end.
  • User 1 hears their own echo when speaking.

Note: The echo is produced when one of the users in a multiparty call is using a separate speakerphone for audio output and a separate microphone for audio input, or a speakerphone with a built-in microphone that does not support AEC (acoustic echo cancellation).
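To make the role of AEC concrete, here is a toy sketch in Python (emphatically not the Optimization Pack’s actual implementation): a normalized LMS (NLMS) adaptive filter estimates the speaker-to-microphone echo path from the far-end signal and subtracts the predicted echo from the microphone signal, which is conceptually what an AEC-capable device does in hardware or firmware.

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, taps=128, mu=0.5, eps=1e-6):
    """Subtract an adaptively estimated echo of far_end from mic."""
    w = np.zeros(taps)                     # estimated echo-path filter
    out = np.copy(mic)
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]      # most recent far-end samples
        echo_est = w @ x                   # predicted echo at the microphone
        e = mic[n] - echo_est              # residual: near-end speech + noise
        w += (mu / (x @ x + eps)) * e * x  # normalized LMS update
        out[n] = e
    return out

# Example: far-end speech plus a simulated echo path.
rng = np.random.default_rng(0)
far = rng.standard_normal(16000)
echo_path = np.zeros(128)
echo_path[10] = 0.6                        # delayed, attenuated echo
mic = np.convolve(far, echo_path)[:16000]  # what User 2's mic picks up
residual = nlms_echo_cancel(far, mic)      # echo largely removed
```

When User 2’s speakerphone lacks this processing, the residual subtraction never happens, so User 1’s voice returns to User 1 as echo.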


Intermediate results of live audio using IBM Watson Speech to Text websockets

I want to take audio input from the microphone and use IBM Watson’s Speech to Text service (WebSockets). I don’t want to create a .wav file; instead, I want to get intermediate results of whatever I say, the way Google voice search does: I speak a word, it detects and converts it to text, then I speak the next word, and it detects and converts that too.

Can anybody provide some example code or hints? It’s fairly urgent.
Thank you in advance.

P.S.: I am using Python.
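In case it helps, here is a minimal sketch assuming the ibm-watson Python SDK and PyAudio are installed (pip install ibm-watson pyaudio). The API key, service URL, and 16 kHz sample rate are placeholders, not values from this thread:

```python
from queue import Queue, Full
from threading import Thread

import pyaudio
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import SpeechToTextV1
from ibm_watson.websocket import AudioSource, RecognizeCallback

CHUNK = 1024
RATE = 16000

chunks = Queue(maxsize=10)                  # buffers raw PCM from the mic
audio_source = AudioSource(chunks, True, True)

stt = SpeechToTextV1(authenticator=IAMAuthenticator('YOUR_APIKEY'))
stt.set_service_url('YOUR_SERVICE_URL')

class InterimCallback(RecognizeCallback):
    def on_hypothesis(self, hypothesis):
        print('interim:', hypothesis)       # partial text while you speak

    def on_transcription(self, transcript):
        print('final:', transcript)         # finalized segment

    def on_error(self, error):
        print('error:', error)

def mic_callback(in_data, frame_count, time_info, status):
    try:
        chunks.put_nowait(in_data)          # hand audio to the SDK's source
    except Full:
        pass                                # drop a chunk rather than block
    return (None, pyaudio.paContinue)

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE, input=True,
                 frames_per_buffer=CHUNK, stream_callback=mic_callback)
stream.start_stream()

# interim_results=True is what produces word-by-word partial transcripts.
Thread(target=stt.recognize_using_websocket, kwargs=dict(
    audio=audio_source,
    content_type='audio/l16; rate={}'.format(RATE),
    recognize_callback=InterimCallback(),
    interim_results=True,
)).start()
```

The key parameter is interim_results=True: on_hypothesis then fires with partial hypotheses while you are still speaking, instead of the service waiting for a final transcript.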


Tips/guidance for maximizing STT transcription accuracy in noisy environments?

Hello, we are wondering if there are any suggestions, guidance, or tips on reducing noise for utterances recorded in noisy environments.

**Some background:** we are working on a feature utilizing the Speech to Text service which involves users speaking short utterances. This feature currently resides in an iOS application, but may reside on desktop in the future.
When recording utterances in noisy environments, specifically environments with background speakers, the STT service picks up and transcribes those background speakers. The desired behavior is to pinpoint the “primary” speaker (the one closest to the microphone) as much as possible and transcribe only that speaker’s utterance.

We have been exploring DSP filters and noise-cancellation algorithms on the device, along with what microphone configurations are possible on an iOS device to narrow the incoming audio. We think this might not be the best approach, though.
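For what it’s worth, a simple on-device pre-filter of that kind is easy to prototype. The sketch below band-passes the roughly 300–3400 Hz voice band with SciPy (assumed installed) before the audio is sent to STT. It attenuates out-of-band noise, but it will not separate competing speech, which occupies the same band as the primary speaker; that limitation is consistent with the concern above.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def voice_bandpass(samples, rate, low_hz=300.0, high_hz=3400.0, order=5):
    """Band-pass a mono float array of samples to the telephony voice band."""
    sos = butter(order, [low_hz, high_hz], btype='band', fs=rate, output='sos')
    return sosfilt(sos, samples)

# Example: filter one second of 16 kHz audio before sending it to STT.
rate = 16000
captured = np.random.randn(rate).astype(np.float32)  # stand-in for mic audio
filtered = voice_bandpass(captured, rate)
```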

Does IBM have any plans to provide noise cancellation options, or an option to configure speech recognition for short utterances? These options would be similar to what Google’s Cloud Speech API provides.

Thanks much for any help/information!


IBM Speech to Text using WebSockets: use the microphone as input in C#?

Hi,

I am developing a mobile application using Xamarin (C#). The aim is to support continuous, real-time speech recognition to transcribe meetings.
I am using IBM Speech to Text with the WebSocket interface.

Sending an audio file as input works fine, but I want to use the microphone as the audio input.

Please help if anyone has implemented this in C#.
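For reference, the STT WebSocket message flow itself is language-agnostic, so a sketch in Python (websocket-client plus PyAudio) maps directly onto C#’s ClientWebSocket. The endpoint region, token handling, and capture parameters below are placeholders/assumptions:

```python
import json
import threading

import pyaudio
import websocket  # pip install websocket-client

URL = ('wss://api.us-south.speech-to-text.watson.cloud.ibm.com'
       '/v1/recognize?access_token=YOUR_TOKEN')  # placeholder region/token
RATE = 16000
CHUNK = 1024

def mic_chunks(seconds=10):
    # Capture raw 16-bit PCM from the default microphone.
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)
    for _ in range(int(RATE / CHUNK * seconds)):
        yield stream.read(CHUNK)
    stream.stop_stream()

def stream_audio(ws):
    # 1. Open the recognition session and ask for interim results.
    ws.send(json.dumps({'action': 'start',
                        'content-type': 'audio/l16;rate={}'.format(RATE),
                        'interim_results': True}))
    # 2. Send the audio as binary frames.
    for chunk in mic_chunks():
        ws.send(chunk, opcode=websocket.ABNF.OPCODE_BINARY)
    # 3. Signal end of audio.
    ws.send(json.dumps({'action': 'stop'}))

def on_open(ws):
    threading.Thread(target=stream_audio, args=(ws,), daemon=True).start()

def on_message(ws, message):
    # Responses are JSON; interim hypotheses arrive with "final": false.
    print(json.loads(message))

websocket.WebSocketApp(URL, on_open=on_open, on_message=on_message).run_forever()
```

In C#, the same three steps apply: send the JSON "start" message as a text frame, stream audio buffers as binary frames, then send the JSON "stop" message.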

Thanks in Advance
