A patent application published today shows Apple could be testing new ways for Siri to speak based on environmental conditions. Apple is constantly working to improve Siri, and yelling and whispering are among the features we might see in a future update. This would give Siri functionality similar to Amazon's Alexa platform.
While this is a great update to Siri, and one that I cannot wait to experience, it will not make the assistant a genius on the level of Google Assistant. At WWDC19, Apple announced a more humanized voice for the virtual assistant, one that speaks sentences more clearly.
In other cases, the one or more speech-synthesis parameters, when incorporated in a speech-synthesis model, can cause a speech mode of the synthesized speech to differ from the speech mode of the utterance. This means that if you had to raise your voice so that Siri could hear you, the voice assistant would respond at a modulated higher volume.
It could also be determined by how you speak to Siri: if you're shouting, Siri might assume you're in a noisy place and could shout back, and if you whisper to it, Siri could whisper back. Per the patent, the decision component may select one or more speech-synthesis parameters corresponding to the speech output mode. It may also, or alternatively, select a playback volume.
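To make the idea concrete, here is a minimal sketch of how such a decision component might behave. The thresholds, the `select_speech_output` function, and the mode names are all hypothetical illustrations, not anything taken from Apple's patent or implementation:

```python
def select_speech_output(input_level_db: float) -> dict:
    """Map the estimated loudness of the user's utterance to a
    speech output mode and playback volume.

    The dB thresholds below are made-up values for illustration only;
    the patent does not specify any concrete numbers.
    """
    if input_level_db < 40:
        # Very quiet input: user is likely whispering, so whisper back.
        return {"mode": "whisper", "volume": 0.3}
    elif input_level_db > 70:
        # Raised voice: user is likely in a noisy place, so speak louder.
        return {"mode": "raised", "volume": 0.9}
    else:
        # Ordinary speech: respond at a normal level.
        return {"mode": "normal", "volume": 0.6}


if __name__ == "__main__":
    for level in (30.0, 55.0, 80.0):
        print(level, select_speech_output(level))
```

The point of the sketch is simply that both the output mode and the playback volume are derived from properties of the input utterance, matching the two selections the patent describes.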