Manolis Perakakis world

News, diary, journal, whatever

Multimodal mobile interaction – blending speech and GUI input (iPhone demo) October 15, 2010

Update: Since Apple announced yesterday (Oct 5, 2011) the full integration of the Siri personal assistant in iOS 5, I think the title of this post could become: A Siri-like (personal assistant) interface developed as part of my PhD research (with a focus on multimodal interaction), circa 2009 🙂

Well, it was about time for a new blog post after errrr…. almost 2 years!

These last few years have been so exciting for mobile interaction that I wonder how much cooler(!) the coming years can get.

A few years ago, while working on distributed speech recognition, I envisioned how the speech modality would enrich (or almost supersede) the poor mobile interaction experience of that time. Look ma(!), the touch modality just won the game; it was so much simpler as a technology (well, by today’s standards), error-free and intuitive. The iPhone truly revolutionized the mobile interface by exploiting multi-touch input, but speech as a modality still has a bright future, not by replacing but by enriching mobile interaction.

So the question is: how do we build interfaces that combine more than one modality? Generally speaking, to successfully combine multiple modalities, one has to exploit the synergies that emerge when mixing them. For example, blending the speech and GUI (touch) modalities gives rise to the following synergies:

  • visual output (GUI) is much faster (and more informative) than speech output, which is sequential; this is due to the information bandwidth of the visual and auditory channels of the human brain
  • speech input is usually much faster than GUI input (and also the more natural form of communication); a single spoken sentence can convey information that would require many GUI actions to complete, e.g. “I want to fly from Athens to London”
  • speech input is inconsistent due to recognition errors! The same utterance spoken twice can yield different recognition results, and fixing errors solely through speech may be difficult. Allow easy error correction through an extra modality instead (e.g. GUI input); see the sketch after this list
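
To make the last two points concrete, here is a minimal Python sketch of one multimodal form-filling turn. It is purely illustrative: the slot names, the confidence threshold and the data structures are my assumptions, not the demo’s actual code. One spoken sentence fills several slots at once, and low-confidence slots fall back to GUI correction.

```python
# Hypothetical sketch (not the actual demo code): one multimodal form-filling turn.
# A single speech result can fill several slots at once, while low-confidence
# slots are handed back to the GUI for quick touch correction.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.6  # assumed value, purely illustrative


@dataclass
class SlotHypothesis:
    value: str
    confidence: float  # recognizer confidence in [0, 1]


def merge_turn(form: dict, speech_result: dict) -> list:
    """Fill the form from one spoken utterance; return slots needing GUI correction."""
    needs_gui = []
    for slot, hyp in speech_result.items():
        if hyp.confidence >= CONFIDENCE_THRESHOLD:
            form[slot] = hyp.value      # accept the spoken value
        else:
            needs_gui.append(slot)      # let the user fix this one by touch
    return needs_gui


# "I want to fly from Athens to London" fills two slots in a single turn,
# something that would take several list selections in a GUI-only interface.
form = {}
result = {
    "departure": SlotHypothesis("Athens", 0.92),
    "destination": SlotHypothesis("London", 0.41),  # low confidence: maybe misrecognized
}
print(merge_turn(form, result))  # -> ['destination']  (to be corrected via the GUI)
print(form)                      # -> {'departure': 'Athens'}
```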

Multimodal interfaces (interfaces that support more than one interaction modality) can thus offer a richer user experience; they are more flexible and robust, at the cost of greater design and implementation complexity.

The video shows a multimodal mobile interaction application demonstrating how to exploit the speech and GUI (touch) modalities to enrich the user experience. The application scenario is a travel reservation service. At each interaction turn the user can use either GUI or speech input, that is, select values from a list by touch or speak directly, e.g. “I want to fly from Orlando to Chicago next Friday evening”.

This specific demonstration showcases 4 different interaction modes, one unimodal (GUI-only input) and 3 multimodal ones:

  • “Click-to-Talk”: the user clicks the speech button to talk
  • “Open-Mike”: speech input is captured continuously using voice activity detection
  • “Modality-selection”: the default input modality is chosen based on modality efficiency; the system switches between “Click-to-Talk” and “Open-Mike” depending on the current context to favor GUI or speech input respectively, e.g. GUI input might be faster for short lists such as the date (see the sketch below)
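
As a rough illustration of how such a per-field default could be chosen, here is a small Python sketch; the cost model and all the numbers in it are illustrative assumptions, not the policy used in the actual system.

```python
# Hypothetical sketch of the "Modality-selection" idea (not the actual system logic):
# pick the default input mode per field from a crude efficiency estimate.

# Assumed, illustrative cost model: GUI cost grows with the number of list items
# the user must scan/scroll, while speech cost is roughly constant per field but
# pays a penalty for likely recognition errors and re-tries.

def expected_gui_time(num_options: int) -> float:
    return 1.0 + 0.2 * num_options          # seconds; toy numbers

def expected_speech_time(error_rate: float) -> float:
    base = 2.5                               # speak + recognize
    return base * (1.0 + error_rate)         # expected re-tries folded in

def default_mode(num_options: int, error_rate: float) -> str:
    """Return 'Click-to-Talk' (GUI-favoring) or 'Open-Mike' (speech-favoring)."""
    if expected_gui_time(num_options) <= expected_speech_time(error_rate):
        return "Click-to-Talk"   # GUI is likely faster; speech only on button press
    return "Open-Mike"           # speech is likely faster; keep the mike open

print(default_mode(num_options=7, error_rate=0.15))    # short list (e.g. day) -> Click-to-Talk
print(default_mode(num_options=120, error_rate=0.15))  # long city list -> Open-Mike
```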

Note that the same scenario, which is also the simplest possible one (a one-way trip without car/hotel reservation: New York to Chicago, etc.), is demonstrated for all the interaction modes; of course, everything you can do with the GUI you can also do with speech. This video was shot to showcase the port to the iPhone platform (with the help of V. Kouloumenta); the platform has also been running on PCs and various PDAs (e.g. the Zaurus) since 2006.

This demo is part of my PhD work at the Electronics & Computer Engineering Dept. of the Technical University of Crete, under the supervision of A. Potamianos. For more info you may refer to:
M. Perakakis and A. Potamianos. A study in efficiency and modality usage in multimodal form filling systems. IEEE Transactions on Audio, Speech and Language Processing, 2008.

 

Prime time for Distributed Speech Recognition? February 23, 2009

While an undergraduate student a few years ago, I worked on Distributed Speech Recognition (DSR). The main idea of DSR is to compress the acoustic features used by a speech recognizer and transmit them over a data (instead of voice) network, thus saving bandwidth (cost-effective) and allowing the use of full speech recognition on mobile terminals. Since it compresses acoustic features for speech recognition purposes (not for speech signal transmission/reproduction), it can achieve very low bit rates. You can think of it as the analogue of what mp3 is for music transmission and storage.
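
To give a feel for why the bit rates can be that low, here is a back-of-the-envelope Python sketch; all the numbers are illustrative assumptions (and the toy quantizer is not the actual coder from my work): a few bits per cepstral coefficient at roughly 100 frames per second lands in the low-kbps range, well below even a voice codec.

```python
# Back-of-the-envelope sketch of why DSR bit rates can be so low
# (illustrative numbers only; not the coder from my thesis work).

import numpy as np

FRAME_RATE = 100        # typical front-ends emit ~100 feature frames per second
NUM_COEFFS = 13         # e.g. 13 cepstral coefficients per frame
BITS_PER_COEFF = 2      # aggressive (vector) quantization, assumed here

dsr_bitrate = FRAME_RATE * NUM_COEFFS * BITS_PER_COEFF   # bits per second
voice_codec_bitrate = 13_000                             # e.g. GSM full-rate codec, for scale
pcm_bitrate = 8_000 * 16                                 # 8 kHz, 16-bit telephone-quality PCM

print(f"DSR features : ~{dsr_bitrate / 1000:.1f} kbps")  # ~2.6 kbps
print(f"Voice codec  : ~{voice_codec_bitrate / 1000:.1f} kbps")
print(f"Raw PCM      : ~{pcm_bitrate / 1000:.1f} kbps")

# Toy uniform scalar quantizer over one feature frame, just to show the idea of
# sending quantization indices (a real DSR front-end, e.g. the ETSI standard,
# uses split vector quantization instead):
frame = np.random.randn(NUM_COEFFS)                      # stand-in for MFCCs
lo, hi = -3.0, 3.0
levels = 2 ** BITS_PER_COEFF
indices = np.clip(((frame - lo) / (hi - lo) * levels).astype(int), 0, levels - 1)
print(indices)                                           # this is what goes over the network
```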

Depicted next is a simple overview of a DSR architecture (model 2). Note that the mobile terminals depicted are Symbian’s reference devices corresponding to a smartphone, a handheld and a PDA respectively (oops, these images are rather old; they must date back to 2001. I should upgrade to something like an iPhone or Android device …)

My work with Prof. V. Digalakis showed that one can successfully take advantage of DSR with a coding rate of only 2 kbps, which is an extremely low data rate. After that I ported the DSR engine to a Zaurus Linux PDA (a 200 MHz StrongARM processor with 16 MB of RAM) and made it run in real time.

Although my recent work focuses on multimodal (speech) interfaces, I still keep an eye on DSR. It seems that with the emergence of powerful mobile terminals and Google’s announcement of speech recognition support for Android and the iPhone, DSR might soon become a hot topic!

P.S. I just found out my DSR page is ranked 3rd by Google after W3C and ETSI. Holy moly!

Coolness factor: ?

 

My 15 minutes of fame! March 11, 2008

Filed under: HCI,interfaces,Multimodal,Speech,technology — perak @ 8:41 pm

Our work at the Telecommunications Lab of the Technical University of Crete (TUC) was featured in the “Orizontes” documentary series of the Kydon TV channel. We demonstrated some of our demos:

  • My work on multimodal interfaces (part of my PhD), including a travel reservation multimodal (GUI + speech) application running on a Zaurus Linux PDA
  • The automatic video summarization system (part of the MUSCLE NoE European research project showcases)
  • An audio-visual (AV) recognition system (also part of the MUSCLE NoE European research project showcases)
  • The multi-microphone robust speech recognition demo (part of the Hiwire European research project showcases)

We could not showcase the augmented-reality demo we developed in cooperation with VTT (speech recognition integration), since we currently lack the appropriate hardware; I hope we get it soon.

Some of these demos will go public, either as videos posted on YouTube or as source code released as open source on SourceForge/Google Code.

More on this, as well as a more detailed description of the demos, in future posts!

Stay tuned!

 

Aibo, Lego Mindstorms, Wii remote (Wiimote), iPhone & Google’s Android!

Filed under: HCI,interfaces,Multimodal,programming,robotics,Speech — perak @ 8:12 pm

What do all these have in common? They will be my playground for a while …

I will have the chance to play with all of them during this semester!

As far as the Aibo and Mindstorms are concerned, I will use them for the two robotics-related courses I have enrolled in. Some possible projects I am thinking of:

  • Distributed speech recognition (DSR): enhance the Aibo’s limited speech recognition capabilities by exploiting the wireless link and a speech recognition server
  • Distributed image processing: enhance the Aibo’s limited machine vision capabilities by exploiting the wireless link and a machine vision server (similarly to DSR)
  • Robot localization using multiple input modalities: machine vision + audio
  • An enhanced gesture-based interface, or multimodal (speech + gesture) interfaces

Wiimote hacks for enhanced HCI, similar to these demos from CMU.

The iPhone will be used to augment my speech & GUI multimodal interface prototype, already running on the Zaurus PDA, with the gesture modality.

Finally, I can’t resist playing with Google’s new Android platform and porting the various apps I have in mind.

Whoa, my hacker alter ego will definitely be back for good!!!

 

 

Opera prepares for version 10? July 29, 2006

Filed under: HCI,interfaces,Multimodal,technology,web — perak @ 8:42 am

According to this C|net article, Opera is preparing version 10 of its successful browser.

Wow, it hasn’t been long since I updated to version 9. Opera is the ONLY non-open-source program I have used on my Linux boxes for many years now! It combines rock-solid functionality with an excellent interface.

I really like its simple yet intuitive and extremely configurable interface.
(Well, in terms of design it reminds me of Google’s simple interface; simple is beautiful!)
Opera was one of the first apps to use mouse gestures, and they have also built a multimodal-enabled version in cooperation with IBM (I should test this sometime on my Zaurus).

It’s standards-compliant, blazing fast (compared to the pre-Firefox Mozilla days) and secure (vs. IE). I have 5000 Opera bookmarks and a big mailbox, and I can find anything in milliseconds!

In terms of functionality, a Jabber IM plug-in would make it almost complete!

<The company expects version 10 to work on and across any platform>
I already have Opera running on my Zaurus PDA and my K700 mobile phone; Opera Mini is really cool! This gives Opera a strategic advantage. I think the desktop browser wars don’t matter any more; mobile browsing is the next frontier.

<Opera is aiming for a day when people needn’t use a full desktop operating system, instead using a browser and Web applications for most tasks>
This is another cool idea, especially for the mobile space: the browser is the computer! And the widget idea is really promising there too.

<There is also a big push in the company toward creating developer tools>
Attracting developers to its already small but dedicated community would be a huge plus. Go for it, Opera!

These Norwegian trolls are really cool!

Coolness factor: 4.5