Manolis Perakakis world

News, diary, journal, whatever

Multimodal mobile interaction – blending speech and GUI input (iPhone demo) October 15, 2010

Update: Since Apple yesterday (Oct 5, 2011) announced full integration of the Siri personal assistant in iOS 5, I think the title of this post could become: a Siri-like (personal assistant) interface developed as part of my PhD research (focused on multimodal interaction), circa 2009 :)

Well, it was about time for a new blog post after errrr…. almost 2 years!

The last few years have been so exciting for mobile interaction… I wonder how much cooler(!) the coming years may be.

A few years ago, while working on distributed speech recognition, I envisioned how the speech modality would enrich (or almost supersede) the poor mobile interaction experience of that time. Look ma(!), the touch modality just won the game; it is so much simpler as a technology (well, by today’s standards), error-free and intuitive. The iPhone really revolutionized the mobile interface by exploiting multi-touch input, but speech as a modality still has a bright future, not by replacing but by enriching mobile interaction.

So the question is: how do we build interfaces that combine more than one modality? Generally speaking, to successfully combine multiple modalities, one has to exploit the synergies that emerge when mixing them. For example, in blending the speech and GUI (touch) modalities the following synergies arise:

  • visual output (GUI) is much faster (and more informative) than speech output, which is sequential; this is due to the different information bandwidths of the visual and audio channels of the human brain
  • speech input is usually much faster than GUI input (and also the more natural form of communication). A single spoken sentence can convey information that would require many GUI actions to enter, e.g. “I want to fly from Athens to London”
  • speech input is inconsistent due to recognition errors! The same utterance spoken twice can yield different recognition results, and fixing errors solely through speech may be difficult. Allow for easy error correction through an extra modality instead (e.g. GUI input)!

Multimodal interfaces (interfaces that support more than one interaction modality) may thus offer a richer user experience; they are more flexible and robust, at the cost of greater design and implementation complexity.

The video is about a multimodal mobile interaction application demonstrating how to exploit the speech and GUI (touch) modalities to enrich the user experience. The application scenario is a travel reservation service. The user can use either GUI or speech input at each interaction turn, that is, selecting values from a list by touch or directly speaking, e.g. “I want to fly from Orlando to Chicago next Friday evening”.
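To make the idea concrete, here is a minimal Python sketch (not the actual demo code) of how a single spoken utterance can fill several form slots at once, while the GUI stays available for correcting a misrecognized field. The slot names and the toy parser are hypothetical, purely for illustration:

```python
# A toy sketch (hypothetical, not the demo system) of multimodal form filling:
# one spoken sentence fills several slots; any slot can still be fixed via the GUI.
import re

FORM_SLOTS = ["departure", "destination", "date", "time"]

def parse_utterance(text):
    """Toy semantic parser: extract travel slots from a recognized utterance
    (handles only single-word city names, to keep the example short)."""
    slots = {}
    m = re.search(r"from (\w+) to (\w+)", text, re.IGNORECASE)
    if m:
        slots["departure"], slots["destination"] = m.group(1), m.group(2)
    m = re.search(r"next (\w+)", text, re.IGNORECASE)
    if m:
        slots["date"] = "next " + m.group(1)
    for part_of_day in ("morning", "afternoon", "evening"):
        if part_of_day in text.lower():
            slots["time"] = part_of_day
    return slots

form = {slot: None for slot in FORM_SLOTS}

# One spoken sentence fills four slots that would take several GUI selections.
form.update(parse_utterance("I want to fly from Orlando to Chicago next Friday evening"))

# If the recognizer had misheard a field, the user could fix just that one by touch:
form["departure"] = "Orlando"   # value re-selected from the GUI list

print(form)
```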

This specific demonstration showcases 4 different interaction modes, one unimodal (GUI-only input) and 3 different multimodal ones:

  • “Click-to-Talk”: user clicks speech button to talk
  • “Open-Mike”: speech input using voice activity detection
  • “Modality-selection”: the default input modality is chosen based on modality efficiency; the system switches between “Click-to-Talk” and “Open-Mike” depending on the current context, to favor GUI or speech input respectively, e.g. GUI input might be faster for short lists like dates (see the sketch after this list)
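As a rough illustration of the “Modality-selection” idea, the following Python sketch picks a default modality per form field from a crude expected-time estimate. The timing constants and error rate are made-up illustrative numbers, not the values measured in the actual study:

```python
# A minimal sketch of modality selection: favor the modality with the lower
# expected completion time for the current field. All constants are assumptions.

AVG_SPEECH_INPUT_SEC = 2.5      # speak the utterance + recognition latency
SPEECH_ERROR_RATE = 0.15        # chance the field must be corrected afterwards
CORRECTION_COST_SEC = 4.0       # cost of fixing a misrecognized field
GUI_SEC_PER_ITEM = 0.2          # scrolling/scanning cost grows with list length

def expected_speech_time():
    return AVG_SPEECH_INPUT_SEC + SPEECH_ERROR_RATE * CORRECTION_COST_SEC

def expected_gui_time(list_length):
    return 1.0 + GUI_SEC_PER_ITEM * list_length   # 1 s to open the list, then scan

def default_modality(list_length):
    """Return which input mode the system should favor for this field."""
    if expected_gui_time(list_length) <= expected_speech_time():
        return "GUI (Click-to-Talk: mike stays off until requested)"
    return "speech (Open-Mike: voice activity detection on)"

for field, length in [("date", 7), ("departure city", 200)]:
    print(field, "->", default_modality(length))
```

With these (assumed) numbers, a short list such as a date favors GUI input, while a long list such as a departure city favors speech, which is the behavior the mode is meant to capture.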

Note that the same scenario (New York to Chicago, etc.), and also the simplest possible one (e.g. a one-way trip without car/hotel reservation), is demonstrated for all the different interaction modes (of course, everything you can do with the GUI you can do with speech). This video was shot to showcase the port to the iPhone platform (with the help of V. Kouloumenta); the platform has also run on PCs and various PDAs (e.g. the Zaurus) since 2006.

This demo is part of my PhD work at the Electronics & Computer Engineering Dept. of the Technical University of Crete, under the supervision of A. Potamianos. For more info you may refer to:
M. Perakakis and A. Potamianos. A study in efficiency and modality usage in multimodal form filling systems. IEEE Transactions on Audio, Speech and Language Processing, 2008.

 

Prime time for Distributed Speech Recognition? February 23, 2009

While an undergraduate student a few years ago, I worked on Distributed Speech Recognition (DSR). The main purpose of DSR is to compress the acoustic features used by a speech recognizer and send them over a data (instead of voice) network, thus saving bandwidth (cost effective) and allowing full speech recognition on mobile terminals. Since it compresses acoustic features for speech recognition purposes (not for speech signal transmission/reproduction), it can achieve very low bit rates. You can think of it as analogous to what MP3 is for music transmission and storage.

Depicted next is a simple overview of a DSR architecture (model 2). Note that the mobile terminals depicted are Symbian’s reference devices corresponding to a smartphone, a handheld and a PDA respectively (oops, such old images – they must date back to 2001; I should upgrade to something like an iPhone or Android …)

My work with Prof. V. Digalakis concluded that one can successfully take advantage of DSR with coding at only 2 kbps, which is an extremely low data rate. After that I ported the DSR engine to a Zaurus Linux PDA (a 16 MB, 200 MHz StrongARM processor) and made it work in real time.
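To see why such low rates are plausible, here is some back-of-the-envelope arithmetic in Python, using assumed, typical front-end parameters rather than the exact codec configuration of that work:

```python
# Illustrative arithmetic only (assumed front-end parameters, not the actual codec):
# a typical recognizer front end emits one feature vector per 10 ms frame.

FRAMES_PER_SEC = 100        # 10 ms frame shift
FEATURES_PER_FRAME = 13     # e.g. MFCCs; exact dimensionality is an assumption
BITS_PER_FRAME_BUDGET = 20  # what a 2 kbps channel allows per frame

bitrate_bps = FRAMES_PER_SEC * BITS_PER_FRAME_BUDGET
print(f"{bitrate_bps} bps = {bitrate_bps / 1000} kbps")  # 2.0 kbps
print(f"{BITS_PER_FRAME_BUDGET / FEATURES_PER_FRAME:.2f} bits per feature on average")

# Compare with sending raw 8 kHz, 16-bit speech samples instead of features:
raw_bps = 8000 * 16
print(f"raw speech: {raw_bps / 1000} kbps, i.e. {raw_bps / bitrate_bps:.0f}x more")
```

In other words, quantizing recognition features at roughly a couple of bits each already fits a 2 kbps budget, roughly 64 times less than transmitting the raw speech samples.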

Although my work now focuses on multimodal (speech) interfaces, I still keep an eye on DSR. It seems that with the emergence of powerful mobile terminals and Google’s announcement of speech recognition support for Android and the iPhone, DSR might soon become a hot topic!

P.S. I just found out my DSR page is ranked 3rd by Google after W3C and ETSI. Holy moly!

Coolness factor: ?

 

The year of Augmented Reality

Filed under: android,augmented reality,mobile — perak @ 5:24 am

Wikitude AR Travel Guide

Until now there has been too much hype around augmented reality, since apart from some really cool demos and research prototypes, no real end-user apps existed. Well, it seems that with the emergence of powerful mobile devices, augmented reality will find its way to the public, with mobile users being the first. The Wikitude Android app is one of the first, with many more to follow this year.

Coolness factor 5/5!

 

Academic publishing February 21, 2009

Filed under: academic research — perak @ 4:49 am

Luis von Ahn (CMU professor & inventor of CAPTCHA) points out in his post Academic publishing 2.0 how the academic research world has turned into a paper-generation industry:

As an academic community, it sometimes feels that the final goal of doing research is publishing papers. The goal of doing research should be, well, doing research. I understand that communicating the results of our work is important, but surely there is a better method than one that was invented before computers were around.

Given the number of people working in computer science and the fact that publishing papers is considered the goal of our work, there is an insane number of papers written every year, the vast majority of which contribute very little (or not at all) to our collective knowledge. This is basically spam.

Can a combination of a wiki, karma, and a voting method like reddit or digg substitute the current system of academic publication?

Well, this is basically a chicken-and-egg problem. Many people doing research know they need to produce a large number of research papers each year to stay competitive. Quantity over quality. Although there are some efforts to quantify the quality of research papers, such as the impact factor, the eigenfactor, or even PageRank, the research community needs to reinvent itself!

P.S. Darn, I need to finish some papers soon!

 

Free access to scientific knowledge

Filed under: academic research — perak @ 4:18 am

I just got into PLoS, the Public Library of Science site. Until now the focus of the library has been on medicine, biology and genetics. It is so nice to see such efforts nowadays. Welcome to science 2.0!

I am sure there are other similar efforts out there. MIT, Berkeley and other top US universities publish their course lectures online. Sites such as Academic Earth and VideoLectures have a ton of really amazing video lectures.

As an electronic & computer engineer, I never got why organizations like the IEEE and ACM, which should pioneer access to technical advances, are so strict about sharing knowledge. I am sure they can come up with a smarter way of earning their revenue than just locking access to their papers. If they don’t do so any time soon, it is likely they will become obsolete.

 

My new geek blog … January 26, 2009

Filed under: personal,technology — perak @ 7:20 am

Although I will keep updating this blog, I will be posting my geek-related stuff to a new blog entitled Manolis Perakakis geek Universe, enjoy!

 

Firefox cloudlet plugin January 21, 2009

Filed under: web — perak @ 8:49 am

In a previous post I said that Opera is one of the few non-open-source programs I use, due to its speed, standards compliance (100% on the Acid test), and simple yet intuitive and extremely configurable interface.

I have (at last) finally moved to Firefox, since it has become fast and secure and has an enormous set of useful plugins. Some of the plugins I use try to make it resemble Opera a bit.

There are some really invaluable plugins like Greasemonkey, Zotero and Ubiquity, but the coolest one I have found so far is the Cloudlet search plugin. It filters Google searches by tag or site, allowing you not only to narrow down your query results but also to discover very similar content!

Coolness factor: 5/5!

 

 