• 'Earable' computing: A new research area

    From ScienceDaily@1337:3/111 to All on Tue Dec 15 21:30:32 2020
    'Earable' computing: A new research area in the making

    Date:
    December 15, 2020
    Source:
    University of Illinois Grainger College of Engineering
    Summary:
    A research group is defining a new sub-area of mobile technology
    that they call 'earable computing.' The team believes that earphones
    will be the next significant milestone in wearable devices, and
    that new hardware, software, and apps will all run on this platform.



    FULL STORY
    ==========================================================================
    CSL's Systems and Networking Research Group (SyNRG) is defining a new
    sub-area of mobile technology that they call "earable computing." The
    team believes that earphones will be the next significant milestone in
    wearable devices, and that new hardware, software, and apps will all
    run on this platform.


    ==========================================================================
    "The leap from today's earphones to 'earables' would mimic the
    transformation that we had seen from basic phones to smartphones," said
    Romit Roy Choudhury, professor in electrical and computer engineering
    (ECE). "Today's smartphones are hardly a calling device anymore, much
    like how tomorrow's earables will hardly be a smartphone accessory."
    Instead, the group believes tomorrow's earphones will continuously
    sense human behavior, run acoustic augmented reality, have Alexa and
    Siri whisper just-in-time information, track user motion and health,
    and offer seamless security, among many other capabilities.

    The research questions that underlie earable computing draw from a
    wide range of fields, including sensing, signal processing, embedded
    systems, communications, and machine learning. The SyNRG team is at
    the forefront of developing new algorithms while also experimenting
    with them on real earphone platforms with live users.

    Computer science PhD student Zhijian Yang and other members of the SyNRG
    group, including his fellow students Yu-Lin Wei and Liz Li, are leading
    the way. They have published a series of papers in this area, starting
    with one on the topic of hollow noise cancellation that was published at
    ACM SIGCOMM 2018. Recently, the group had three papers published at the
    26th Annual International Conference on Mobile Computing and Networking
    (ACM MobiCom) on three different aspects of earables research: facial
    motion sensing, acoustic augmented reality, and voice localization
    for earphones.

    "If you want to find a store in a mall," says Zhijian, "the earphone
    could estimate the relative location of the store and play a 3D voice
    that simply says 'follow me.' In your ears, the sound would appear to come
    from the direction in which you should walk, as if it's a voice escort."
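
    The "voice escort" effect rests on standard binaural cues: a sound
    that arrives slightly later and quieter at one ear appears to come
    from the opposite side. The Python sketch below is an illustrative
    simplification, not the Ear-AR system itself; the ear spacing, pan
    law, and sample rate are assumed values.

        import numpy as np

        SPEED_OF_SOUND = 343.0   # m/s
        EAR_SPACING = 0.18       # assumed distance between ears, meters
        SAMPLE_RATE = 44100

        def render_direction(mono, azimuth_deg, fs=SAMPLE_RATE):
            # Pan a mono prompt so it seems to arrive from azimuth_deg
            # (0 = straight ahead, +90 = hard right) using interaural
            # time difference (ITD) and interaural level difference (ILD).
            az = np.radians(azimuth_deg)

            # ITD: the far ear hears the sound later by ~d*sin(az)/c.
            itd = int(round(EAR_SPACING * np.sin(az) / SPEED_OF_SOUND * fs))

            # ILD: constant-power pan law, near ear slightly louder.
            right = mono * np.sqrt(0.5 * (1 + np.sin(az)))
            left = mono * np.sqrt(0.5 * (1 - np.sin(az)))

            if itd > 0:    # source on the right: delay the left ear
                left = np.concatenate([np.zeros(itd), left])[:len(mono)]
            elif itd < 0:  # source on the left: delay the right ear
                right = np.concatenate([np.zeros(-itd), right])[:len(mono)]
            return np.stack([left, right], axis=1)  # stereo, shape (n, 2)

        # Example: a half-second 880 Hz prompt 40 degrees to the right.
        t = np.arange(int(0.5 * SAMPLE_RATE)) / SAMPLE_RATE
        stereo = render_direction(0.3 * np.sin(2 * np.pi * 880 * t), 40)
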
    The second paper, EarSense: Earphones as a Teeth Activity Sensor,
    looks at how earphones could sense facial and in-mouth activities
    such as teeth movements and taps, enabling a hands-free modality of
    communication to smartphones.

    Moreover, various medical conditions manifest in teeth chatter, and
    the proposed technology would make it possible to identify them by
    wearing earphones during the day. In the future, the team plans to
    analyze facial muscle movements and emotions with earphone sensors.
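
    As a rough illustration of how a tap-like tooth event might be
    pulled out of an in-ear vibration signal, the toy detector below
    thresholds short-term energy against a running noise floor. It is a
    hypothetical sketch, not the EarSense algorithm; the window length,
    threshold, and refractory period are assumed values.

        import numpy as np

        def detect_taps(signal, fs, threshold=4.0, refractory_s=0.15):
            # Flag moments where the 10 ms short-term energy jumps well
            # above the recording's median (background) energy level.
            win = max(1, int(0.01 * fs))
            energy = np.convolve(signal ** 2, np.ones(win) / win, "same")
            background = np.median(energy) + 1e-12

            taps, last = [], -np.inf
            for i, e in enumerate(energy):
                t = i / fs
                # Hold off for a refractory period after each detection
                # so a single tap is not counted twice.
                if e > threshold * background and t - last > refractory_s:
                    taps.append(t)
                    last = t
            return taps  # tap times in seconds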

    The third publication, Voice Localization Using Nearby Wall
    Reflections, investigates the use of algorithms to detect the
    direction of a sound. This means that if Alice and Bob are having a
    conversation, Bob's earphones would be able to tune into the
    direction Alice's voice is coming from.
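
    The paper's own method leverages nearby wall reflections; as a
    simpler, generic illustration of direction-of-arrival estimation,
    the sketch below applies the classic GCC-PHAT technique to two
    microphone channels (say, the left and right earphones) to estimate
    a time difference of arrival and turn it into a bearing. The
    microphone spacing and far-field assumption are ours, not the
    paper's.

        import numpy as np

        def gcc_phat_tdoa(left, right, fs):
            # Cross-correlate the two channels in the frequency domain,
            # whitening the spectrum (PHAT) so the peak stays sharp.
            n = len(left) + len(right)
            cross = np.fft.rfft(left, n=n) * np.conj(np.fft.rfft(right, n=n))
            cross /= np.abs(cross) + 1e-12
            corr = np.fft.irfft(cross, n=n)
            shift = n // 2
            corr = np.concatenate([corr[-shift:], corr[:shift + 1]])
            lag = np.argmax(np.abs(corr)) - shift  # peak offset, samples
            return lag / fs                        # TDOA in seconds

        def tdoa_to_azimuth(tdoa, mic_spacing=0.18, c=343.0):
            # Far-field model: tdoa = mic_spacing * sin(azimuth) / c.
            s = np.clip(tdoa * c / mic_spacing, -1.0, 1.0)
            return np.degrees(np.arcsin(s))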

    "We've been working on mobile sensing and computing for 10 years,"
    said Wei.

    "We have a lot of experience to define this emerging landscape of
    earable computing." Haitham Hassanieh, assistant professor in ECE,
    is also involved in this research. The team has been funded by both NSF
    and NIH, as well as companies like Nokia and Google.


    ==========================================================================
    Story Source: Materials provided by University of Illinois Grainger
    College of Engineering.

    Note: Content may be edited for style and length.


    ==========================================================================
    Journal References:

    1. Zhijian Yang, Yu-Lin Wei, Sheng Shen, Romit Roy Choudhury.
       Ear-AR: Indoor Acoustic Augmented Reality on Earphones. MobiCom
       '20: Proceedings of the 26th Annual International Conference on
       Mobile Computing and Networking, 2020. DOI:
       10.1145/3372224.3419213
    2. Jay Prakash, Zhijian Yang, Yu-Lin Wei, Haitham Hassanieh, Romit
       Roy Choudhury. EarSense: Earphones as a Teeth Activity Sensor.
       MobiCom '20: Proceedings of the 26th Annual International
       Conference on Mobile Computing and Networking, 2020. DOI:
       10.1145/3372224.3419197
    3. Sheng Shen, Daguan Chen, Yu-Lin Wei, Zhijian Yang, Romit Roy
       Choudhury. Voice Localization Using Nearby Wall Reflections.
       MobiCom '20: Proceedings of the 26th Annual International
       Conference on Mobile Computing and Networking, 2020. DOI:
       10.1145/3372224.3380884

    ==========================================================================

    Link to news story: https://www.sciencedaily.com/releases/2020/12/201215091633.htm

    --- up 7 hours, 57 minutes
    * Origin: -=> Castle Rock BBS <=- Now Husky HPT Powered! (1337:3/111)