Self-reconfiguring distributed vision

  • Andrea Cavallaro. Self-reconfiguring distributed vision. Plenary talk at the International Work-Conference on Artificial Neural Networks (IWANN), 2015.
    [BibTeX] [Abstract]

    Assistive technologies, environmental monitoring, search and rescue operations, and security and entertainment applications will benefit considerably from the sensing capabilities offered by emerging networks of wireless cameras. These networks are composed of cameras that may be wearable or mounted on robotic platforms and that can autonomously sense, compute, decide and communicate. These cameras and their vision algorithms need to adapt their hardware and algorithmic parameters in response to unknown or dynamic environments and to changes in their task(s), i.e., they need to self-reconfigure. Cooperation among the cameras may lead to adaptive and task-dependent visual coverage of a scene, or to increased robustness and accuracy in object localization under varying poses or illumination conditions. In this talk I will cover challenges and current solutions in self-reconfiguring distributed vision using networks of wireless cameras. In particular, I will discuss how cameras may learn to improve their performance. Moreover, I will present recent methods that allow cameras to move and to interact locally, forming coalitions adaptively, in order to provide coordinated decisions under resource and physical constraints.

    @Misc{2015-06-CAVALLARO,
      author = {Andrea Cavallaro},
      title = {{Self-reconfiguring distributed vision}},
      howpublished = {Plenary talk at the International Work-Conference on Artificial Neural Networks (IWANN)},
      date = {2015-06-11},
      year = {2015},
      address = {Palma de Mallorca},
      abstract = {Assistive technologies, environmental monitoring, search and rescue operations, and security and entertainment applications will benefit considerably from the sensing capabilities offered by emerging networks of wireless cameras. These networks are composed of cameras that may be wearable or mounted on robotic platforms and that can autonomously sense, compute, decide and communicate. These cameras and their vision algorithms need to adapt their hardware and algorithmic parameters in response to unknown or dynamic environments and to changes in their task(s), i.e., they need to self-reconfigure. Cooperation among the cameras may lead to adaptive and task-dependent visual coverage of a scene, or to increased robustness and accuracy in object localization under varying poses or illumination conditions. In this talk I will cover challenges and current solutions in self-reconfiguring distributed vision using networks of wireless cameras. In particular, I will discuss how cameras may learn to improve their performance. Moreover, I will present recent methods that allow cameras to move and to interact locally, forming coalitions adaptively, in order to provide coordinated decisions under resource and physical constraints.}
    }
