The conference will run as a single track of talks. A poster session will run on the morning of August 25th, and a welcome reception will be held on the evening of August 24th.

The conference will be held in the O'Tnúthail Theatre, located in the Arts Millennium building on the South Campus of the National University of Ireland, Galway.

Conference Programme

Wednesday 24th August

19:30 Welcome reception, Aula Maxima, NUI Galway


Thursday 25th August

O'Tnúthail Theatre, Arts Millennium Building, NUI Galway

08:00 – 09:00 Registration

09:00 – 10:00 Oral paper presentations

10:00 – 11:00 Coffee / IMVIP poster presentations

11:00 – 12:00 Keynote: Jenny Read

Title: How many ways are there to solve stereoscopic vision?

  • Abstract: Matching up the two perspectives from which we see the world requires the brain to compute when particular locations on the retinas are viewing the same object in space. In principle, there are several ways this could be done. For example, corresponding points should usually have roughly the same contrast, luminance, colour, texture and motion. A stereo vision system could proceed by detecting distinctive features or objects in each eye individually and then finding pairs of features that match. Alternatively, it could simply assess how well regions of the retinas match, without identifying particular features or objects; the output of this computation could then guide scene segmentation and object identification. Both approaches have been employed in machine stereo algorithms. I will argue that human stereopsis seems to consist of several distinct modules which use different approaches, and will contrast this with a very different form of stereopsis found in an insect, the praying mantis.


12:00 – 13:00 Oral paper presentations

13:00 – 14:00 Lunch

14:00 – 17:00 Industry Session

14:00 – 15:00 Keynote: Alexandru Drimbarean, FotoNation Ireland

Title: Mobile Computational Imaging

  • Abstract: From its beginning with the transition from film to digital, mobile computational imaging technologies have evolved, forever transforming the way we experience photography. Even though the latest smartphones can capture images with amazing quality and implement a wide range of features, such as panorama, HDR or VIS, that were not possible in the film era, this "magic" is not without challenges. This presentation will outline the key challenges of mobile computational imaging and the evolution of different solutions leading to FotoNation's IPU (Image Processing Unit), a hardware unit comprising a set of tightly connected IP cores that provide high-performance, low-power computational imaging.


15:00 – 16:00 Keynote: Michael Starr & George Siogkas, Valeo Vision Systems

Title: Automotive Computer Vision - from ADAS to Autonomous Driving

  • Abstract: The automotive world has always been a lucrative application space for cutting-edge computer vision research. Nowadays, the hype around driverless vehicles continues its upward trend, and OEMs and suppliers alike are competing in a race to be the first to deliver a fully autonomous vehicle. The aim of this presentation is to connect the dots, starting with the low-level supportive computer vision algorithms for ADAS that appeared 10 years ago and ending at the point where fully autonomous vehicles are expected to hit the market in the next 5 years. We will present past, current and future computer vision algorithms that are, or soon will be, available on commercial vehicles. In doing so, we will point out challenges, some more obvious than others, that dramatically increase the complexity of the solutions and dictate a cautious approach to developing safety-critical algorithms. We will also try to predict what the future has in store for the research community, the OEMs and the consumers.


16:00 – 16:30 Coffee

16:30 – 17:30 Keynote: Alireza Dehghani, Movidius

Title: Embedded Computer Vision and Inference

  • Abstract: Typical embedded computer vision platforms used in applications such as robotics include the Odroid XU4, Nvidia Tegra K1 and Tegra X1. These platforms consume 5-15 W, depending on the deep network being used for inference. Their level of integration is also low, and they require additional boards such as a Teensy 3.1, level shifters, etc. to perform tasks such as motor control and receiving inputs from sensors. Systems based on these platforms used in academia, like MIT's Racecar Project, often cost thousands of dollars, which is clearly beyond the reach of most students and hobbyists. To address this need, this talk presents a low-cost, low-power visual-intelligence computer vision platform based on the H2020 EoT (Eyes of Things) project and the Movidius Myriad2 VPU (Vision Processing Unit), together with associated machine vision, communications and motor-control libraries and the Movidius Fathom deep-learning framework. The platform allows a variety of low-cost, low-power computer vision systems to be built from kits and programmed in MicroPython. The ultimate goal is to enable the mass production of advanced, highly programmable devices for developers, students and hobbyists at a sub-$100 price point that includes hardware and software.


19:30 BBQ at Jury's Inn Hotel (included in the conference registration if pre-registered online)



Friday 26th August

O'Tnúthail Theatre, Arts Millennium Building, NUI Galway

09:00 – 10:00 Keynote: Chris Solomon

Title: Making Faces – From concept to commercial product

  • Abstract: This talk will describe the scientific basis of work undertaken over a number of years at the University of Kent to find a more effective way to produce facial composites. Facial composites are images produced from an eyewitness's memory, which are used by police forces around the world to help identify criminal suspects. We will outline the core concepts and explain how they have led to the E-FIT system, now in routine use in more than 20 countries around the world.


10:00 – 10:30 Coffee

10:30 – 12:00 Oral paper presentations

12:00 – 13:00 Closing Keynote: Rudolf Mester

Title: Predictive Visual Perception for Automotive Applications

  • Abstract: Understanding the world around us while we are moving means continuously maintaining a dynamically changing representation of the environment, making predictions about what we will see next, and correctly processing those perceptions which were surprising relative to our predictions. This principle holds both for animate beings and for technical systems that successfully participate in traffic. At the VSI Lab, we put special emphasis on this recursive / predictive approach to visual perception in ongoing projects for driver assistance and autonomous driving. These processing structures are complemented by statistical modelling of egomotion, the environment and the measurement process. In our opinion, this approach leads to particularly efficient systems, since computational resources may be focused on 'surprising' (and thus rare) observations, and since it allows for a large reduction of search spaces in typical visual matching and tracking tasks. The talk will present examples of such predictive / recursive processing structures. Furthermore, recent results in monocular, stereo and multi-monocular (surround vision) applications will be shown.


13:00 – 13:05 IMVIP closing

13:05 – 13:35 IPRCS meeting