Antero Vainio defends his PhD thesis on Real-Time Mobile AI Applications

On Tuesday the 5th of May 2026, M.Sc. Antero Vainio defends his PhD thesis on Real-Time Mobile AI Applications. The thesis is related to research done in the Content-centric Structures and Networking group at the Department of Computer Science.

M.Sc. Antero Vainio defends his PhD thesis "Real-Time Mobile AI Applications" on Tuesday the 5th of May 2026 at 9:00 in the University of Helsinki Chemicum Building, Auditorium A129 (A. I. Virtasen aukio 1, 1st floor). His opponent is Professor Jan S. Rellermeyer (Leibniz University Hannover, Germany) and the custos is Professor Sasu Tarkoma (University of Helsinki). The defence will be held in English.

The thesis of Antero Vainio is part of research done at the Department of Computer Science and in the Content-centric Structures and Networking group at the University of Helsinki. His supervisor has been Professor Sasu Tarkoma (University of Helsinki).

Real-Time Mobile AI Applications

The role of mobile networks has gradually shifted from the sole purpose of transmitting phone calls to providing data transmission for internet-based mobile applications. As data rates in the networks and processing power in computer systems have increased, applications can use large volumes of data in their operation. Improvements in the latest artificial neural networks have created new possibilities for mobile AI applications, from environmental monitoring and eXtended Reality (XR) to smart industry and autonomous vehicles. However, along with the quality of these applications, their computational requirements and energy consumption have increased.

With server-based processing, next-generation mobile AI applications can support a wider variety of mobile devices, from low-cost IoT sensor devices to wearables like smart watches or XR headsets. By offloading the heavy neural network processing to servers, mobile devices can reduce both their power consumption and their physical size. To reduce the network latency involved in server offloading, Multi-access Edge Computing (MEC) is a paradigm in which cloud-based computational resources are located one network hop away from the mobile devices. However, due to the lack of commercially available MEC services, recent scientific studies have largely relied on software simulations to confirm the performance benefits of MEC.

In contrast to simulated results, we show case studies from a MEC testbed with Wi-Fi access and Graphics Processing Units (GPUs) for AI processing, connected to available cloud infrastructure for additional computational resources. We optimize the applications for the available MEC infrastructure and increase the throughput of AI requests, demonstrating the performance gains of MEC over traditional cloud-based processing. We present two novel algorithms that reduce the network traffic and training costs in edge-based AI, leading to up to a 150% improvement in overall system throughput. Lastly, we design and validate a network streaming protocol for balancing the load of requests arriving at the MEC server, gaining up to a 100% increase in system throughput.

While our contributions advance the field of MEC research, more results are needed to fully realize the potential of MEC. In particular, our results are limited to Wi-Fi networks with commodity GPU-equipped edge server hardware. Similar experiments are needed to study application performance in cellular networks and with other forms of hardware-accelerated processing. Nevertheless, our work shows that MEC is a viable infrastructure option for real-time mobile AI applications.

Availability of the dissertation

An electronic version of the doctoral dissertation will be available in the University of Helsinki open repository Helda at .

Printed copies will be available on request from Antero Vainio: .