Research

The ACH Finland focuses on providing support in the utilization of AI; tools for validation, verification, and uncertainty quantification; and help with GPU programming and data management, among other things. Ongoing and completed projects are highlighted below.
Development of a surrogate model to predict heat fluxes at the tokamak edge

This project aims at developing a surrogate model based on a feed-forward neural network for the quasilinear gyrokinetic code QuaLiKiz. Gyrokinetic simulations are one of the most faithful but also most expensive tools for predicting the transport of heat and particles caused by microturbulence, with full-machine nonlinear simulations requiring days of walltime on tier-0 supercomputers.

Conversely, using the quasilinear approximation, QuaLiKiz can evaluate radial profiles of the fluxes in hours on several CPU cores. While a powerful tool, it is still a costly one for applications such as integrated modelling and flight simulators, which are crucial for tokamak operation.

To further accelerate the repeated evaluations of QuaLiKiz required to find the plasma state by optimisation methods, the surrogate model QLKNN has been trained on a dataset generated by QuaLiKiz. The resulting model is able to perform one evaluation of the fluxes in about a second on a single CPU core.

The original QLKNN-hyper model was trained on QuaLiKiz data covering the core-plasma parameter range. Here we supplement it with a new model trained on data covering the edge-plasma parameter range. This is challenging in several respects, as the turbulence intensity grows significantly towards the tokamak edge, and several micro-instabilities can contribute to the turbulence simultaneously.

During this task, it was found that QuaLiKiz often produced incorrect predictions in the extreme range of temperature and density gradients, necessitating extensive curation of the data. Further inquiry revealed that QuaLiKiz was not capturing the most unstable mode in this parameter range. A modification of the implementation resolved this issue, yielded a significantly cleaner dataset, and extended the range of applicability of QuaLiKiz.

Training of the neural network for QuaLiKiz-edge was then carried out successfully (see figure for illustration). Implementation of the surrogate model for integrated modelling in QLKNN-Fortran is ongoing.
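As an illustration of the workflow (not the actual QLKNN architecture or dataset), a minimal sketch of training a feed-forward surrogate on synthetic flux data with scikit-learn could look as follows; the inputs, layer sizes, and the toy critical-gradient flux are assumptions for the example.

```python
# Minimal sketch: feed-forward NN surrogate for turbulent-flux regression.
# The dataset below is synthetic; a real run would load curated QuaLiKiz data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# X: dimensionless plasma parameters (e.g. normalized gradients), y: heat flux.
X = rng.uniform(0.0, 10.0, size=(10_000, 4))
y = np.maximum(X[:, 0] - 4.0, 0.0) * (1.0 + 0.1 * X[:, 1])  # toy critical-gradient flux

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
surrogate.fit(X_train, y_train)
print("R^2 on held-out data:", surrogate.score(X_test, y_test))
# A trained network evaluates fluxes in well under a second,
# versus hours for a QuaLiKiz profile evaluation.
```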

Contact person for the project: Laurent Chôné

Plasma edge simulations using the open-source Sparselizard C++ finite element library

The Sparselizard open-source C++ finite element library provides a framework for the numerical implementation of multiphysics systems, along with domain-decomposition capabilities for high-performance computing. The collaboration aims to take advantage of these for the numerical simulation of models describing the scrape-off layer (SOL) plasma.

As a first step, the one-dimensional isothermal fluid approximation for the SOL plasma was implemented and successfully validated against analytical solutions. Subsequently, a diffusive neutral model was implemented so that the neutrals and the particle source are determined in a self-consistent manner. The self-consistency was further extended by implementing the energy conservation equation to calculate the plasma temperature profile in the SOL. The implementation uses Newton linearization to address the nonlinearity of the system, solved with a Newton iterative solver.
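To illustrate the Newton linearization independently of Sparselizard's C++ API, here is a minimal Python sketch for a stand-in 1D nonlinear boundary-value problem; the model equation -u'' + u^3 = 1 is an assumption for the example, not the SOL fluid system.

```python
# Minimal sketch: Newton linearization for a 1D nonlinear BVP,
# -u'' + u**3 = 1 on (0, 1) with u(0) = u(1) = 0 (a stand-in problem).
import numpy as np

n = 100                          # interior grid points
h = 1.0 / (n + 1)
u = np.zeros(n)                  # initial guess

# Finite-difference Laplacian with Dirichlet boundary conditions.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

for it in range(20):
    residual = A @ u + u**3 - 1.0
    jacobian = A + np.diag(3.0 * u**2)       # linearization of the u**3 term
    du = np.linalg.solve(jacobian, -residual)
    u += du
    if np.linalg.norm(du) < 1e-10:
        print(f"converged in {it + 1} Newton iterations")
        break
```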

Currently, verification and validation of the energy equation are underway. The implementation will then be extended to two-dimensional SOL plasma and to its parallel computation.

Contact person for the project: Rahul Nagaraja

Applying machine learning for material research related to fusion

Materials research has a significant impact on the design of future devices. This study will therefore support the development of future components for fusion machines.

We are currently creating an efficient surrogate model for the analysis of materials containing vacancies, i.e. atoms missing from the structure. With it we estimate how the energy required for a dislocation to occur differs from that of a full material, i.e. one with no vacancies. Since the number of possible configurations is very large, the problem quickly becomes intractable with traditional approaches. This project makes use of state-of-the-art descriptors for the structures as well as machine learning techniques.
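A minimal sketch of this kind of pipeline is shown below: fixed-length descriptor vectors are regressed against energies with kernel ridge regression. The random descriptors, data sizes, and choice of kernel regression are assumptions for the example; the project's actual descriptors and models are not specified here.

```python
# Minimal sketch: regressing defect energies from structural descriptors.
# Random vectors stand in for real descriptors computed from vacancy configurations.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
n_configs, n_features = 500, 64

descriptors = rng.normal(size=(n_configs, n_features))   # stand-in descriptors
energies = descriptors @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_configs)

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1e-2)
model.fit(descriptors[:400], energies[:400])

pred = model.predict(descriptors[400:])
print("RMSE on held-out configurations:",
      np.sqrt(np.mean((pred - energies[400:]) ** 2)))
```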

Once this stage is concluded, we expect to be able to use the results for more complex dislocation types; for now we focus on straight dislocations. We also hope to apply this knowledge to extend the estimates from vacancies to different atom types, which would allow us to study the properties of alloys in systems not reachable with traditional methods.

Contact person for the project: Bruno Oliveira

High-performance computing and the utilization of supercomputers

Our efforts within the group concentrate on high-performance computing and the utilization of supercomputers at CSC – IT Center for Science, which hosts some of the most powerful computers in Europe. Another specialization is Fortran code optimization and parallelization. Code projects include MIGRAINe, BluMira, and Ravetime. Other codes that have been under consideration are DREAM and LAMMPS/tabGAP.

DREAM code development

In collaboration with Åbo Akademi University, a QuadSolver for the non-linear solver implementation in DREAM was completed. The underlying problem originates from the PETSc solvers used in DREAM being unable to converge, and thus to find solutions, for the Newton-Raphson non-linear implementation in DREAM. After constructing and testing a quadruple-precision solver, it was observed that this resolves the problem. A master's student, Andreas Salminen, was employed, under the supervision of Prof. Jan Westerholm at Åbo Akademi, to construct a CUDA-language QuadSolver to be used on GPUs. The code is finished and ready to be implemented in DREAM.
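As an illustration of why extended precision can rescue such a solve, the toy sketch below contrasts double and ~quad precision on an ill-conditioned Hilbert system using Python's mpmath; the test matrix is an assumption for the example and is unrelated to DREAM's actual equations or the PETSc/CUDA implementations.

```python
# Minimal sketch: a linear solve that degrades in double precision but
# succeeds in ~quad precision. The 12x12 Hilbert matrix (condition number
# ~1e16) is a toy stand-in for an ill-conditioned system.
import numpy as np
from mpmath import mp, lu_solve, matrix, norm

n = 12

# Double precision: most significant digits of the solution are lost.
Hd = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
xd = np.linalg.solve(Hd, Hd @ np.ones(n))     # exact solution is all ones
print("double-precision error:", np.linalg.norm(xd - 1.0))

# ~Quad precision (34 significant digits): the solution is recovered.
mp.dps = 34
Hq = matrix([[mp.mpf(1) / (i + j + 1) for j in range(n)] for i in range(n)])
xq = lu_solve(Hq, Hq * matrix([1] * n))
print("quad-precision error:", norm(xq - matrix([1] * n)))
```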

High-performance data compression

Together with Prof. Keijo Heljanko, we tested compression schemes on data obtained from TSVV-13. For some of the data, good compression was achieved (from about 2.3 GB to 0.8-0.9 GB). After reporting back our test results, it was concluded that no further work is needed for this task.
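A minimal sketch of this kind of comparison is given below, using zlib and lzma from the Python standard library on synthetic data; the schemes actually tested and the TSVV-13 data are not reproduced here.

```python
# Minimal sketch: measuring compression ratios on a binary data buffer.
# Synthetic data stands in for the TSVV-13 output actually tested.
import lzma
import zlib

import numpy as np

rng = np.random.default_rng(2)
field = np.cumsum(rng.normal(size=1_000_000)).astype(np.float32)  # smooth-ish field
raw = field.tobytes()

for name, compress in [("zlib", lambda b: zlib.compress(b, 6)),
                       ("lzma", lzma.compress)]:
    out = compress(raw)
    print(f"{name}: {len(raw) / 1e6:.1f} MB -> {len(out) / 1e6:.1f} MB "
          f"(ratio {len(raw) / len(out):.2f})")
```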

Eirene support

As part of his PhD work, MSc Oskar Lappi has been developing the Eiron code, which is intended as a scaled-down version of a fundamentally new algorithm implementation in EIRENE. Jan Åström has been acting as one of Oskar's thesis supervisors. Furthermore, TSVV-5 requested that an EUDAT cooperation be set up to serve data handling and sharing. Some progress was achieved, but technical difficulties and security-rights issues slowed down progress, and the project was terminated.

MIGRAINe code development

The MIGRAINe code was in need of faster computations. To tackle this problem, a research project was set up on a CSC computer. After porting, test running, and profiling, a straightforward parallelisation scheme using MPI was implemented. The code was run on up to roughly 1000 compute cores with close-to-perfect scalability, and thus almost 1000 times faster than the original serial code. In addition, a highly effective code-optimization opportunity was discovered in the core of the code. This reduced the computation time by a further 92-93% and removed an L3-cache bottleneck, thus allowing the close-to-perfect scaling mentioned above.
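The parallelisation pattern resembles the minimal mpi4py sketch below, where independent work items are distributed round-robin across ranks and reduced at the end; the per-case computation and case count are placeholders, not MIGRAINe's physics.

```python
# Minimal sketch: straightforward MPI parallelisation with mpi4py.
# Launch with e.g.: mpirun -n 1000 python sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_cases = 100_000                       # independent work items (placeholder)
my_cases = range(rank, n_cases, size)   # round-robin assignment to ranks

# Stand-in for the per-case computation done by the original serial code.
local_result = sum(np.sin(i) ** 2 for i in my_cases)

total = comm.reduce(local_result, op=MPI.SUM, root=0)
if rank == 0:
    print("aggregated result:", total)
```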


LAMMPS-tabGAP optimization

The tabGAP implementation in the LAMMPS MD code considerably improves the quality of the results compared to standard MD. The drawback is that the code becomes roughly 10 times slower. To improve the performance, we carried out compiler-option and mathematical-function optimization. Together with a manual AVX implementation in the core of the code, done at Åbo Akademi, we were able to improve the code's performance by about a factor of 3.

PlasMod Grad-Shafranov solver rewrite

The e3m.f90 code used with the BluMira code computes an analytical solution to the Grad-Shafranov equation, theoretically given in the form of an integral equation. A rewrite of the outdated and unnecessarily complicated code was performed. The code structure has been clarified and redundant computations have been removed, shrinking the length of the code by about 25%.

Ravetime diffusion simulator

The Ravetime code simulates particle diffusion in solid materials and is applied to fusion reactor materials. The code is based on an ordinary differential equation (ODE) solver and is programmed in object-oriented Fortran. The code has been parallelized in one version with OpenMP and in another version with MPI.
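The core idea resembles the method-of-lines sketch below (in Python with SciPy rather than Fortran); the 1D diffusion problem, coefficients, and boundary conditions are illustrative assumptions, not Ravetime's actual model.

```python
# Minimal sketch: 1D diffusion via the method of lines.
# Discretize du/dt = D * d2u/dx2 in space; integrate the ODE system in time.
import numpy as np
from scipy.integrate import solve_ivp

D = 1e-2                                  # diffusion coefficient (arbitrary units)
n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u0 = np.exp(-((x - 0.5) / 0.05) ** 2)     # initial concentration pulse

def rhs(t, u):
    dudt = np.zeros_like(u)
    dudt[1:-1] = D * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return dudt                           # endpoints fixed: absorbing boundaries

sol = solve_ivp(rhs, (0.0, 5.0), u0, method="BDF")
print("final peak concentration:", sol.y[:, -1].max())
```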

Contact person for the project: Jan Åström

Bayesian methods for accelerating validation of predictive models for fusion plasmas

Magnetic confinement fusion research is characterized by expensive experiments, limited and noisy diagnostic information, and computationally costly physics models with uncertain, phenomenological input parameters. In such an environment, scientists conduct hypothesis testing based on limited information, and quantification of uncertainties is central to assessing the degree of belief in the inferred conclusions. However, the conventional approach of manually fitting the free parameters of the computational models in validation or experiment-interpretation tasks leads to untraceable uncertainties. Bayesian inference (BI) algorithms provide a principled approach to quantify the uncertainty, as a probability distribution, for the state of the investigated system or the validity of a hypothesis, given the available information.

When operating with computationally costly models or limited experimental resources, data-efficiency is key to maximizing the information gain in establishing this probability distribution. Such efficiency can be achieved by combining Bayesian optimization (BO) with the overall BI task. BO is a powerful framework for data-efficient global optimization of costly, non-convex functions, without access to first- or second-order derivatives. On the one hand, BO uses BI to build a statistical approximation in the space of functions that represents the costly model, leveraging the Bayesian quantification of uncertainty over functions to efficiently refine the approximation where needed. On the other hand, the overall BI task is focused on establishing posterior probability distributions over the uncertain state of the investigated system or hypothesis. This project investigates the application of BI and BO to accelerate validation of predictive models for fusion plasmas.
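A minimal BO loop is sketched below: a Gaussian process surrogate is fitted to the evaluations made so far, and an expected-improvement acquisition selects the next point. The one-parameter objective is a synthetic stand-in for a costly plasma model, and the kernel and iteration counts are assumptions for the example.

```python
# Minimal sketch: Bayesian optimization of a costly 1D objective.
# A GP surrogate is refined where the expected improvement (EI) is largest.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def costly_mismatch(theta):        # stand-in for an expensive model-data mismatch
    return (theta - 0.3) ** 2 + 0.05 * np.sin(20.0 * theta)

grid = np.linspace(0.0, 1.0, 500).reshape(-1, 1)
X = np.array([[0.1], [0.9]])       # initial evaluations
y = costly_mismatch(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6, normalize_y=True)
for _ in range(15):
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    z = (y.min() - mu) / np.maximum(sigma, 1e-12)
    ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # EI for minimization
    x_next = grid[np.argmax(ei)]   # evaluate where EI is largest
    X = np.vstack([X, [x_next]])
    y = np.append(y, costly_mismatch(x_next[0]))

print("best parameter found:", X[np.argmin(y)][0])
```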

Contact person for the project: Aaro Järvinen

Eiron

Eiron is a toy model of a Monte Carlo neutral particle transport solver created to study the performance characteristics of different parallel algorithms that could be applied to EIRENE, a feature-complete neutral particle transport solver used by the fusion community. EIRENE has trouble scaling to new problem sizes and needs a reorganization of its design in order to do so. Eiron is a modular reimplementation of the core structures of EIRENE using modern software development practices.

Eiron has limitations as a neutral particle transport solver: it only works on a structured 2D grid, variances aren't calculated, and collision rates are not velocity-dependent. Eiron provides multiple parallel algorithms using both functional decomposition and domain decomposition. A deterministic random number generator ensures that all algorithms produce an equivalent result.
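The deterministic-RNG idea can be illustrated as follows: giving each particle its own counter-based stream, keyed by particle id, makes the result independent of which thread or subdomain processes the particle. The sketch uses NumPy's Philox generator; whether Eiron uses exactly this scheme is an assumption.

```python
# Minimal sketch: one way to get deterministic per-particle randomness,
# using counter-based NumPy Philox streams keyed by particle id.
import numpy as np

def path_length(particle_id, sigma=5.0):
    rng = np.random.Generator(np.random.Philox(key=particle_id))
    return rng.exponential(1.0 / sigma)

# The draw depends only on the particle id, not on which thread,
# process, or subdomain happens to simulate the particle.
assert path_length(7) == path_length(7)
print("particle 7 path length:", path_length(7))
```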

For functional decomposition, Eiron splits the particle simulation and field estimation (a.k.a. tallying) tasks into separate steps, and the steps can be run synchronously back-to-back or composed into a pipeline. Eiron also allows the user to choose not to functionally decompose simulation and tallying, and instead combine the two into a single function. Regardless of the functional decomposition setup, Eiron can use OpenMP-based accelerators for both simulation and tallying, where tallying can be done either on thread-private buffers or on a shared buffer.
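The decomposed versus fused structure can be sketched in a toy 1D Monte Carlo model as below; the geometry, cross-section, and tallying scheme are placeholder assumptions, not Eiron's implementation.

```python
# Minimal sketch: decomposed vs. fused simulate/tally in a toy 1D model.
import numpy as np

rng = np.random.default_rng(42)
n_cells, n_particles, sigma = 50, 10_000, 5.0

def simulate(n):
    """Fly n particles from x = 0; return the cell index of each collision."""
    flight = rng.exponential(1.0 / sigma, size=n)
    return np.minimum((flight * n_cells).astype(int), n_cells - 1)

def tally(collision_cells, buffer):
    """Accumulate collision counts into a field-estimate buffer."""
    np.add.at(buffer, collision_cells, 1)

# Decomposed: a simulation step followed by a tallying step (pipelineable).
field = np.zeros(n_cells)
events = simulate(n_particles)
tally(events, field)

# Fused: simulate and tally in a single pass, with no intermediate event buffer.
fused = np.zeros(n_cells)
np.add.at(fused, simulate(n_particles), 1)
```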

For domain decomposition, Eiron splits the grid into subdomains and assigns subdomains to processing elements. At the moment, domain decomposition is only implemented for tallying, as domain decomposing the simulation grid requires solutions for passing particle state between simulator subdomains.

Work is ongoing on the more complicated simulation domain decomposition algorithms and a statistical comparison of the estimated field solutions from EIRENE and Eiron. Huw Leggate from EUROfusion ACH IPP-Garching & Dublin City University and Yannick Marandet from ISFIN at Aix-Marseille University have contributed to the design and implementation of Eiron.

Eiron is also being used to develop a prototype hybrid kinetic-diffusive particle simulation model in cooperation with Emil Løvbak and Giovanni Samaey from KU Leuven.

Contact person for the project: Oskar Lappi

GENE Gyrokinetic Uncertainty Quantification with Sparse Grids

Many aspects of plasma turbulence and transport remain a mystery. Understanding the motion of the plasma under certain tokamak device scenarios is key to reducing the transport losses that are a major limitation on fusion performance. The gyrokinetic Vlasov equations are a set of differential equations that describe the evolution of charged-particle distributions within a magnetic field. GENE uses numerical techniques to solve the gyrokinetic Vlasov equations and can simulate many turbulence and transport phenomena.

To understand the phenomena within real tokamak devices, the plasma properties driving turbulence and transport can be measured and supplied to GENE as inputs. These measurements come with experimental errors, which limit the accuracy of GENE, and the resulting uncertainty needs to be quantified. By treating GENE as a black box and scanning over the uncertainty of the measured quantities, it is possible to deduce the uncertainty of GENE's outputs.

To reduce the number of scan points required for an accurate uncertainty quantification, adaptive sparse grids and interpolation will be implemented. This ensures that the selected points are the ideal ones to train an accurate interpolation algorithm, which can then be used to scan over the uncertainty of the measured quantities at much higher resolution with little computational power. The sparse-grid approach will also allow a sensitivity analysis that identifies the key experimental parameters affecting the turbulence and transport, providing key insights to researchers within the field.
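The black-box propagation idea can be sketched as below, where a cheap interpolant built from a few expensive evaluations is sampled densely over the input uncertainties. A regular grid with linear interpolation stands in for the adaptive sparse grids, and the two-parameter flux function is a synthetic assumption, not GENE.

```python
# Minimal sketch: propagating input uncertainty through a cheap surrogate.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def black_box(grad_T, grad_n):     # stand-in for an expensive GENE evaluation
    return np.maximum(grad_T - 4.0, 0.0) ** 1.5 * (1.0 + 0.2 * grad_n)

# A few "expensive" evaluations on a coarse grid of the uncertain inputs.
gt = np.linspace(3.0, 7.0, 9)
gn = np.linspace(0.0, 3.0, 9)
values = black_box(*np.meshgrid(gt, gn, indexing="ij"))
surrogate = RegularGridInterpolator((gt, gn), values,
                                    bounds_error=False, fill_value=None)

# Cheap, dense sampling of the measurement uncertainty through the surrogate.
rng = np.random.default_rng(3)
samples = np.column_stack([rng.normal(5.0, 0.4, 100_000),    # grad_T +/- error
                           rng.normal(1.5, 0.3, 100_000)])   # grad_n +/- error
flux = surrogate(samples)
print(f"flux = {flux.mean():.3f} +/- {flux.std():.3f}")
```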

Contact person for the project: Daniel Harley Jordan