Eemeli Annala & Aleksi Vuorinen
Neutron stars are the remnants of old stars that have undergone a supernova explosion and a subsequent gravitational collapse that halted just short of forming a black hole. They contain the densest matter in our observable Universe: one millilitre of neutron-star material weighs more than 10¹¹ kg.
A question that has been puzzling both astrophysicists and particle theorists for decades is whether the cores of heavy neutron stars might contain an entirely new phase of matter with quarks and gluons as the fundamental degrees of freedom. If confirmed, the discovery of such cold quark matter would be a breakthrough in astrophysics and would additionally shed light on the structure of the phase diagram of Quantum Chromodynamics.
In a recent article, just accepted for publication in Physical Review X, we implemented a set of new neutron-star observations in a framework designed to build a model-independent family of viable equations of state for neutron-star matter. The new observational information included a new radius measurement by the NICER collaboration and the likely formation of a black hole in the astrophysical event that led to the first observed gravitational-wave signal from a binary neutron-star merger, GW170817.
Our results, displayed in the figures below, indicate a dramatic reduction in the current uncertainties associated with the neutron-star matter equation of state and the neutron-star mass-radius relation. Interestingly, all the now excluded equations of state corresponded to very high sound speeds in neutron-star matter, in which case the existence of quark-matter cores in massive neutron stars would be uncertain. The new results thus significantly strengthen the case for quark-matter cores, argued for already in our previous Nature Physics publication in 2020.
Figure: The neutron-star matter equation of state (left) and the corresponding neutron-star mass-radius relation (right), obtained using a recent radius measurement and the likely presence of a supramassive neutron star in the GW170817 event. The color coding corresponds to the highest speed of sound squared reached at any density, and the dashed lines indicate previous results.
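The spirit of the ensemble approach can be sketched with a toy Monte Carlo: generate random piecewise-linear speed-of-sound curves (causality caps c_s² at 1 in units of the speed of light) and discard those exceeding an illustrative bound on the peak sound speed. The density range, node count, and cut value below are assumptions for illustration only, not the actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_cs2_curve(n_segments=5):
    """Piecewise-linear speed-of-sound-squared curve on a density grid.

    Node values are drawn uniformly in [0, 1] in units where c = 1,
    so causality (cs^2 <= 1) is built in by construction.
    """
    n = np.linspace(1.0, 10.0, 100)  # density in units of saturation density (illustrative)
    node_n = np.linspace(1.0, 10.0, n_segments + 1)
    node_cs2 = rng.uniform(0.0, 1.0, n_segments + 1)
    return n, np.interp(n, node_n, node_cs2)

def passes_cut(cs2, cs2_peak_max=0.6):
    """Toy stand-in for an observational cut: reject curves whose
    peak sound speed squared exceeds an illustrative bound."""
    return cs2.max() <= cs2_peak_max

ensemble = [random_cs2_curve() for _ in range(10_000)]
kept = [pair for pair in ensemble if passes_cut(pair[1])]
print(f"kept {len(kept)} of {len(ensemble)} candidate equations of state")
```

In the real framework the surviving curves, not the rejected ones, define the uncertainty band shown in the figure; here the cut merely illustrates how observations that disfavour high sound speeds shrink the ensemble.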
Sami Raatikainen & Syksy Räsänen
Most of the matter in the universe is dark matter, which has been detected only via its gravitational interaction. Black holes are a prominent candidate.
Dark matter predates the first stars. Therefore, if dark matter consists of black holes, they were not formed in stellar collapse like run-of-the-mill black holes. Creating them requires matter to be strongly clumped and collapse in some rare regions already at early times.
The most successful explanation for the existence of inhomogeneities in the universe –like stars and galaxies– is cosmic inflation. According to inflation, all structures trace their origin back to quantum fluctuations in the primordial universe.
We studied a model of cosmic inflation in which the inflationary quantum fluctuations not only produce normal structures, but also seed the right amount of dark matter in black holes with the same mass as the asteroid Eros.
The key new ingredient in our calculation is that we consistently take into account that, while the quantum fluctuations affect how the universe evolves, the evolution of the universe in turn changes the quantum noise.
We solved this coupled evolution by running over 100 billion simulations of cosmic inflation, using over 1 million supercomputer CPU hours at CSC.
We found that taking this effect of the evolution into account enhances the quantum kicks, increasing the number of Eros-mass black holes by a factor of 100 000. This shows that it is essential to treat quantum noise consistently.
Figure: Without treating quantum noise consistently, its statistics are Gaussian – the bell curve shown in blue. Our treatment reveals that the distribution has an exponential tail, shown in red, meaning that the rare fluctuations that form black holes are far more common than the Gaussian estimate suggests.
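The qualitative effect can be illustrated with a toy Langevin simulation integrated with the Euler-Maruyama scheme. All parameters below are illustrative and do not correspond to the actual inflationary model; the point is only that letting the noise amplitude depend on the evolving field fattens the tail of the distribution relative to the constant-noise, Gaussian case.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_traj=200_000, n_steps=100, dt=0.05, coupled=True):
    """Euler-Maruyama integration of a toy inflaton-like Langevin equation.

    With coupled=True the noise amplitude depends on the field value,
    mimicking the back-reaction of the evolving background on the quantum
    noise; with coupled=False the noise is constant and the statistics
    stay Gaussian. All parameters are illustrative only.
    """
    phi = np.zeros(n_traj)
    for _ in range(n_steps):
        amp = 1.0 + (0.5 * phi if coupled else 0.0)
        phi += -0.1 * phi * dt + amp * np.sqrt(dt) * rng.standard_normal(n_traj)
    return phi

def tail_fraction(x, k=4.0):
    """Fraction of trajectories more than k standard deviations from the mean."""
    return np.mean(np.abs(x - x.mean()) > k * x.std())

phi_coupled = simulate(coupled=True)
phi_gauss = simulate(coupled=False)
print(tail_fraction(phi_coupled), tail_fraction(phi_gauss))
```

The coupled run yields a far larger fraction of extreme excursions than the Gaussian one, which is the toy analogue of the factor-of-100 000 enhancement in rare, black-hole-forming fluctuations.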
Lauri Niemi & David Weir
Shortly after the Big Bang, the early universe is expected to have undergone phase transitions as it cooled down from its hot initial state. Depending on the particle content of the universe, these transitions can be violent first-order phase transitions that fill the universe with gravitational waves and produce the out-of-equilibrium conditions required to explain the lack of antimatter in the present universe. In the coming decades, stochastic gravitational waves will be probed over a broad frequency range by several new experiments. Observation of gravitational waves from cosmological phase transitions would shed light on the conditions fractions of a second after the Big Bang.
Unfortunately, cosmological relics from phase transitions are improbable within the Standard Model of particle physics. Both the transition between hadronic and quark matter and the electroweak phase transition associated with the Higgs mechanism are known to occur smoothly rather than through a first-order transition, much like the liquid-gas transition in a supercritical fluid. However, many models of new physics beyond the Standard Model involve new Higgs fields and predict discontinuous phase transitions in the early universe. In particular, the electroweak Higgs mechanism could have occurred through consecutive phase transitions between Higgs fields of different types, a possibility studied in detail in our recent article (link below).
A technical complication in studying phase transitions within quantum field theory is that the long-distance thermodynamics is dominated by strongly interacting bosons. Making robust theoretical predictions thus requires nonperturbative methods such as numerical lattice simulations. Unfortunately, due to the relative complexity of these simulations, almost all of the existing literature on cosmological phase transitions with multiple Higgs fields is based on low-order perturbative estimates, which are often ambiguous because of large intrinsic uncertainties in the calculations.
In our work, we utilized nonperturbative lattice simulations to probe the electroweak phase structure in the presence of a new Higgs field in the adjoint representation. We found that discontinuous transitions are absent in most of the allowed parameter space, whereas perturbation theory predicts a weakly first-order transition. In a relatively narrow region of parameter space the model admits a two-stage electroweak phase transition, with a different realization of the Higgs mechanism at intermediate temperatures and large latent heat available for gravitational-wave production in the latter stage.
Figure 1: Phase-transition patterns as the Standard Model + adjoint Higgs theory is cooled down from the high-temperature phase labelled O. The axes give the mass (in GeV) of the new neutral Higgs particle and its quadratic coupling to the Standard Model Higgs; the self-interaction strength is kept fixed. The phase with an active Higgs mechanism in the adjoint Higgs direction is labelled Σ, and the Standard Model-like Higgs phase is labelled ϕ. In regions IV and V there is either a smooth crossover or a first-order transition directly to the ϕ phase, while regions II-III admit an intermediate Σ phase. Region I is ruled out by phenomenology.
Minna Palmroth & the Vlasiator team
Space is the richest plasma laboratory within reach, and many of the fundamental and universal physics discoveries concerning the fourth state of matter – plasma – are rooted in space physics. Near-Earth space is the only place to which one can send spacecraft to study plasmas in situ. Normally, however, only a few satellites can be flown, leaving gaps in the observations – and demanding modelling of space.
Modelling space plasmas falls into three broad categories, ranging from computationally feasible to almost impossible. The easiest is to treat the plasma as a fluid, which allows a coarse grid in which each cell is like a pixel in a 3D camera picture. The computationally most demanding is to model electrons and protons as particles, in which case the simulation volume must be filled with tiny grid cells capturing electron physics. Since space is vast, electron-scale modelling cannot be extended to the entire near-Earth space.
There is a middle way, in which protons are treated as particles and electrons as a fluid. Even this hybrid method is so computationally demanding that it has been feasible only in two spatial dimensions – until now: the Vlasiator group at the University of Helsinki has extended the world’s most accurate space-environment simulation, Vlasiator, to cover all six dimensions.
If you measure the temperature of a plasma in space, you do not get a nice normal distribution of particle velocities like in air. You might find several different temperatures at a single location, meaning that to model the plasma temperature – and by extension almost everything that matters in plasmas – you need to model how the particles are distributed in velocity. This requires an additional three-dimensional velocity space inside each cell of the three-dimensional position grid. Thus, six dimensions are needed.
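The role of the inner velocity grid can be sketched in a few lines: storing the full distribution f(v) on a 3D velocity grid at one spatial cell lets the fluid quantities (density, bulk velocity, temperature) be recovered as velocity moments. The drifting Maxwellian and the grid resolution below are illustrative choices, not Vlasiator's actual discretization.

```python
import numpy as np

# 3D velocity grid: the "inner" three dimensions at one spatial cell
v = np.linspace(-5.0, 5.0, 40)               # velocity axis in thermal units
vx, vy, vz = np.meshgrid(v, v, v, indexing="ij")
dv3 = (v[1] - v[0]) ** 3                     # volume of one velocity-space cell

# A drifting Maxwellian f(v); density, drift, and temperature are
# illustrative values chosen for this sketch
n0, v0, T = 2.0, 1.0, 1.0
f = n0 * (2.0 * np.pi * T) ** -1.5 * np.exp(
    -((vx - v0) ** 2 + vy**2 + vz**2) / (2.0 * T)
)

# Fluid quantities are velocity moments of f
density = f.sum() * dv3
bulk_vx = (f * vx).sum() * dv3 / density
temperature = (f * ((vx - bulk_vx) ** 2 + vy**2 + vz**2)).sum() * dv3 / (3.0 * density)
print(density, bulk_vx, temperature)  # ≈ 2.0, 1.0, 1.0
```

A real plasma distribution need not be a single Maxwellian, which is exactly why the full f(v) must be carried in every spatial cell: several co-located populations with different temperatures would simply appear as structure in f.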
With the help of a PRACE Tier-0 grant and the HLRS supercomputer Hawk in Stuttgart, the Vlasiator group completed the world’s first 6D simulation of one of the most mysterious questions in space physics: what causes the Earth’s magnetospheric tail to erupt plasma clouds at times? This question has been answered neither by observations nor by previous fluid models, because the decisive physics occurs at ion scales.
Figure: The world’s first 6-dimensional simulation of ion-scale dynamics in near-Earth space. The solar wind flows into the simulation from the right. The Earth’s magnetic field is an obstacle to the solar wind flow, and hence a bullet-shaped magnetosphere forms – a similar process makes water flow around a rock in a river. The latest Hawk runs effectively provide 4 million self-consistent spacecraft-like observations of ion-scale physics in near-Earth space, making it possible to study long-standing mysteries in space physics.
Mykhailo Girych, Giray Enkavi, Tomasz Rog & Ilpo Vattulainen
The findings of a new study challenge the prevailing thinking on the primary role of serotonin and other neurotransmitters in the effects of antidepressants.
The effects of selective serotonin reuptake inhibitors (SSRIs) and other conventional antidepressants are believed to be based on their increasing the levels of serotonin and noradrenalin in synapses, while ketamine, a new rapid-acting antidepressant, is thought to function by inhibiting receptors for the neurotransmitter glutamate.
Neurotrophic factors regulate the development and plasticity of the nervous system. While all antidepressants increase the quantity and signalling of brain-derived neurotrophic factor (BDNF) in the brain, the drugs have so far been thought to act on BDNF indirectly, through serotonin or glutamate receptors.
A new study combining neuroscience and computational biophysics demonstrated, however, that antidepressants bind directly to a BDNF receptor known as TrkB. This finding challenges the primary role of serotonin or glutamate receptors in the effects of antidepressants. In essence, the effects of antidepressants on plasticity do not require increased serotonin levels or the inhibition of glutamate receptors, as previously thought.
The binding site of antidepressants in the transmembrane region of TrkB was identified through biomolecular simulations performed at the Department of Physics, University of Helsinki. Biochemical binding studies and mutations introduced in the TrkB receptor verified the site. Biomolecular simulations also demonstrated that the structure of TrkB is sensitive to the cholesterol concentration of the cell membrane. TrkB is displaced in cholesterol-rich membrane compartments, such as synaptic membranes. In addition to findings pertaining to the effects of antidepressants, the study produced a substantial amount of new information on the structure and function of the growth factor receptor.
Figure: Antidepressant drugs bind to dimerized transmembrane domains of TrkB neurotrophin receptors and promote BDNF signaling in synaptic membranes. (Image: Mykhailo Girych, Giray Enkavi).
Fredric Granberg & Kai Nordlund
One of the key hurdles to designing a commercially viable fusion power plant is finding materials that can withstand the enormous heat of the fusion plasma, around 100 million degrees. While the plasma does not come into direct contact with the materials, some fraction of the very hot hydrogen isotopes and electrons will escape the plasma and interact with the inner wall of the reactor. These escaping particles can heat the material, erode it, or enter it. Those that enter are lost from the fusion fuel and degrade the material properties. It is therefore very important to understand the nature of plasma-material interactions.
After decades of testing different materials, the fusion community has reached a consensus that out of all possible elements, tungsten (W) is the one material that is not prohibitively expensive yet seems able to withstand the fusion plasma environment well. This is due to its high melting point, which gives the best possible tolerance against heating damage, and its high cohesive energy, which makes particle-induced erosion unlikely. However, like any other material, W has the disadvantage that energetic hydrogen particles may enter it. This can be a major problem from at least two points of view: the hydrogen that has entered the material is lost from the fusion fuel, and it may degrade the material properties. Because of these issues, intensive research is ongoing into the behaviour of hydrogen (H) and its isotopes deuterium (D) and tritium (T) in W.
One of the basic key questions to solve is how much hydrogen will be retained in W under plasma-facing conditions. In a collaboration between the Max Planck Institute for Plasma Physics in Germany, the Culham Science Centre in Oxfordshire, UK, and our University, we have recently combined experimental and simulation efforts to understand the limits of D retention. In these experiments, W samples were first irradiated with high-energy 20 MeV W ions to mimic neutron damage in a fusion reactor, and then exposed to a high flux of low-energy (~10 eV) D ions from a plasma, corresponding closely to the conditions in fusion reactors such as ITER, under construction in France, and the future DEMOnstration power plant. In our group, we modelled the damage buildup by a combination of the simplified “CRA” (creation-relaxation algorithm) and full molecular dynamics (MD) simulations. This combination, developed in our group, was shown to reproduce realistic radiation damage features even for irradiations to very high damage levels (measured in “displacements per atom”, dpa, a special unit for the energy deposited in nuclear collisions). We found that “cascade annealing” in particular, where heavily damaged structures generated by CRA are bombarded with full MD impacts, solves the problem within limited CPU resources and human time. The concentration of D retained in the material was measured experimentally with nuclear reaction methods, and determined in the simulations via an analysis of how vacancies are filled with D atoms.
Both the experimental and simulation results show that while the D concentration initially grows rapidly, after a dose of about 0.1 dpa it saturates at a reasonable level of about 1.6%. This is a very encouraging result for fusion reactor design, as it indicates that the amount of hydrogen isotopes (D and T) lost to the wall material will not rise limitlessly. Moreover, the simulations explain the reason for the saturation: in short, it can be understood from our earlier observation in systems without D that when neutron-induced collision cascades in metals overlap previous damage, they inherently anneal some of the pre-existing damage. This causes the damage level (vacancy- and interstitial-like defects) to saturate after an irradiation dose of about 0.1 dpa. Since the D ions are retained mainly in vacancy-like defects, this damage saturation also limits the D level. The really good news is that our explanation shows this to be an inherent physical effect not strongly dependent on the details of the material properties, so one can trust that it will operate in any heavy metal placed in a fusion reactor.
Figure 1. Deuterium (D) concentration in W, determined by experiment and by computer simulations of how deuterium penetrates the W wall material in fusion reactors. The agreement between simulations and experiments is remarkably good considering that the simulations involve no fitting to the experimental data. “CRA” is a simplified simulation model, while the MD and CRA-MD results are state-of-the-art molecular dynamics simulations from the University of Helsinki Department of Physics. The experimental data are from our collaborators at the Max Planck Institute for Plasma Physics in Germany. The results show that hydrogen isotope concentrations saturate in W, a very important insight for fusion reactor development, as it implies that the hydrogen fuel will not be limitlessly absorbed into the reactor walls. Note that the displacement damage x-axis is split into logarithmic and linear halves in order to emphasize the saturation level in the high-dose limit.
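The saturation behaviour can be captured by a minimal rate equation: cascades create defects at a constant rate, while cascade overlap anneals pre-existing damage in proportion to the current defect content. The rates below are illustrative choices tuned to reproduce the quoted ~1.6% level and ~0.1 dpa dose scale; they are not values fitted in the paper.

```python
import numpy as np

def defect_concentration(dose, creation_rate=16.0, anneal_rate=10.0):
    """Toy rate equation dN/d(dose) = a - b*N: cascades create defects at a
    constant rate a, while overlap with pre-existing damage removes them at
    rate b*N. The solution saturates at a/b over a dose scale of 1/b.
    Rates here are illustrative, chosen so that a/b = 1.6 (percent) and
    1/b = 0.1 dpa, matching the quoted numbers only qualitatively."""
    a, b = creation_rate, anneal_rate
    return (a / b) * (1.0 - np.exp(-b * np.asarray(dose, dtype=float)))

doses = np.array([0.01, 0.1, 1.0, 10.0])  # displacement dose in dpa
print(defect_concentration(doses))        # approaches a/b = 1.6 at high dose
```

Since retained D scales with the vacancy-type defect content, the same exponential approach to a plateau also describes the measured D concentration.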
D. R. Mason, F. Granberg, M. Boleininger, T. Schwarz-Selinger, K. Nordlund and S. L. Dudarev, Parameter-free quantitative simulation of high dose microstructure and hydrogen retention in ion-irradiated tungsten, Phys. Rev. Mater. 5, 095403 (2021).
K. Nordlund, S. J. Zinkle, A. E. Sand, F. Granberg, R. S. Averback, R. Stoller, T. Suzudo, L. Malerba, F. Banhart, W. J. Weber, F. Willaime, S. Dudarev, and D. Simeone, Improving atomic displacement and replacement calculations with physically realistic damage models, Nature Communications 9, 1084 (2018).
F. Granberg, K. Nordlund, M. W. Ullah, K. Jin, C. Lu, H. Bei, L. M. Wang, F. Djurabekova, W. J. Weber, and Y. Zhang, Mechanism of radiation damage reduction in equiatomic multicomponent single phase alloys, Phys. Rev. Lett. 116, 135504 (2016).
Laurent Forthomme, Tiina Naaroja, Fredrik Oljemark, Kenneth Österberg, Heimo Saarikko & Jan Welti
The building blocks of matter, the quarks, carry a property called “colour” and form bound states whose total colour is “neutral”, either by combining all three colours or by a colour cancelling against the corresponding anti-colour carried by an antiquark. Good examples of such bound states are protons and neutrons. Quarks are bound together by the exchange of gluons. Gluons can also bind to one another, forming glueballs. Discovering glueballs is not easy, as one must ensure that no quarks are involved in either the production or the decay of the glueball. So far there is no firm experimental evidence for glueballs.
TOTEM is an experiment at CERN’s Large Hadron Collider (LHC) focusing on elastic scattering, where the two protons merely scatter (“change their direction”) slightly in the collision. The distinct feature of TOTEM is its ability to measure protons with very small scattering angles, corresponding to distances of only a few mm from the outgoing beam, far (> 200 m) from the collision point.
Elastic proton-proton scattering has been described by the exchange of the “Pomeron”, a 2-gluon combination, where the gluon colours cancel each other, see Figure 1. However, already 50 years ago, the existence of an “Odderon”, corresponding to a 3-gluon combination, was predicted. Contrary to the Pomeron, the Odderon interacts differently with the proton and its antiparticle, the antiproton.
Figure 1: Pomeron and Odderon exchange between protons (p) or a proton and an antiproton (p with bar) depicted. The wiggly lines are gluons. The direction of time goes from left to right.
Comparing elastic proton-proton collisions at the LHC with elastic proton-antiproton collisions at the same energy at Fermilab’s Tevatron collider, the TOTEM and D0 collaborations observed that at certain scattering angles the probabilities of proton-proton and proton-antiproton scattering differ significantly, see Figure 2. The only viable explanation is that instead of exchanging Pomerons, the two colliding particles were exchanging Odderons.
Figure 2: Comparison of the elastic proton-proton and proton-antiproton interaction probabilities as a function of the momentum transfer squared, |t|, which is proportional to the proton/antiproton scattering angle squared. Reproduced under a Creative Commons 4.0 license from Physical Review Letters 127, 062003 (2021).
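The C-odd nature of the Odderon can be sketched schematically: writing the elastic amplitude as the sum of a C-even (Pomeron-like) part and a C-odd (Odderon-like) part that flips sign between pp and p̄p, the two cross sections differ by an interference term. The amplitudes below are purely illustrative toy functions, not fits to any data.

```python
import numpy as np

t = np.linspace(0.2, 1.0, 5)          # |t| values in GeV^2 (illustrative)
A_even = 1.00 * np.exp(-4.0 * t)      # C-even (Pomeron-like) amplitude, toy form
A_odd = 0.15 * np.exp(-2.0 * t)       # C-odd (Odderon-like) amplitude, toy form

# The C-odd part flips sign between pp and pbar-p scattering
sigma_pp = (A_even + A_odd) ** 2
sigma_ppbar = (A_even - A_odd) ** 2
print(sigma_pp - sigma_ppbar)         # interference term 4 * A_even * A_odd
```

Without the C-odd term the two cross sections would coincide; any significant pp vs p̄p difference at fixed |t| therefore signals Odderon exchange, which is the logic of the TOTEM-D0 comparison.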
The Odderon is not an ordinary particle but instead a compound of gluons sufficiently bound together to be exchanged between two protons (or a proton and an antiproton) without the gluons of the Odderon interacting individually with the building blocks of the proton, the quarks and the gluons. After the Pomeron, it is only the second object made up only of gluons ever observed.
Ari-Pekka Honkanen & Simo Huotari
Specialized synchrotron-light-based methods such as x-ray Raman scattering (XRS) spectroscopy can reveal the chemistry of elements under harsh conditions, such as inside an operating battery or during in-situ chemical catalysis.
We combined the power of XRS spectroscopy and x-ray diffraction (XRD) at beamline ID20 of the European Synchrotron Radiation Facility (ESRF), gaining insights into the chemistry of cobalt and carbon by observing cobalt carbide (Co2C) formation during the Fischer-Tropsch synthesis (FTS) reaction. The observations were made by measuring the core-electron excitation spectra of the Co L2,3-edges and the C K-edge in a Co/TiO2 catalyst.
Cobalt-based FTS catalysts are among the most relevant catalysts for industrial applications. Controversy exists, however, regarding the role of the cobalt oxides and carbides during the reaction. Co2C is suspected to play a key role in the deactivation of the catalyst as a degradation mechanism. While some studies indeed correlate Co2C with the deactivation of the catalyst, others associate it with higher selectivity toward lower olefins or regard it as an intermediate species during the reaction. This work focused on the formation process of cobalt carbides at relevant temperatures and pressures.
We could clearly follow the formation of the cobalt carbide as a function of time and of position along the capillary-based reactor bed. Since Co2C is unstable outside the reaction environment, we could observe its spectroscopic fingerprint in the cobalt and carbon spectra for the first time under these operando conditions.
To maximise Co2C formation, a carburisation experiment was performed in which the catalyst was exposed to carbon monoxide gas. The results are shown in Figure 1.
Figure 1. a) In-situ Co L2,3-edges for the control experiment (carburisation reaction): the spectrum of metallic Co in red and the spectrum after carburisation in black. b) In-situ C K-edge for the control experiment: the spectrum at the beginning of the carburisation reaction in blue and the carburised spectrum in dark red. c) In-situ XRD patterns collected during the control experiment. Reaction time increases from bottom to top, and the last pattern corresponds to a rehydrogenation step. Diffraction peaks of fcc-Co are marked with “%”, hcp-Co with “&”, Co2C with “#”, and boron nitride with “$”.
S. Huotari et al., A large-solid-angle X-ray Raman scattering spectrometer at ID20 of the European Synchrotron Radiation Facility, Journal of Synchrotron Radiation 24, 521 (2017).
J. Moya-Cancino et al., In situ X-ray Raman scattering spectroscopy of the formation of cobalt carbides in a Co/TiO2 Fischer-Tropsch synthesis catalyst, ACS Catalysis 11, 809-819 (2021).
I. C. ten Have and B. M. Weckhuysen, Chem Catal. 1, 339-363 (2021).
Elina Keihänen & Hannu Kurki-Suonio and the Euclid Consortium
Euclid is a cosmology mission of the European Space Agency. Euclid will study the “Dark Energy Question” — why is the expansion of the Universe accelerating, and what is the nature of the dark energy causing this? To this end, Euclid will survey over one third of the sky, obtaining images of over a billion galaxies and tens of millions of galaxy spectra. Euclid is a 1.2-meter wide-field space telescope with two instruments, NISP (Near Infrared Spectrometer and Photometer) and VIS (an imager at visible wavelengths). The Euclid Consortium will use the observations to determine the 3-dimensional distribution of galaxies and dark matter in the Universe, compare their statistics to cosmological models, and thus constrain the law of gravity and the dark energy equation of state. Euclid will be launched in 2023 and will make observations for 6 years.
The analysis of Euclid data is divided among nine Euclid Science Data Centers (SDCs). We operate one of them, SDC-FI, in the national CSC Kajaani Data Center. In 2021 we participated in Euclid Science Challenge 8, where the current version of the Euclid data analysis pipeline was tested; the challenge represented a major upgrade in the maturity of the Euclid pipeline. We also participated in the Operational Rehearsal to demonstrate the ability of the SDC infrastructure to process the continuous data flow from the satellite. In addition, we contributed to the development and validation of the code that produces simulated NISP data, and to the production of the simulated VIS data.
Together with an international team we are developing the 2PCF code, which is used to estimate one of the main cosmology products of Euclid: the 2-point correlation function of the galaxy distribution. In addition, within the Euclid Theory Working Group we continued preparing forecasts of Euclid’s constraining power on early-universe models.
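The idea behind a 2-point correlation function estimator can be sketched on a small mock catalog. The snippet below uses the standard Landy-Szalay form as an illustration; it is not the actual Euclid 2PCF code, and the catalog sizes and separation bins are arbitrary choices. For unclustered data the estimate should scatter around zero.

```python
import numpy as np

rng = np.random.default_rng(2)

def pair_counts(a, b, bins):
    """Histogram of pair separations between point sets a (N,3) and b (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    if a is b:
        d = d[np.triu_indices(len(a), k=1)]   # count each pair once, no self-pairs
    else:
        d = d.ravel()
    return np.histogram(d, bins=bins)[0].astype(float)

def landy_szalay(data, rand, bins):
    """xi = (DD - 2 DR + RR) / RR, with counts normalised per available pair."""
    nd, nr = len(data), len(rand)
    dd = pair_counts(data, data, bins) / (nd * (nd - 1) / 2)
    rr = pair_counts(rand, rand, bins) / (nr * (nr - 1) / 2)
    dr = pair_counts(data, rand, bins) / (nd * nr)
    return (dd - 2.0 * dr + rr) / rr

# Unclustered mock "galaxies" and a random catalog in a unit box
data = rng.uniform(0.0, 1.0, size=(300, 3))
rand = rng.uniform(0.0, 1.0, size=(600, 3))
bins = np.linspace(0.1, 0.5, 9)
xi = landy_szalay(data, rand, bins)
print(np.round(xi, 3))
```

A clustered galaxy catalog would instead give xi > 0 at small separations; the random catalog is what handles the survey geometry and edge effects in this estimator.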
Jonathan Lasham, Amina Djurabekova, Outi Haapanen & Vivek Sharma
Energy plays a central role in our lives. Due to the continuously increasing demand for energy, there is a quest to develop novel methods of energy generation and storage. In this vein, we have a lot to learn from microorganisms that generate energy through enzyme-catalyzed protonic currents. One such large enzyme is respiratory complex I, which transfers electrons from the substrate NADH to quinone and couples this reaction to proton pumping across a biological membrane. In this work, structural biology and computer simulations were applied to obtain a deeper understanding of biological energy conversion by respiratory complex I. The high-resolution structure revealed the positions of water molecules in the protein interior; these water molecules, together with amino acid residues, can catalyze long-range proton transfer. Atomistic molecular dynamics simulations were performed by Jonathan Lasham, Amina Djurabekova and Outi Haapanen (from Vivek Sharma’s group). These simulations allowed the identification of novel design features in the enzyme that make it highly efficient at pumping protons without failing.
The structure of respiratory complex I revealed the positions of functionally important water molecules. High-resolution dynamic insights obtained from computer simulations allowed the researchers to identify molecular valves and novel design features in the protein.
Besides energy, another cornerstone of life is health. Despite an overall understanding of diseases and of the drugs used to treat them, the molecular picture remains enigmatic. With a clearer picture of what happens at the atomic level, drugs with better efficacy and lower toxicity could be designed. In this study, researchers discovered an important role for hydrogen bonding in a mitochondrial disease mutation. They found that a single point mutation from glycine to serine perturbs the local environment of the protein, resulting in enzyme dysfunction. Molecular dynamics simulations and quantum chemical calculations performed by Jonathan Lasham (from Vivek Sharma’s group) not only provided a molecular explanation for the experimental data, but also led to novel functional insights.
A single amino acid change from glycine to serine perturbs the local environment of complex III of the mitochondrial electron transport chain, resulting in suboptimal operation of the enzyme.
Vesa-Matti Leino & Heikki Suhonen
As part of a multidisciplinary research project, we used x-ray microtomography imaging at the X-ray Laboratory to study biodegradable bone implants composed of bioactive glass S53P4. The porous scaffolds of bioactive glass had been implanted into rabbit femurs to support and enhance the regeneration of bone tissue. We assessed the rate of regeneration in the inner parts of the samples by measuring the relative areas of the gradually dissolving glass and the newly formed bone tissue. In addition, we characterized the 3D structure of the scaffold, e.g. its porosity and pore sizes. The first results of the research were published in Acta Biomaterialia in May 2021.
Together with results obtained from other biomedical imaging methods and measurements conducted on the same samples, the x-ray microtomography measurements verified that porous scaffolds of bioactive glass S53P4, in conjunction with a single-staged induced membrane technique, enable bone regeneration when the injured bone contains large defects, i.e. gaps, between the remaining bone segments. Porous, sintered scaffolds of bioactive glass S53P4 are therefore a prospective alternative to existing bone implant materials for the treatment of critical-sized diaphyseal defects. The research on this biomaterial will continue, and we expect that the measurements conducted with x-ray microtomography can be significantly refined in the future.
Figure: The porous scaffold, constructed of bioactive glass S53P4 granules sintered together into a 3D network, is at the center of the images; cross-sections of the hollow femur are visible above and below the scaffold, along with newly formed bone tissue. A metal plate attached to support the femur during healing and a metal wire holding the cylindrical implant steady cause some imaging artefacts.
Eriksson et al., S53P4 bioactive glass scaffolds induce BMP expression and integrative bone formation in a critical-sized diaphysis defect treated with a single-staged induced membrane technique, Acta Biomaterialia 126, 463-476 (2021). DOI: 10.1016/j.actbio.2021.03.035