
Simulating the Universe

The first simulation models were created when cavemen picked up rocks or sticks to show a friend how they did something. Voodoo dolls have evolved into the mannequins used for modern medical training. Balsa-wood, rubber-band-powered airplanes have evolved into flight simulators. Puppet shows have evolved into virtual reality. Using supercomputers to create simulation models is rapidly becoming a new branch of science.

Our computer simulations now range in scope from particle physics to the cosmology of the universe. We use simulations for traffic engineering, urban planning, economic forecasting, the lifecycle of machines, organizational studies and more. We are finding a never-ending list of uses in the areas of design, education and entertainment.

Supercomputer development was originally driven by the needs of nuclear blast simulation and weather forecasting, both highly variable, complex problems. Over the years, the cost of supercomputing has decreased significantly as CPUs have become faster and cheaper. Clustering, grid computing and cloud computing have changed the way we use processing power. Advances in mathematical modeling techniques have also moved the field of modeling and simulation forward. Now, supercomputing capability is available to almost everybody in small, inexpensive slices.

The Human Brain Project, which builds on the Blue Brain Project, intends to create a simulation model of the human brain:

The human brain is an immensely powerful, energy efficient, self-learning, self-repairing computer. If we could understand and mimic the way it works, we could revolutionize information technology, medicine and society. To do so we have to bring together everything we know and everything we can learn about the inner workings of the brain’s molecules, cells and circuits. With this goal in mind, the Blue Brain team has recently come together with 12 other European and international partners to propose the Human Brain Project (HBP), a candidate for funding under the EU’s FET Flagship program. The HBP team will include many of Europe’s best neuroscientists, doctors, physicists, mathematicians, computer engineers and ethicists. The goal is to build on the work of the Blue Brain Project and on work by the other partners to integrate everything we know about the brain in massive databases and in detailed computer models. This will require breakthroughs in mathematics and software engineering, an international supercomputing facility more powerful than any before and a strong sense of social responsibility.

One of the tools being developed at FuturICT is the Living Earth Simulator, which will combine techno-socio-economic simulations to offer future-scenario predictions for the entire planet.

Living Earth Simulator
The Living Earth Simulator will enable the exploration of future scenarios at different degrees of detail, integrating heterogeneous data and models and employing a variety of theoretical and modelling perspectives, such as sophisticated agent-based simulations, multi-level mathematical models, and new empirical and experimental approaches. Ideas from complexity science will be compared with graph theoretic approaches and other techniques based on concepts from statistical physics. Exploration will be supported via a ‘World of Modelling’ – an open software platform, comparable to an app-store, to which scientists and developers can upload theoretically informed and empirically validated modelling components that map parts of our real world. This will require the development of interactive, decentralised, scalable computing infrastructures, coupled with access to huge amounts of data. Large-scale simulations and hybrid modelling approaches will require supercomputing capabilities that will be delivered by several of Europe’s leading supercomputing centres.
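
To make the agent-based simulations mentioned above concrete, here is a minimal, hypothetical sketch in Python of the kind of modelling component such a platform could host. The toy model (random pairwise wealth exchange among agents) and every name in it are illustrative assumptions, not part of FuturICT or the Living Earth Simulator.

```python
# Minimal agent-based simulation sketch (illustrative only).
# Agents repeatedly hand one unit of "wealth" to a randomly chosen
# partner -- a classic econophysics toy -- to show the basic update
# loop that larger techno-socio-economic models share.
import random

def run_exchange_model(num_agents=1000, steps=100_000, seed=42):
    rng = random.Random(seed)
    wealth = [1.0] * num_agents          # every agent starts with one unit
    for _ in range(steps):
        giver = rng.randrange(num_agents)
        taker = rng.randrange(num_agents)
        if wealth[giver] >= 1.0:         # agents cannot go negative
            wealth[giver] -= 1.0
            wealth[taker] += 1.0
    return wealth

if __name__ == "__main__":
    final_wealth = run_exchange_model()
    print("richest agent holds:", max(final_wealth))
    print("agents left with nothing:", sum(1 for w in final_wealth if w == 0))
```

Even this toy illustrates the pattern behind the 'World of Modelling' idea: a self-contained component with explicit parameters that could, in principle, be uploaded, validated and coupled to other components.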

The Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project is working toward the world's first exascale computer by 2018:

Tomorrow’s HPC systems: The Exascale Challenge

Having demonstrated a small number of scientific applications running at the petascale, the nature of the HPC community, particularly the hardware community, is to look to the next challenge. In this case the challenge is to move from 10^15 flop/s to the next milestone of 10^18 flop/s – an exaflop. Hence the exascale challenge that has been articulated in detail at the global level by the International Exascale Software Project and in Europe by the European Exascale Software Initiative. Many of the partners in CRESTA are leading members of one or both of these initiatives.

In tackling the delivery of an exaflop/s, formidable challenges exist not just in scale (such systems could have over a million cores) but also in reliability, programmability, power consumption and usability (to name a few).

The timescale for demonstrating the world’s first exascale system is estimated to be 2018. From a hardware point of view we can speculate that such systems will consist of:

Large numbers of low-power, many-core microprocessors (possibly millions of cores)
Numerical accelerators with direct access to the same memory as the microprocessors (almost certainly based on evolved GPGPU designs)
High-bandwidth, low-latency novel topology networks (almost certainly custom-designed)
Faster, larger, lower-powered memory modules (perhaps with evolved memory access interfaces)
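
As a rough sanity check on the scale implied by those figures, the back-of-envelope sketch below estimates how many processing units an exaflop machine would need. The per-unit performance numbers are illustrative assumptions, not CRESTA projections.

```python
# Back-of-envelope core-count estimate for a 10^18 flop/s machine.
TARGET_FLOPS = 1e18  # one exaflop/s

scenarios = [
    ("conventional core at ~10 GFLOP/s sustained", 10e9),
    ("accelerated node at ~1 TFLOP/s sustained", 1e12),
]

for label, per_unit in scenarios:
    units_needed = TARGET_FLOPS / per_unit
    print(f"{label}: about {units_needed:,.0f} units")
# Under either assumption the machine runs to millions of cores or
# nodes, which is why reliability, programmability and power become
# first-order design problems rather than afterthoughts.
```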

Our simulation models have evolved in complexity to the point where they are becoming the equivalent of laboratory research.

REFERENCES:
The Blue Brain Project EPFL

FuturICT

The Exascale Challenge

First-ever model simulation of the structuring of the observable universe

Supercomputer Simulation Shows for the First Time How a Milky Way-Like Galaxy Forms