Speaker
Description
Many scientific codes and data-analysis algorithms are quickly becoming obsolete.
The gargantuan size of the problems we now face is the main reason for this.
On one hand, the data sets coming from both ground- and space-based observations are already much larger than in the recent past and will grow further by at least an order of magnitude. On the other hand, cutting-edge numerical simulations of various kinds require a colossal computational effort to cope with the scientific challenges and will, in turn, produce an equally colossal amount of data.
This obsolescence happens for a number of reasons: it may stem from the lack of distributed-memory capabilities (there will always be a data set that does not fit in your RAM), so that codes become memory-bound, or it may be due to foundations that are now inadequate (in terms of threading management or algorithmic implementation), so that run times skyrocket.
It is therefore becoming essential to acquire a solid understanding of how to achieve "high performance" - in its multiple meanings - on modern architectures, how to design and develop a natively parallel code, and how to assess its performance profile.
In this talk I will try to convince every sceptic of this, also discussing two case studies.