Aug 26–30, 2013
KIT Campus North, FTU
Europe/Berlin timezone

Session

Plenary talks

Aug 26, 2013, 2:00 PM
KIT Campus North, FTU

  1. Prof. Hartmut Schmeck (KIT, COMMputation)
    8/26/13, 2:00 PM
  2. Prof. Achim Streit (KIT-SCC)
    8/26/13, 2:05 PM
  3. Dr Pavel Weber (SCC-KIT)
    8/26/13, 2:20 PM
  4. Dr Peter Kunszt (ETH Zürich)
    8/26/13, 2:30 PM
    Big Data and Large Storage Systems
    Today, especially in the natural sciences, computers have become indispensable tools and instruments for research. Thanks to recent progress in digital measurement technology, researchers now acquire vast amounts of data in ALL domains of science. Not only the amount of data, but also its complexity is continuously increasing. On top of that, the data needs to be shared within large...
  5. Lukasz Janyst (CERN)
    8/26/13, 4:00 PM
    Big Data and Large Storage Systems
    According to IDC forecasts, Big Data-related IT spending is set to rise 40% each year between 2012 and 2020, and the total amount of information stored world-wide will roughly double every two years. This means that the so-called digital universe will explode from 2.8 zettabytes in 2012 to 40 ZB, or 40 trillion GB, in 2020. This is more than 5200 gigabytes for every man, woman, and child alive in...
  6. Leif Nixon (Linköping University)
    8/26/13, 4:40 PM
    Cloud&Grid Technologies
    The European Grid Infrastructure (EGI, http://egi.eu/) is a distributed environment, spanning roughly 270,000 logical CPUs, 140 PB of disk, and 130 PB of tape storage at 352 sites in 54 countries. More than 20,000 users, organised in more than 200 virtual organisations, from all over the world are currently running approximately 1.4 million jobs per day using this...
  7. Dr Herbert Cornelius (INTEL)
    8/27/13, 9:00 AM
    Effective programming and multi-core computing
    As we see Moore's Law alive and well, more and more parallelism is introduced into all computing platforms and on all levels of integration and programming to achieve higher performance and energy efficiency. We will discuss the new Intel® Many Integrated Core (MIC) architecture for highly-parallel workloads with general purpose, energy efficient TFLOPS performance on a single chip. This also...
  8. Mirko Kämpf (Cloudera)
    8/27/13, 9:40 AM
    Big Data and Large Storage Systems
    A Hadoop cluster is the tool of choice for many large scale analytics applications and a large variety of commercial tools is available for Data Warehouses and for typical SQL-like applications. But how to deal with networks and time series? How to collect data for complex systems studies and what are good practices for working with libraries like Mahout and Giraph? The sample use case...
  9. Christoph Fehling (Uni Stuttgart)
    8/27/13, 10:50 AM
    Cloud&Grid Technologies
    The functionality found in different products in the cloud computing market today is often similar, but hidden behind different product names and other provider-specific terminology. We analyzed this multitude of cloud-related products to extract the common underlying behavior as well as the common architectural best practices that developers using these cloud technologies should follow. The...
  10. Axel Koehler (NVIDIA)
    8/27/13, 11:30 AM
    Effective programming and multi-core computing
    Computational researchers, scientists and engineers are rapidly shifting to computing solutions running on GPUs, as this offers significant advantages in performance and energy efficiency. This presentation will provide a short overview of GPU computing and NVIDIA's parallel computing platform. It will show how features of the latest Kepler GPU architecture (e.g. Hyper-Q, GPU-aware MPI...
  11. Dr Urban Liebel (Accelerator-lab)
    8/28/13, 9:00 AM
    Big Data and Large Storage Systems
    Modern robotic microscopy platforms (high-content screening platforms) are ideal instruments for large-scale genome studies. The image-based readouts often generate data sets of tens of terabytes per single experiment. Tens of thousands of experiments are waiting to be done in the coming years in hundreds of labs worldwide. Besides cell-based assays, transgenic model organisms like zebrafish or drosophila allow...
  12. Dr Benedikt Hegner (CERN)
    8/28/13, 9:40 AM
    Effective programming and multi-core computing
    Even though the miniaturization of transistors on chips continues as predicted by Moore's law, computer hardware is starting to face scaling issues, the so-called performance 'walls'. Probably the best known is the 'power wall', which limits clock frequencies. The best way to increase processor performance is now to increase the parallelism of the architecture. Soon standard CPUs will...
  13. Dr Jose Luis Vazquez-Poletti (Universidad Complutense de Madrid (Spain))
    8/29/13, 9:00 AM
    Cloud&Grid Technologies
    Like other tools that Humanity has used to expand its limits, cloud computing was born and has evolved in consonance with the different challenges to which it has been applied. Thanks to its seamless provision of resources, its dynamism and its elasticity, this paradigm has been brought into the spotlight by the Space scientific community, in particular the part of it devoted to the exploration of Planet Mars....
  14. Dr Oliver Oberst (IBM)
    8/29/13, 9:40 AM
    Cloud&Grid Technologies
  15. Dr Stefan Radtke (EMC²)
    8/29/13, 10:50 AM
    Cloud&Grid Technologies
    The IT infrastructure of today's datacenters is getting more and more complex, while at the same time the demand for ease of use is changing the whole industry. Petabyte-scale datacenters don't allow traditional operations, in which administrators and technicians investigate failures for single users or applications at scale. A change towards a policy-driven architecture is required that...
  16. Dr Steve Aplin (DESY)
    8/30/13, 9:00 AM
    Big Data and Large Storage Systems
    Whilst Big Data is often characterised in terms of its volume in bytes (tera, peta, zetta), there is also the crucial aspect of the degree of complexity within the data set to consider. Such complexity means that good data management is an essential element in the creation of high-quality research data, without which the researchers who collect the data will themselves be unable to realise...
  17. Dr Stephen Burke (European Grid Infrastructure (EGI))
    8/30/13, 9:40 AM
    Cloud&Grid Technologies
    In a distributed system it's necessary to be able to get information about the available services and resources. This includes the existence and properties of Grid services and details about their current state. The information is structured according to a schema, which needs to be flexible enough to represent the variety of services in the Grid but simple enough to be usable. It is collected...
  18. Dr Bob Jones (CERN)
    8/30/13, 10:50 AM
    Cloud&Grid Technologies
    The feasibility of using commercial cloud services for scientific research is of great interest to research organisations such as CERN, ESA and EMBL, to the suppliers of cloud-based services, and to the national and European funding agencies. Through the Helix Nebula - the Science Cloud [1] initiative and with the support of the European Commission, these stakeholders are driving a two year...
  19. Dr Pavel Weber (SCC-KIT)
    8/30/13, 11:50 AM
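
The storage-growth figures quoted in Lukasz Janyst's abstract above (2.8 ZB in 2012, roughly doubling every two years, about 40 ZB by 2020) can be sanity-checked with a few lines of Python. This is only an illustrative sketch; the 2020 world-population figure below is an assumption, not taken from the abstract:

```python
# Sanity check of the IDC figures quoted in the Janyst abstract:
# 2.8 ZB stored in 2012, one doubling every two years, ~40 ZB by 2020.

zb_2012 = 2.8                      # zettabytes stored world-wide in 2012
doublings = (2020 - 2012) / 2      # one doubling per two years -> 4 doublings
zb_2020 = zb_2012 * 2 ** doublings
print(f"Projected 2020 volume: {zb_2020:.1f} ZB")   # 44.8 ZB, in line with the ~40 ZB forecast

population_2020 = 7.6e9            # assumed world population in 2020 (illustrative)
gb_per_person = 40e12 / population_2020  # 40 ZB = 40 trillion GB
print(f"Per person: {gb_per_person:.0f} GB")        # ~5263 GB, matching "more than 5200 GB"
```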