VLDB 2010, 36th International Conference on Very Large Data Bases
Singapore: 13-17 September 2010, Grand Copthorne Waterfront Hotel
 10 Year Paper Award 



Gustavo Alonso

Gustavo Alonso is a full professor at the Department of Computer Science (D-INFK) of the Swiss Federal Institute of Technology in Zurich (ETHZ), where he is a member of the Systems Group (www.systems.ethz.ch) and the Enterprise Computing Center (www.ecc.ethz.ch). He holds an engineering degree in telecommunications from the Madrid Technical University (ETSIT, Politecnica de Madrid, 1989) as well as an M.S. and a Ph.D. in Computer Science from UC Santa Barbara. Gustavo's main research interests include distributed systems, middleware, system aspects of software engineering, and data management. He has been general chair of MDM 2007 and program chair or co-chair of Middleware 2005, VLDB 2006, BPM 2007, and ICDE 2008. He is on the Board of Trustees of the VLDB Endowment and the chair of EuroSys, the European chapter of ACM SIGOPS. He is also the CTO of Zimory GmbH.

Bettina Kemme

Bettina Kemme is an Associate Professor at the School of Computer Science of McGill University, Montreal, Canada. She received her undergraduate degree from the Friedrich-Alexander University in Erlangen, Germany, and her Ph.D. from the Swiss Federal Institute of Technology in Zurich (ETHZ). She was also a visiting student at UC Santa Barbara. Bettina's research focuses on the design and development of distributed information systems, in particular on data consistency and the interplay between communication and data management. Bettina has been a PC member of many database and distributed systems conferences, such as VLDB, SIGMOD, ICDE, EDBT, Middleware, ICDCS, Eurosys, and P2P. She has served on the Editorial Board of the Encyclopedia of Database Systems (Springer) and as track co-chair of ICDE 2009. She is an area editor of Information Systems (Elsevier).

 

Replication has always been a key mechanism for achieving scalability and fault tolerance in databases. In recent years, its importance has grown even further due to its role in providing elasticity at the database layer. In all these contexts, the biggest challenge lies in offering a replication solution that provides both performance and strong data consistency. Traditionally, performance could be achieved only through lazy replication, at the expense of transactional guarantees; eager, strongly consistent approaches, in contrast, came with a performance penalty and poor scalability.
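To make the trade-off concrete, here is a minimal sketch (all names, such as Replica, eager_commit, and lazy_commit, are hypothetical; this is not code from the paper): the eager scheme installs a write at every replica before acknowledging the commit, while the lazy scheme commits locally and propagates updates in the background, opening a window in which replicas serve stale data.

```python
# Hedged sketch contrasting eager and lazy replication (illustrative names).
# Eager: every replica is updated inside the transaction boundary, so reads
# anywhere see committed data, but commit latency grows with replica count.
# Lazy: the primary commits locally and ships updates afterwards, so commits
# are fast but replicas may briefly return stale data.

import queue
import threading
import time


class Replica:
    def __init__(self, name):
        self.name = name
        self.store = {}

    def apply(self, key, value):
        self.store[key] = value


def eager_commit(primary, replicas, key, value):
    # Strong consistency: the write is installed everywhere before we ack.
    primary.apply(key, value)
    for r in replicas:
        r.apply(key, value)          # synchronous round to every replica
    return "committed"


def lazy_commit(primary, replicas, log, key, value):
    # Performance first: commit locally, propagate in the background.
    primary.apply(key, value)
    log.put((key, value))            # asynchronous propagation queue
    return "committed"


def propagator(replicas, log):
    # Background thread draining the log to the replicas (lazy scheme).
    while True:
        key, value = log.get()
        time.sleep(0.01)             # simulated delay: the staleness window
        for r in replicas:
            r.apply(key, value)


if __name__ == "__main__":
    primary = Replica("primary")
    replicas = [Replica("r1"), Replica("r2")]
    log = queue.Queue()
    threading.Thread(target=propagator, args=(replicas, log), daemon=True).start()

    eager_commit(primary, replicas, "x", 1)
    print(replicas[0].store)            # {'x': 1} immediately

    lazy_commit(primary, replicas, log, "y", 2)
    print(replicas[0].store.get("y"))   # likely None: replica still stale
    time.sleep(0.1)
    print(replicas[0].store.get("y"))   # 2 once propagation catches up
```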

A decade ago, the way out of this situation involved combining results from distributed systems and databases. The use of group communication primitives with strong ordering and delivery guarantees, combined with smart transaction handling (using snapshot isolation, transferring logs instead of re-executing updates, and keeping the message overhead per transaction constant), was a radical departure from the state of the art at the time. Today, these techniques are widely used in data centers and cloud computing. In this paper we review the context of the original work and discuss how these ideas have evolved and changed over the last 10 years.
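The following sketch illustrates, under simplifying assumptions, the style of protocol this combination enables (all names, such as WriteSet and certify_and_apply, are illustrative, not the paper's own code): a transaction executes at one replica, only its writeset is broadcast in total order, and every replica certifies that writeset deterministically against concurrently committed ones under snapshot-isolation, first-committer-wins rules, so all replicas reach the same commit or abort decision with one message per transaction and no voting round.

```python
# Hedged sketch (hypothetical names) of writeset replication over total-order
# broadcast, in the spirit of the techniques the abstract describes: one
# message per transaction, snapshot-isolation-style certification, and the
# same deterministic commit/abort decision at every replica.

from dataclasses import dataclass, field


@dataclass
class WriteSet:
    txn_id: int
    start_version: int            # snapshot the transaction read from
    writes: dict                  # key -> new value


@dataclass
class Replica:
    store: dict = field(default_factory=dict)
    version: int = 0
    history: list = field(default_factory=list)   # (commit_version, keys)

    def certify_and_apply(self, ws: WriteSet) -> bool:
        # Abort if any transaction that committed after our snapshot wrote a
        # key we also write (first-committer-wins, as under snapshot isolation).
        for commit_version, keys in self.history:
            if commit_version > ws.start_version and keys & ws.writes.keys():
                return False                       # deterministic abort
        # Apply the logged writes directly; no re-execution of the updates.
        self.store.update(ws.writes)
        self.version += 1
        self.history.append((self.version, set(ws.writes)))
        return True                                # deterministic commit


def total_order_broadcast(replicas, writesets):
    # The group communication layer delivers every writeset to every replica
    # in the same global order, so certification decisions agree everywhere.
    for ws in writesets:
        decisions = [r.certify_and_apply(ws) for r in replicas]
        assert len(set(decisions)) == 1            # all replicas agree
        print(f"txn {ws.txn_id}: {'commit' if decisions[0] else 'abort'}")


if __name__ == "__main__":
    replicas = [Replica(), Replica(), Replica()]
    total_order_broadcast(replicas, [
        WriteSet(txn_id=1, start_version=0, writes={"x": 1}),
        WriteSet(txn_id=2, start_version=0, writes={"x": 2}),  # conflict: abort
        WriteSet(txn_id=3, start_version=1, writes={"y": 7}),  # no conflict
    ])
```

Because certification is a deterministic function of the delivered order, no replica ever needs to ask the others what it decided; the total order alone keeps the copies consistent.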

