Book contents
- Frontmatter
- Contents
- List of Contributors
- 1 Data-Intensive Computing: A Challenge for the 21st Century
- 2 Anatomy of Data-Intensive Computing Applications
- 3 Hardware Architectures for Data-Intensive Computing Problems: A Case Study for String Matching
- 4 Data Management Architectures
- 5 Large-Scale Data Management Techniques in Cloud Computing Platforms
- 6 Dimension Reduction for Streaming Data
- 7 Binary Classification with Support Vector Machines
- 8 Beyond MapReduce: New Requirements for Scalable Data Processing
- 9 Let the Data Do the Talking: Hypothesis Discovery from Large-Scale Data Sets in Real Time
- 10 Data-Intensive Visual Analysis for Cyber-Security
- Index
- References
2 - Anatomy of Data-Intensive Computing Applications
Published online by Cambridge University Press: 05 December 2012
Summary
An Architecture Blueprint
As the previous chapter describes, data-intensive applications arise from the interplay of ever-increasing data volumes, complexity, and distribution. Add the need for applications to process this complex mélange of data in ever faster and more sophisticated ways, and you have an expansive landscape of specific application requirements to address.
Not surprisingly, this breadth of requirements leads to many alternative approaches to developing solutions. Different application domains also leverage different technologies, adding further variety to the landscape of data-intensive computing. Despite this inherent diversity, several model solutions for contemporary data-intensive problems have emerged in the last few years. The following briefly describes each one:
Data processing pipelines: Emerging from scientific domains, many large data problems are addressed using processing pipelines. Raw data originating from a scientific instrument or a simulation is captured and stored. The first stage of processing typically reduces the size of the data by removing noise, and then prepares it (for example, by indexing, summarizing, or marking it up) so that downstream analytics can manipulate it more efficiently. Once capture and initial processing take place, complex algorithms search and process the data, producing information and knowledge that can be digested by humans or by further computational processes. These analytics often require large-scale distribution or specialized high-performance computing platforms to execute, making the execution environment of most pipelines both distributed and heterogeneous. Finally, the analysis results are presented to users so that they can be digested and acted upon. A minimal code sketch of this staged structure follows below.
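To make the staged structure concrete, here is a minimal sketch of such a pipeline. The chapter describes the stages only abstractly, so every name here (capture, reduce_noise, index_records, analyze, present), the record format, and the noise threshold are illustrative assumptions rather than an API from the text.

```python
# Minimal sketch of a four-stage data processing pipeline, as described above.
# Stage names, the record format, and the threshold are illustrative
# assumptions, not an API from the chapter.

def capture(source):
    """Stage 1: capture raw records from an instrument or simulation feed."""
    for record in source:
        yield record

def reduce_noise(records, threshold=0.1):
    """Stage 2a: shrink the data by dropping low-signal (noisy) records."""
    for r in records:
        if r["signal"] >= threshold:
            yield r

def index_records(records):
    """Stage 2b: index the reduced data so downstream analytics can
    access it efficiently."""
    return {r["id"]: r for r in records}

def analyze(indexed):
    """Stage 3: run analytics over the reduced, indexed data
    (placeholder computation)."""
    return {rid: r["signal"] * 2.0 for rid, r in indexed.items()}

def present(results):
    """Stage 4: render analysis results for human consumption."""
    for rid, value in sorted(results.items()):
        print(f"record {rid}: {value:.2f}")

if __name__ == "__main__":
    # Stand-in for an instrument feed: five records of rising signal strength.
    raw_feed = [{"id": i, "signal": i / 10} for i in range(5)]
    present(analyze(index_records(reduce_noise(capture(raw_feed)))))
```

A real pipeline would replace these in-process generators with distributed stages (for instance, message queues or a workflow engine spanning heterogeneous platforms), but the staged dataflow shape, capture, reduce, analyze, present, is the same.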
From Data-Intensive Computing: Architectures, Algorithms, and Applications, pp. 12–23. Publisher: Cambridge University Press. Print publication year: 2012.