Leading by Game-Changing Cloud, Big Data and IoT Innovations

Tony Shan





Open Source Stack: From LAMP to CHIRPS

LAMP is an acronym for an archetypal model of web-based solution stacks, originally consisting of largely interchangeable components: Linux, the Apache HTTP Server, the MySQL relational database management system, and the PHP or Python programming language. As a solution stack, LAMP is well suited to building dynamic websites and web applications.

  • Linux is a Unix-like computer operating system assembled under the model of free and open source software development and distribution.
  • The Apache HTTP Server has been the most popular web server on the public Internet.
  • MySQL is a multithreaded, multi-user, SQL database management system (DBMS).
  • PHP is a server-side scripting language designed for web development but also used as a general-purpose programming language. Python is a widely used general-purpose, high-level programming language. Python supports multiple programming paradigms, including object-oriented, imperative, functional, and procedural styles.
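The essence of the stack is a server-side script that pulls rows from the database and renders them as HTML. A minimal sketch of that pattern in Python (one of LAMP's "P" languages) follows; sqlite3 stands in for MySQL here purely so the example is self-contained, and the table and data are hypothetical:

```python
import sqlite3

# sqlite3 stands in for MySQL so the sketch runs anywhere; with MySQL the
# query pattern is the same (via a driver such as mysqlclient or PyMySQL).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts (title) VALUES (?)",
                 [("Hello LAMP",), ("Why open source stacks",)])
conn.commit()

def render_index(db):
    """Build a dynamic HTML page from database rows -- the core LAMP pattern."""
    rows = db.execute("SELECT title FROM posts ORDER BY id").fetchall()
    items = "".join(f"<li>{title}</li>" for (title,) in rows)
    return f"<html><body><ul>{items}</ul></body></html>"

html = render_index(conn)
print(html)
```

In a real deployment, Apache would invoke this logic per request (e.g. via WSGI) rather than in a standalone script, but the query-then-render flow is the same.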

LAMP has dominated the open source space for years. Despite this success, LAMP has limitations that make it a poor fit for the present Big Data era. For example, LAMP was designed primarily for on-premises solutions and cannot store unstructured or semi-structured data directly. Its processing tier for business logic does not handle data-intensive workloads well, and it lacks adequate functions for data visualization and flexible features for data presentation.

There is a strong need for a different type of stack for today's world. Here I define a new model to meet this need: Cloud, Hadoop, Impala, R, Pentaho, Spark/Storm/Solr (CHIRPS).


  • Cloud becomes the default OS for Big Data solutions. Private clouds provide secure hosting facilities. Public clouds offer subscription-based elastic runtime environments. Hybrid clouds furnish the combined benefits from both worlds. OpenStack, for example, is an open source cloud platform for private and public clouds that provides access to large pools of compute, storage and networking resources throughout a corporate IT infrastructure.
  • Hadoop provides a distributed file system to store raw data and a parallel processing framework called MapReduce. It also includes the Hive data warehouse infrastructure, the HBase columnar store, and the Mahout machine learning library, as well as other components such as Sqoop, Flume, Pig, and Ambari.
  • Impala is a parallel database query engine that offers high-performance query processing on Hadoop. Response times are claimed to be up to 90X faster than Hive's.
  • R is a programming language and software environment for statistical computing and graphics. The R language is widely used among statisticians and data miners for developing statistical software and data analysis.
  • Pentaho supplies the data ingestion (ETL) function in Kettle, and data visualization/presentation in the Business Analytics suite, such as reporting, analysis, dashboards, and workflow, as well as the Weka machine learning package.
  • Spark delivers real-time Big Data performance (claimed up to 100X faster than MapReduce) via a fast in-memory or on-disk engine for large-scale data processing. Storm provides fault-tolerant real-time processing of Big Data streams. Solr is an open source enterprise search platform, featuring powerful full-text search, hit highlighting, faceted search, near real-time indexing, dynamic clustering, database integration, rich document handling, and geo search.
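The MapReduce model at the heart of Hadoop can be illustrated with a tiny in-process sketch (a toy illustration of the programming model, not Hadoop's actual API): map emits key/value pairs, a shuffle groups them by key, and reduce aggregates each group. The canonical example is word count:

```python
from collections import defaultdict

def map_phase(doc):
    # Emit (word, 1) for every word -- the classic word-count mapper.
    for word in doc.lower().split():
        yield word, 1

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts for each word -- the reducer.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data on hadoop", "hadoop stores big data"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)
```

On a real cluster, Hadoop runs many mappers and reducers in parallel across HDFS blocks; Spark keeps the intermediate data in memory, which is where its speedup over disk-based MapReduce comes from.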

There are a variety of usage patterns and implementation styles for CHIRPS, depending on the scope and type of the solutions being constructed. For more information, please contact Tony Shan (blog@tonyshan.com). ©Tony Shan. All rights reserved.


More Stories By Tony Shan

Tony Shan works as a senior consultant and advisor at a global applications and infrastructure solutions firm, helping clients realize the greatest value from their IT. Shan is a renowned thought leader and technology visionary with many years of field experience and guru-level expertise in cloud computing, Big Data, Hadoop, NoSQL, social, mobile, SOA, BI, technology strategy, IT roadmapping, systems design, architecture engineering, portfolio rationalization, product development, asset management, strategic planning, process standardization, and Web 2.0. He has directed the lifecycle R&D and buildout of large-scale award-winning distributed systems on diverse platforms in Fortune 100 companies and the public sector, including IBM, Bank of America, Wells Fargo, Cisco, Honeywell, and Abbott.

Shan is an inventive expert with a proven track record of influential innovations such as Cloud Engineering. He has authored dozens of top-notch technical papers on next-generation technologies and over ten books that have won multiple awards. He is a frequent keynote speaker and serves as chair, panelist, advisor, judge, and organizing committee member at prominent conferences and workshops, as an editor and editorial advisory board member for IT research journals and books, and as a founder of several user groups, forums, and centers of excellence (CoE).