Hadoop for Enterprise
rev 7

Rajesh Nadipalli
Mar 2012
rajesh.nadipalli@gmail.com
Hadoop getting attention
•   Feb 2012: Microsoft, Hortonworks in partnership to develop Excel
    plug-in for Hadoop

•   Jan 2012: Oracle announces Big Data Appliance with Cloudera’s
    Hadoop distribution

•   Dec 2011: EMC released Unified Analytics Platform, which includes
    the Greenplum Apache Hadoop distribution

•   Oct 2011: Microsoft plans to add Hadoop support to SQL Server 2012

•   May 2010: IBM introduces Hadoop-based InfoSphere BigInsights
In this Presentation…
  Big Data – Big Opportunities
  Hadoop for Enterprise – Reference Architecture
  Map Reduce Overview
  Hive
  References
BIG DATA – BIG OPPORTUNITIES
Big Data - Business Opportunity
Enterprises today are challenged with:
   Exponential data growth
   Complex data needs: structured & unstructured
   Real-time insights with key indicators
   Heterogeneous environments: private and public clouds
   Tighter budgets and the need to do more with less

        Traditional relational databases are not able to scale
        and meet these challenges
Big Data – 4 V’s (Forrester)
Source: http://blogs.forrester.com/brian_hopkins/11-08-29-big_data_brewer_and_a_couple_of_webinars
Why Hadoop?
      Hadoop provides…
         Distributed File System
         Parallel computing across several nodes
         Support for structured and unstructured content
         Fault tolerance and linear scalability
         Open source under the Apache Foundation
         Increasing support from vendors
         Key Philosophy: “moving compute is cheaper than moving data”
Forrester regards Hadoop as the nucleus of the next-generation EDW in the cloud.
Some users of Hadoop…
http://wiki.apache.org/hadoop/PoweredBy

     • Use Hadoop to store copies of internal log and dimension data sources and use it as
     a source for reporting/analytics and machine learning.
     • Two major clusters: an 1100-machine cluster with 8800 cores and about 12 PB raw
     storage, and a 300-machine cluster with 2400 cores and about 3 PB raw storage.
     • Each (commodity) node has 8 cores and 12 TB of storage.

     • Hadoop is used to analyze search logs and do mining work on a web page database.
     • About 3000 TB handled per week; clusters vary from 10 to 500 nodes.

     • 532-node cluster (8 × 532 cores, 5.3 PB).
     • Heavy usage of Java MapReduce, Pig, Hive, HBase.
     • Used for search optimization and research.

     • 5-machine cluster (8 cores/machine, 5 TB/machine storage).
     • Existing 19-virtual-machine cluster (2 cores/machine, 30 TB storage).
     • Predominantly Hive and Streaming API based jobs (~20,000 jobs a week).
     • Daily batch ETL; log analysis; data mining; machine learning.
HADOOP REFERENCE ARCHITECTURE
Hadoop for Enterprise – Technology Stack

User Experience:   Ad-hoc queries | Notifications/Alerts | Embedded Search | Analytics
Data Access:       Hive | Pig | Excel | Datameer | R (Rhipe, RBits)
Data Processing:   MapReduce
Hadoop Data Store: HBase (NoSQL DB) | HDFS
Data Sources:      Applications (internal) | Databases | Cloud | Log Files | RSS Feeds | Others (via Sqoop)
Cross-cutting:     Zookeeper (Orchestration, Quorum) | Pentaho (Scheduling, Integrations)
Hadoop for BI – Reference Architecture

Data sources (RDBMS, Excel, XML, JSON, Binary, CSV, Log, Java Objects) feed into the
Hadoop distributed computing environment: an N-node scalable cluster running MapReduce
over the Hadoop File System (HDFS). Results are imported into an RDBMS and consumed by
enterprise apps: dashboards, ERP, and other enterprise applications.
Oracle’s Big Data Solution
Source: http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/wp-big-data-with-oracle-521209.pdf?ssSourceSiteId=ocomen

• Oracle sees Hadoop as good for unstructured sourcing and map reduce.
• It recommends using the Oracle database for the final analysis stage.
• Oracle Data Integrator can issue Hive queries (ETL).
• Oracle has a wrapper on top of Sqoop called Oraoop (see references).
DATA PROCESSING
Hadoop MapReduce Overview

Input data from HDFS is split into blocks spread across the cluster nodes. Each node runs
a Map task over its local split; the intermediate output is shuffled to Reduce tasks on
other nodes, which aggregate it into the final results.
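The split → map → shuffle → reduce flow for a word count can be sketched in plain Python (a toy simulation for intuition; this is not the Hadoop API):

```python
from collections import defaultdict
from itertools import chain

def map_phase(split):
    # Each node emits (word, 1) pairs for the lines in its local split.
    return [(word, 1) for line in split for word in line.split()]

def shuffle(mapped):
    # Between map and reduce, intermediate pairs are grouped by key.
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce tasks aggregate each key's values into the final result.
    return {key: sum(values) for key, values in groups.items()}

# Data is split across three "nodes", each mapped independently.
splits = [["big data"], ["big hadoop"], ["data data"]]
mapped = chain.from_iterable(map_phase(s) for s in splits)
result = reduce_phase(shuffle(mapped))
print(result)  # {'big': 2, 'data': 3, 'hadoop': 1}
```

In real Hadoop the map tasks run where the data blocks live, which is the “moving compute is cheaper than moving data” philosophy in action.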
Map Reduce Tips
   The first step is to understand what data you have and how to
    feed it into the Hadoop distributed computing environment.

   Use distributed applications to provide analytics over massive
    data sets while simultaneously surfacing opportunities.

   Hadoop stores your information for future queries, enhancing the
    exploratory capabilities (as well as historical reference) of
    your data.
DATA STORE
HDFS
   Distributed file system consisting of
    ◦ A single “Namenode”, which holds the metadata
    ◦ Several “Datanodes”
 Designed to run on commodity hardware
 Data gets imported as blocks (64 MB)
 These blocks are replicated (typically 3 copies) to protect
  against hardware failures
 Access via Java APIs or the hadoop command line ($hadoop fs…)
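A back-of-the-envelope illustration of the block model (the 64 MB block size and 3× replication are the typical defaults cited above; the function is illustrative):

```python
import math

BLOCK_MB = 64       # HDFS block size from the slide
REPLICATION = 3     # typical replication factor from the slide

def hdfs_footprint(file_mb):
    """Blocks needed for a file, and raw storage consumed after replication."""
    blocks = math.ceil(file_mb / BLOCK_MB)   # a partial last block still counts
    raw_mb = file_mb * REPLICATION           # every byte is stored 3 times
    return blocks, raw_mb

blocks, raw = hdfs_footprint(1000)  # a 1000 MB file
print(blocks, raw)  # 16 3000
```

This is why raw cluster capacity (like the 12 PB figure earlier) is roughly 3× the usable data size.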
HDFS architecture
Source: http://hadoop.apache.org/common/docs/current/hdfs_design.html

The next Hadoop revision has a failover Namenode called “Avatar”
HBase
   Distributed, column-oriented database (NoSQL)
   Failure-tolerant
   Low latency
   HDFS aware
   Access via Java APIs or REST APIs
   It is not a replacement for an RDBMS
   Recommended to use HBase when
    ◦ Data is searched by key (or key range)
    ◦ Data does not conform to a schema (for instance, if you have
      attributes that change by record)
HBase Architecture

Zookeeper sits above the HBase Master (with an Avatar failover of the master), which
manages multiple Region Servers.

    Zookeeper maintains quorum and knows which server is the master
    Master keeps track of regions and region servers
    Region servers store table regions
HBase Column Storage
HBase stores data as tags for a key; for example:

Row         Column Family   Column         Cell
Star Wars   Cast            Cast:Actor1    Harrison Ford
                            Cast:Actor2    Carrie Fisher
            Reviews         Review:IMDB    Review URL
                            Review:ET      Review URL2
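One way to picture this layout (a hypothetical in-memory sketch, not the HBase API) is a nested map of row key → column family → qualifier → cell:

```python
# Hypothetical nested-dict model of HBase's row / family / qualifier / cell layout.
table = {
    "Star Wars": {
        "Cast":    {"Actor1": "Harrison Ford", "Actor2": "Carrie Fisher"},
        "Reviews": {"IMDB": "Review URL", "ET": "Review URL2"},
    }
}

def get_cell(row, family, qualifier):
    # Lookup by key: the access pattern HBase is optimized for.
    return table[row][family][qualifier]

print(get_cell("Star Wars", "Cast", "Actor1"))  # Harrison Ford
```

Note how a row can carry any set of qualifiers per family, which is why HBase suits data that does not conform to a fixed schema.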
DATA ACCESS
Hive Overview
 Data warehouse software built on top of Hadoop
 HiveQL provides a SQL-like interface and runs each query as a
  map reduce job
 Provides structure to HDFS data, similar to an Oracle external
  table
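The external-table idea, structure applied to raw HDFS files at read time, can be sketched like this (a toy illustration of schema-on-read, not Hive's implementation; the schema and data are made up):

```python
import csv
import io

# Hypothetical schema declared over raw delimited text, external-table style:
# the data stays in plain files; column structure is applied only when reading.
SCHEMA = ("user", "page", "hits")

def read_table(raw_text, schema=SCHEMA):
    # Parse each delimited line into a record keyed by the declared columns.
    reader = csv.reader(io.StringIO(raw_text))
    return [dict(zip(schema, row)) for row in reader]

raw = "alice,/home,3\nbob,/faq,1"
rows = read_table(raw)
print(rows[0]["page"])  # /home
```

A HiveQL query over such a table would then be compiled into map reduce jobs that scan the underlying files.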
Hive Architecture

The Hive CLI supports browse and query operations. A HiveQL query goes through the
Parser, which consults the Hive Metastore, and then to Execution, which runs it as
Map Reduce jobs using SerDes over HDFS.
Pig Overview
 Pig is a layer on top of map-reduce for statisticians (and
  programmers)
 It provides several standard operators: join, order by, etc.
 It allows user-defined functions (UDFs) to be included
 Java and Python are supported for UDFs
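The standard operators mentioned above map onto familiar relational operations; here is a sketch in Python of what a join followed by an order-by computes (an intuition aid with made-up relations, not Pig Latin):

```python
# Toy relations: (user, page) clicks and user -> age profiles.
clicks = [("alice", "/home"), ("bob", "/faq"), ("alice", "/buy")]
ages = {"alice": 30, "bob": 25}

# JOIN clicks BY user WITH ages BY user: attach each user's age to their clicks.
joined = [(user, page, ages[user]) for user, page in clicks if user in ages]

# ORDER joined BY age: sort the joined relation on the age column.
ordered = sorted(joined, key=lambda row: row[2])

print(ordered[0])  # ('bob', '/faq', 25)
```

In Pig these two steps would each compile down to map reduce stages, just as with Hive.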
Datameer Overview
Source: http://www.datameer.com/

Key philosophy: Business users understand Excel; let them do the
  grouping, sorting, filtering, and aggregates

Key Steps:
 Datameer’s source is a mapreduce output.
 Datameer takes a quick sample of 5000 records.
 The end user is then presented an Excel-like interface on top of
  these 5000 records. This is where end users can define their
  filters, formulas, groupings, aggregations, and joins across
  sheets (even joining hadoop data with data from a relational
  database table).
 Once the end user has defined what they want as the end result,
  they can submit a job to run on the complete dataset.
 Datameer then builds the necessary map reduce jobs and runs them
  on the complete data set.
 Finally the user gets the results and can build charts, tables,
  etc. – all in the browser.
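The sample-first, run-full-later workflow can be sketched as follows (the 5000-record sample size comes from the slide; the function names and the toy "job" are illustrative):

```python
import random

SAMPLE_SIZE = 5000  # Datameer's quick-sample size, per the slide

def quick_sample(records, k=SAMPLE_SIZE, seed=42):
    # Iterate on logic against a small random sample for fast feedback.
    rng = random.Random(seed)
    return rng.sample(records, min(k, len(records)))

def job(records):
    # Stand-in for the user-defined filters/aggregations (here: count evens).
    return sum(1 for r in records if r % 2 == 0)

data = list(range(100_000))
sample = quick_sample(data)
preview = job(sample)   # interactive preview on 5000 records
full = job(data)        # full run once the logic is settled
print(full)  # 50000
```

The point of the pattern is that the expensive full pass over the complete data set happens only once, after the logic has been validated interactively.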
Excel Integration
Source: http://www.informationweek.com/news/software/info_management/232601675?cid=RSSfeed_IWK_News

Microsoft announced Excel integration with Hadoop (Feb 2012) with
 Hortonworks

Key Highlights:
 Microsoft & Hortonworks will deliver a Hive ODBC driver that will
  enable integration with Excel
 Microsoft’s PowerPivot in-memory plug-in for Excel will handle
  larger data sets
 There is also a plan for a JavaScript framework for Hadoop
  enabling Ajax-like iterative
INTEGRATION, SCHEDULING
Pentaho Data Integration
 Pentaho is considered a “strong performer” by Forrester (Feb 2012)
 It makes building MapReduce easy via its Data Integration IDE
 It can read/write to HDFS and run map reduce and Pig scripts
 The IDE has several standard connectors and transformations, and
  allows custom Java code
 http://www.pentaho.com/big-data/
Pentaho Data Integration
Source: http://www.youtube.com/watch?v=KZe1UugxXcs&feature=player_embedded

Demo workflow: (1) build the Mapper, (2) build the Reducer, (3) run the Map Reduce job.
Talend - ETL
 Talend is another ETL development, scheduling and monitoring
  tool
 It supports HDFS, Pig, Hive, Sqoop
 http://www.talend.com/products-big-data/
Talend ETL – with Hadoop

      • Can invoke Hadoop calls (generates Hive queries)
      • See the “Processing” step on the right
USER EXPERIENCE
User Experience
This layer of the stack is generally custom development. However,
  some tools that work with Hadoop are:
 Tableau for data analysis & visualizations
 SAS Enterprise Miner
 IBM BigInsights
REFERENCES
References
   http://hadoop.apache.org/
   http://www.cloudera.com/
   http://www-01.ibm.com/software/data/bigdata/
   http://www.cs.duke.edu/starfish/index.html
   http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
   http://karmasphere.com/Download/karmasphere-studio-community-virtual-appliance-for-ibm.html
   http://www.slideshare.net/zshao/hive-data-warehousing-analytics-on-hadoop-presentation
   http://www.slideshare.net/trihug/trihug-november-pig-talk-by-alan-gates?from=ss_embed
   http://www.trihug.org/
   http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-690561.html
   http://www.cloudera.com/wp-content/uploads/2011/01/oraoopuserguide-With-OraHive.pdf
Key Hadoop Players
Source: http://wiki.apache.org/hadoop/PoweredBy
MapR
 No single point of failure for the name node
 Performance improvements (5 times faster than HDFS)
 Snapshots, multi-site copies
 Ships its own extended MapReduce
 MapR uses 8 KB blocks instead of the 64 MB block size of HDFS
Open Topics – why there are adoption issues
   Security – no concept of roles
   Backup, Recovery
   ACID not supported
Thank You to my viewers




Questions / Comments

Rajesh.nadipalli@gmail.com

Editor's Notes

  • #3 http://www.zdnet.com/blog/big-data/hadoop-20-mapreduce-in-its-place-hdfs-all-grown-up/267
    http://www.informationweek.com/news/software/info_management/231900633
    http://www.informationweek.com/news/software/info_management/232400021
    http://www.informationweek.com/news/software/info_management/232300181
    http://www-01.ibm.com/software/data/infosphere/biginsights/basic.html
    http://www.informationweek.com/news/galleries/software/enterprise_apps/232500290?pgno=1
  • #6 http://www.informationweek.com/news/software/bi/229900002?cid=RSSfeed_IWK_News
  • #7 http://www.informationweek.com/news/software/bi/229900002?cid=RSSfeed_IWK_News
  • #8 Hadoop implements the core features that are at the heart of most modern EDWs: cloud-facing architectures, MPP, in-database analytics, mixed workload management, and a hybrid storage layer
  • #9 http://wiki.apache.org/hadoop/PoweredBy
  • #11 HBase is a full-fledged database (albeit not relational) which uses HDFS as storage. This means you can run interactive queries and updates on your dataset. Sqoop takes data from any DB that supports JDBC and moves it into HDFS. If you haven't already, check out Toad® for Cloud Databases, our free, fully functional, commercial-grade cloud solution. With Toad for Cloud Databases, you can easily generate queries, migrate, browse, and edit data, as well as create reports and tables – all in a familiar SQL view. Finally, everyone can experience the productivity gains and cost benefits of NoSQL and big data – without the headaches. Toad for Cloud Databases provides unrivaled support for Apache Hive, Apache HBase, Apache Cassandra, MongoDB, Amazon SimpleDB, Microsoft Azure Table Services, Microsoft SQL Azure, and all open database connectivity (ODBC)-enabled relational databases (including Oracle, SQL Server, MySQL, DB2, and others)
  • #12 Netflix uses a similar reference architecture for movie recommendations. Hadoop is not suited for low latency. Facebook does use HBase for messaging, which is close to real-time functionality
  • #21 http://facility9.com/nosql/glossary/
  • #30 Weka read this… it is similar… Mahout is AI…