Friday 31 May 2013

Apache ZooKeeper Training @ BigDataTraining.IN

Apache ZooKeeper is an effort to develop and maintain an open-source server which enables highly reliable distributed coordination.
ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications. Each time they are implemented there is a lot of work that goes into fixing the bugs and race conditions that are inevitable. Because of the difficulty of implementing these kinds of services, applications initially usually skimp on them, which makes them brittle in the presence of change and difficult to manage. Even when done correctly, different implementations of these services lead to management complexity when the applications are deployed.

http://www.hadooptrainingchennai.in/courses/

http://www.hadooptrainingchennai.in/hadoop-training/
 


ZooKeeper is a high-performance coordination service for distributed applications. It exposes common services - such as naming, configuration management, synchronization, and group services - in a simple interface so you don't have to write them from scratch. You can use it off-the-shelf to implement consensus, group management, leader election, and presence protocols. And you can build on it for your own, specific needs.

ZooKeeper: A Distributed Coordination Service for Distributed Applications

ZooKeeper is a distributed, open-source coordination service for distributed applications. It exposes a simple set of primitives that distributed applications can build upon to implement higher level services for synchronization, configuration maintenance, and groups and naming. It is designed to be easy to program to, and uses a data model styled after the familiar directory tree structure of file systems. It runs in Java and has bindings for both Java and C.

Design Goals

ZooKeeper is simple. ZooKeeper allows distributed processes to coordinate with each other through a shared hierarchical namespace which is organized similarly to a standard file system. The namespace consists of data registers - called znodes, in ZooKeeper parlance - and these are similar to files and directories. Unlike a typical file system, which is designed for storage, ZooKeeper data is kept in-memory, which means ZooKeeper can achieve high throughput and low latency.
The ZooKeeper implementation puts a premium on high-performance, highly available, strictly ordered access. The performance aspects of ZooKeeper mean it can be used in large, distributed systems. The reliability aspects keep it from being a single point of failure. The strict ordering means that sophisticated synchronization primitives can be implemented at the client.
ZooKeeper is replicated. Like the distributed processes it coordinates, ZooKeeper itself is intended to be replicated over a set of hosts called an ensemble.
ZooKeeper is ordered. ZooKeeper stamps each update with a number that reflects the order of all ZooKeeper transactions. Subsequent operations can use the order to implement higher-level abstractions, such as synchronization primitives.
ZooKeeper is fast. It is especially fast in "read-dominant" workloads. ZooKeeper applications run on thousands of machines, and it performs best where reads are more common than writes, at ratios of around 10:1.

Data model and the hierarchical namespace

The name space provided by ZooKeeper is much like that of a standard file system. A name is a sequence of path elements separated by a slash (/). Every node in ZooKeeper's name space is identified by a path.

The ZooKeeper Data Model

ZooKeeper has a hierarchical namespace, much like a distributed file system. The only difference is that each node in the namespace can have data associated with it as well as children. It is like having a file system that allows a file to also be a directory. Paths to nodes are always expressed as canonical, absolute, slash-separated paths; there are no relative references (a short sketch after the constraints below illustrates this). Any Unicode character can be used in a path, subject to the following constraints:
  • The null character (\u0000) cannot be part of a path name. (This causes problems with the C binding.)
  • The following characters can't be used because they don't display well, or render in confusing ways: \u0001 - \u001F and \u007F - \u009F.
  • The following characters are not allowed: \ud800 - \uF8FF, \uFFF0 - \uFFFF, \uXFFFE - \uXFFFF (where X is a digit 1 - E), \uF0000 - \uFFFFF.
  • The "." character can be used as part of another name, but "." and ".." cannot alone be used to indicate a node along a path, because ZooKeeper doesn't use relative paths. The following would be invalid: "/a/b/./c" or "/a/b/../c".
  • The token "zookeeper" is reserved.
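
To make the data model concrete, here is a minimal Java sketch (not part of the ZooKeeper documentation) that assumes a server reachable at localhost:2181 and the ZooKeeper client library on the classpath. It creates a znode that holds data and also has a child, using absolute, slash-separated paths:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeExample {
    public static void main(String[] args) throws Exception {
        // Wait until the session is actually established before issuing requests.
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) connected.countDown();
        });
        connected.await();

        // A znode can hold data AND have children -- like a file that is also a directory.
        zk.create("/app1", "app data".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        zk.create("/app1/config", "timeout=30".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        System.out.println(new String(zk.getData("/app1", false, null)));  // prints: app data
        System.out.println(zk.getChildren("/app1", false));                // prints: [config]
        zk.close();
    }
}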
      

    Getting Started with ZooKeeper

    Standalone Operation

    Setting up a ZooKeeper server in standalone mode is straightforward. The server is contained in a single JAR file, so installation consists of creating a configuration file.
    Once you've downloaded a stable ZooKeeper release, unpack it and cd to the root directory.
    To start ZooKeeper you need a configuration file. Here is a sample; create it as conf/zoo.cfg:
    tickTime=2000
    dataDir=/var/lib/zookeeper
    clientPort=2181
    
    This file can be called anything, but for the sake of this discussion call it conf/zoo.cfg. Change the value of dataDir to specify an existing (empty to start with) directory. Here are the meanings for each of the fields:
    tickTime
    the basic time unit in milliseconds used by ZooKeeper. It is used to do heartbeats and the minimum session timeout will be twice the tickTime.
    dataDir
    the location to store the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database.
    clientPort
    the port to listen for client connections
    Now that you have created the configuration file, you can start ZooKeeper:
    bin/zkServer.sh start
    ZooKeeper logs messages using log4j -- more detail available in the Logging section of the Programmer's Guide. You will see log messages coming to the console (default) and/or a log file depending on the log4j configuration.
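    Once the server is running, a client can connect to it on the clientPort (2181 above). The following is a minimal Java sketch, not from the ZooKeeper documentation, assuming the ZooKeeper client library is on the classpath and the server is local. It registers an ephemeral znode so the client's presence disappears automatically when its session ends -- the building block for group membership and presence protocols:

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class PresenceExample {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            ZooKeeper zk = new ZooKeeper("localhost:2181", 4000, event -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) connected.countDown();
            });
            connected.await();

            // Hypothetical parent znode for the group; create it once if missing.
            if (zk.exists("/members", false) == null) {
                zk.create("/members", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            }
            // EPHEMERAL_SEQUENTIAL znodes vanish when the session closes and get a unique suffix.
            String me = zk.create("/members/worker-", new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
            System.out.println("Registered as " + me);
            System.out.println("Current members: " + zk.getChildren("/members", false));
            zk.close();  // session ends; the ephemeral node is removed automatically
        }
    }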
    The steps outlined here run ZooKeeper in standalone mode. There is no replication, so if the ZooKeeper process fails, the service will go down. This is fine for most development situations, but to run ZooKeeper in replicated mode you need to configure an ensemble of servers, as described in the Running Replicated ZooKeeper section of the ZooKeeper documentation.

    Managing ZooKeeper Storage

    For long-running production systems, ZooKeeper storage (the dataDir snapshots and the transaction logs) must be managed externally; old snapshots and log files are not removed by default.


    In machine learning and pattern recognition, a feature is an individual measurable heuristic property of a phenomenon being observed. Choosing discriminating and independent features is key to any pattern recognition algorithm being successful in classification. Features are usually numeric, but structural features such as strings and graphs are used in syntactic pattern recognition.
    The set of features of a given data instance is often grouped into a feature vector. The reason for doing this is that the vector can be treated mathematically. For example, many algorithms compute a score for classifying an instance into a particular category by linearly combining a feature vector with a vector of weights, using a linear predictor function.
    The concept of "feature" is essentially the same as the concept of explanatory variable used in statistical techniques such as linear regression.
     
    BigDataTraining.IN focuses on project development and professional training in Big Data and Hadoop technologies. We serve students through our academic projects.
    Machine Learning Training Chennai with POC Projects !
     
    We have proven our worth in the following areas.

    Hadoop
    Big Data
    Big Data Analytics
    Big Data & Hadoop Development solutions
    Advanced Hadoop EcoSystems Tools
    MongoDB
    Apache Cassandra
    HBase – Developer & Admin
    Sentiment Analysis
    Prediction Engine
    Recommendation Engine
    Mahout
    CouchDB
    HBase
    CouchBase
    Cloud Computing
    VMware
    Xen
    KVM
    Amazon EC2
    Eucalyptus
    Open Stack
    Android
    IOS-IPHONE
    Mobile Computing

    We assist a large number of people with our student projects and provide exposure and support to the students through our Technical Architects every year. Many scholars from various colleges and universities have benefited, and we continue to receive referrals from engineering colleges all over India.

    Learn Big Data from Big Data Solutions Architects! Hadoop Training Chennai with
    Hands-On Practical Approach! Reach us to Enroll! 100% Placements
     
    Key Features -
    Cloud Server Access
    Training = Enterprise Scale
    Advanced Technology Coverage + PoC Project Work
    24/7 Technical Support

    http://www.bigdatatraining.in/machine-learning-training/


    http://www.bigdatatraining.in/hadoop-development/training-schedule/

    Mail:
    info@bigdatatraining.in
    Call:
    +91 9789968765
    044 - 42645495

    Visit Us:
    #67, 2nd Floor, Gandhi Nagar 1st Main Road, Adyar, Chennai - 20
    [Opp to Adyar Lifestyle Super Market]




Wednesday 29 May 2013

Apache HBase Default Configuration - Learn Hadoop HBase From Experts

        Apache HBase Configuration             

   hbase-site.xml and hbase-default.xml:

Just as in Hadoop, where you add site-specific HDFS configuration to the hdfs-site.xml file, site-specific HBase customizations go into the file conf/hbase-site.xml.

Not all configuration options make it into hbase-default.xml. Options that are thought unlikely ever to be changed may exist only in code; the only way to discover such configuration is by reading the source code itself.
Currently, changes here will require a cluster restart for HBase to notice the change.
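
As an illustration (a minimal sketch, not part of the HBase documentation), client code built with HBaseConfiguration picks up hbase-default.xml and then any overrides in hbase-site.xml found on the classpath; values can also be overridden programmatically. The host names below are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HBaseConfigExample {
    public static void main(String[] args) {
        // Loads hbase-default.xml first, then hbase-site.xml from the classpath,
        // so site-specific values override the shipped defaults.
        Configuration conf = HBaseConfiguration.create();

        // Programmatic overrides take precedence over both files (hypothetical values).
        conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");
        conf.set("hbase.rootdir", "hdfs://namenode.example.org:9000/hbase");

        System.out.println("hbase.rootdir          = " + conf.get("hbase.rootdir"));
        System.out.println("zookeeper.znode.parent = " + conf.get("zookeeper.znode.parent", "/hbase"));
    }
}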

         HBase Default Configuration

The documentation below is generated using the default HBase configuration file, hbase-default.xml, as source.
hbase.rootdir
The directory shared by region servers and into which HBase persists. The URL should be 'fully-qualified' to include the filesystem scheme. For example, to specify the HDFS directory '/hbase' where the HDFS instance's namenode is running at namenode.example.org on port 9000, set this value to: hdfs://namenode.example.org:9000/hbase. By default HBase writes into /tmp. Change this configuration, or all data will be lost on machine restart.
Default: file:///tmp/hbase-${user.name}/hbase
hbase.master.port
The port the HBase Master should bind to.
Default: 60000
hbase.cluster.distributed
The mode the cluster will be in. Possible values are false for standalone mode and true for distributed mode. If false, startup will run all HBase and ZooKeeper daemons together in the one JVM.
Default: false
hbase.tmp.dir
Temporary directory on the local filesystem. Change this setting to point to a location more permanent than '/tmp' (The '/tmp' directory is often cleared on machine restart).
Default: ${java.io.tmpdir}/hbase-${user.name}
hbase.local.dir
Directory on the local filesystem to be used as a local storage.
Default: ${hbase.tmp.dir}/local/
hbase.master.info.port
The port for the HBase Master web UI. Set to -1 if you do not want a UI instance run.
Default: 60010
hbase.master.info.bindAddress
The bind address for the HBase Master web UI
Default: 0.0.0.0
hbase.client.write.buffer
Default size of the HTable client write buffer in bytes. A bigger buffer takes more memory -- on both the client and server side since the server instantiates the passed write buffer to process it -- but a larger buffer size reduces the number of RPCs made. For an estimate of server-side memory used, evaluate hbase.client.write.buffer * hbase.regionserver.handler.count.
Default: 2097152
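For example, with the defaults (a 2,097,152-byte buffer and 10 handlers), that rough server-side estimate is about 20 MB per region server.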
hbase.regionserver.port
The port the HBase RegionServer binds to.
Default: 60020
hbase.regionserver.info.port
The port for the HBase RegionServer web UI. Set to -1 if you do not want the RegionServer UI to run.
Default: 60030
hbase.regionserver.info.port.auto
Whether or not the Master or RegionServer UI should search for a port to bind to. Enables automatic port search if hbase.regionserver.info.port is already in use. Useful for testing, turned off by default.
Default: false
hbase.regionserver.info.bindAddress
The address for the HBase RegionServer web UI
Default: 0.0.0.0
hbase.client.pause
General client pause value. Used mostly as value to wait before running a retry of a failed get, region lookup, etc.
Default: 1000
hbase.client.retries.number
Maximum retries. Used as maximum for all retryable operations such as fetching of the root region from root region server, getting a cell's value, starting a row update, etc. Default: 10.
Default: 10
hbase.bulkload.retries.number
Maximum retries. This is the maximum number of times atomic bulk loads are attempted in the face of splitting operations; 0 means never give up.
Default: 0
hbase.client.scanner.caching
Number of rows that will be fetched when calling next on a scanner if it is not served from (local, client) memory. Higher caching values will enable faster scanners but will eat up more memory and some calls of next may take longer and longer times when the cache is empty. Do not set this value such that the time between invocations is greater than the scanner timeout; i.e. hbase.client.scanner.timeout.period
Default: 100
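As a hedged illustration of this setting (using the 0.94/0.96-era HBase client API, with a hypothetical table name), the caching value can also be overridden per scan:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScanCachingExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");  // hypothetical table name
        try {
            Scan scan = new Scan();
            // Fetch 500 rows per RPC for this scan instead of the configured default (100).
            scan.setCaching(500);
            ResultScanner scanner = table.getScanner(scan);
            try {
                for (Result row : scanner) {
                    System.out.println(row);
                }
            } finally {
                scanner.close();
            }
        } finally {
            table.close();
        }
    }
}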
hbase.client.keyvalue.maxsize
Specifies the combined maximum allowed size of a KeyValue instance. This sets an upper boundary for a single entry saved in a storage file. Since a single KeyValue cannot be split, this helps avoid a situation where a region cannot be split any further because the data is too large. It seems wise to set this to a fraction of the maximum region size. Setting it to zero or less disables the check.
Default: 10485760
hbase.client.scanner.timeout.period
Client scanner lease period in milliseconds. Default is 60 seconds.
Default: 60000
hbase.regionserver.handler.count
Count of RPC Listener instances spun up on RegionServers. Same property is used by the Master for count of master handlers. Default is 10.
Default: 10
hbase.regionserver.msginterval
Interval between messages from the RegionServer to Master in milliseconds.
Default: 3000
hbase.regionserver.optionallogflushinterval
Sync the HLog to the HDFS after this interval if it has not accumulated enough entries to trigger a sync. Default 1 second. Units: milliseconds.
Default: 1000
hbase.regionserver.regionSplitLimit
Limit for the number of regions after which no more region splitting should take place. This is not a hard limit for the number of regions but acts as a guideline for the regionserver to stop splitting after a certain limit. Default is set to MAX_INT; i.e. do not block splitting.
Default: 2147483647
hbase.regionserver.logroll.period
Period at which we will roll the commit log regardless of how many edits it has.
Default: 3600000
hbase.regionserver.logroll.errors.tolerated
The number of consecutive WAL close errors we will allow before triggering a server abort. A setting of 0 will cause the region server to abort if closing the current WAL writer fails during log rolling. Even a small value (2 or 3) will allow a region server to ride over transient HDFS errors.
Default: 2
hbase.regionserver.hlog.reader.impl
The HLog file reader implementation.
Default: org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader
hbase.regionserver.hlog.writer.impl
The HLog file writer implementation.
Default: org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter
hbase.regionserver.nbreservationblocks
The number of reservoir blocks of memory released on OOME so we can clean up properly before server shutdown.
Default: 4
hbase.zookeeper.dns.interface
The name of the Network Interface from which a ZooKeeper server should report its IP address.
Default: default
hbase.zookeeper.dns.nameserver
The host name or IP address of the name server (DNS) which a ZooKeeper server should use to determine the host name used by the master for communication and display purposes.
Default: default
hbase.regionserver.dns.interface
The name of the Network Interface from which a region server should report its IP address.
Default: default
hbase.regionserver.dns.nameserver
The host name or IP address of the name server (DNS) which a region server should use to determine the host name used by the master for communication and display purposes.
Default: default
hbase.master.dns.interface
The name of the Network Interface from which a master should report its IP address.
Default: default
hbase.master.dns.nameserver
The host name or IP address of the name server (DNS) which a master should use to determine the host name used for communication and display purposes.
Default: default
hbase.balancer.period
Period at which the region balancer runs in the Master.
Default: 300000
hbase.regions.slop
Rebalance if any regionserver has average + (average * slop) regions. Default is 20% slop.
Default: 0.2
hbase.master.logcleaner.ttl
Maximum time a HLog can stay in the .oldlogdir directory, after which it will be cleaned by a Master thread.
Default: 600000
hbase.master.logcleaner.plugins
A comma-separated list of LogCleanerDelegate invoked by the LogsCleaner service. These WAL/HLog cleaners are called in order, so put the HLog cleaner that prunes the most HLog files in front. To implement your own LogCleanerDelegate, just put it in HBase's classpath and add the fully qualified class name here. Always add the above default log cleaners in the list.
Default: org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
hbase.regionserver.global.memstore.upperLimit
Maximum size of all memstores in a region server before new updates are blocked and flushes are forced. Defaults to 40% of heap. Updates are blocked and flushes are forced until size of all memstores in a region server hits hbase.regionserver.global.memstore.lowerLimit.
Default: 0.4
hbase.regionserver.global.memstore.lowerLimit
Maximum size of all memstores in a region server before flushes are forced. Defaults to 35% of heap. This value equal to hbase.regionserver.global.memstore.upperLimit causes the minimum possible flushing to occur when updates are blocked due to memstore limiting.
Default: 0.35
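As a rough worked example (the heap size is hypothetical): on a region server with a 10 GB heap, the defaults above force flushes once all memstores together reach about 3.5 GB and block updates at about 4 GB.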
hbase.server.thread.wakefrequency
Time to sleep in between searches for work (in milliseconds). Used as sleep interval by service threads such as log roller.
Default: 10000
hbase.server.versionfile.writeattempts
How many times to retry attempting to write a version file before just aborting. Each attempt is separated by hbase.server.thread.wakefrequency milliseconds.
Default: 3
hbase.regionserver.optionalcacheflushinterval
Maximum amount of time an edit lives in memory before being automatically flushed. Default 1 hour. Set it to 0 to disable automatic flushing.
Default: 3600000
hbase.hregion.memstore.flush.size
Memstore will be flushed to disk if size of the memstore exceeds this number of bytes. Value is checked by a thread that runs every hbase.server.thread.wakefrequency.
Default: 134217728
hbase.hregion.preclose.flush.size
If the memstores in a region are this size or larger when we go to close, run a "pre-flush" to clear out memstores before we put up the region closed flag and take the region offline. On close, a flush is run under the close flag to empty memory. During this time the region is offline and we are not taking on any writes. If the memstore content is large, this flush could take a long time to complete. The preflush is meant to clean out the bulk of the memstore before putting up the close flag and taking the region offline so the flush that runs under the close flag has little to do.
Default: 5242880
hbase.hregion.memstore.block.multiplier
Block updates if the memstore reaches hbase.hregion.memstore.block.multiplier times hbase.hregion.memstore.flush.size bytes. Useful for preventing runaway memstore growth during spikes in update traffic. Without an upper bound, the memstore fills such that when it flushes, the resultant flush files take a long time to compact or split, or worse, we OOME.
Default: 2
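For example, with the defaults above (a flush size of 134217728 bytes and a multiplier of 2), updates to a region are blocked once its memstore reaches roughly 256 MB.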
hbase.hregion.memstore.mslab.enabled
Enables the MemStore-Local Allocation Buffer, a feature which works to prevent heap fragmentation under heavy write loads. This can reduce the frequency of stop-the-world GC pauses on large heaps.
Default: true
hbase.hregion.max.filesize
Maximum HStoreFile size. If any one of a column families' HStoreFiles has grown to exceed this value, the hosting HRegion is split in two. Default: 10G.
Default: 10737418240
hbase.hstore.compactionThreshold
If more than this number of HStoreFiles in any one HStore (one HStoreFile is written per flush of memstore) then a compaction is run to rewrite all HStoreFiles files as one. Larger numbers put off compaction but when it runs, it takes longer to complete.
Default: 3
hbase.hstore.blockingStoreFiles
If more than this number of StoreFiles in any one Store (one StoreFile is written per flush of MemStore) then updates are blocked for this HRegion until a compaction is completed, or until hbase.hstore.blockingWaitTime has been exceeded.
Default: 7
hbase.hstore.blockingWaitTime
The time an HRegion will block updates for after hitting the StoreFile limit defined by hbase.hstore.blockingStoreFiles. After this time has elapsed, the HRegion will stop blocking updates even if a compaction has not been completed. Default: 90 seconds.
Default: 90000
hbase.hstore.compaction.max
Max number of HStoreFiles to compact per 'minor' compaction.
Default: 10
hbase.hregion.majorcompaction
The time (in milliseconds) between 'major' compactions of all HStoreFiles in a region. Default: 1 day. Set to 0 to disable automated major compactions.
Default: 86400000
hbase.storescanner.parallel.seek.enable
Enables StoreFileScanner parallel-seeking in StoreScanner, a feature which can reduce response latency under special conditions.
Default: false
hbase.storescanner.parallel.seek.threads
The default thread pool size if parallel-seeking feature enabled.
Default: 10
hbase.mapreduce.hfileoutputformat.blocksize
The mapreduce HFileOutputFormat writes storefiles/hfiles. This is the minimum hfile block size to emit. Usually in HBase, when writing hfiles, the block size is taken from the table schema (HColumnDescriptor), but in the mapreduce output format context we don't have access to the schema, so the block size is taken from the Configuration. The smaller you make the block size, the bigger your index and the less you fetch on a random access. Set the block size down if you have small cells and want faster random access to individual cells.
Default: 65536
hfile.block.cache.size
Percentage of maximum heap (-Xmx setting) to allocate to block cache used by HFile/StoreFile. Default of 0.25 means allocate 25%. Set to 0 to disable but it's not recommended.
Default: 0.25
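For example, with a hypothetical -Xmx4g heap, the default of 0.25 allocates roughly 1 GB to the block cache.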
hbase.hash.type
The hashing algorithm for use in HashFunction. Two values are supported now: murmur (MurmurHash) and jenkins (JenkinsHash). Used by bloom filters.
Default: murmur
hfile.block.index.cacheonwrite
Whether to put non-root multi-level index blocks into the block cache at the time the index is being written.
Default: false
hfile.index.block.max.size
When the size of a leaf-level, intermediate-level, or root-level index block in a multi-level block index grows to this size, the block is written out and a new block is started.
Default: 131072
hfile.format.version
The HFile format version to use for new files. Set this to 1 to test backwards-compatibility. The default value of this option should be consistent with FixedFileTrailer.MAX_VERSION.
Default: 2
io.storefile.bloom.block.size
The size in bytes of a single block ("chunk") of a compound Bloom filter. This size is approximate, because Bloom blocks can only be inserted at data block boundaries, and the number of keys per data block varies.
Default: 131072
hfile.block.bloom.cacheonwrite
Enables cache-on-write for inline blocks of a compound Bloom filter.
Default: false
hbase.rs.cacheblocksonwrite
Whether an HFile block should be added to the block cache when the block is finished.
Default: false
hbase.rpc.server.engine
Implementation of org.apache.hadoop.hbase.ipc.RpcServerEngine to be used for server RPC call marshalling.
Default: org.apache.hadoop.hbase.ipc.ProtobufRpcServerEngine
hbase.ipc.client.tcpnodelay
Set no delay on rpc socket connections.
Default: true

hadoop.policy.file
The policy configuration file used by RPC servers to make authorization decisions on client requests. Only used when HBase security is enabled.
Default: hbase-policy.xml
 
hbase.auth.key.update.interval
The update interval for master key for authentication tokens in servers in milliseconds. Only used when HBase security is enabled.
Default: 86400000
hbase.auth.token.max.lifetime
The maximum lifetime in milliseconds after which an authentication token expires. Only used when HBase security is enabled.
Default: 604800000
zookeeper.session.timeout
ZooKeeper session timeout. HBase passes this to the zk quorum as suggested maximum time for a session (This setting becomes zookeeper's 'maxSessionTimeout'). "The client sends a requested timeout, the server responds with the timeout that it can give the client. " In milliseconds.
Default: 180000
zookeeper.znode.parent
Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper files that are configured with a relative path will go under this node. By default, all of HBase's ZooKeeper file paths are configured with a relative path, so they will all go under this directory unless changed.
Default: /hbase
zookeeper.znode.rootserver
Path to ZNode holding root region location. This is written by the master and read by clients and region servers. If a relative path is given, the parent folder will be ${zookeeper.znode.parent}. By default, this means the root location is stored at /hbase/root-region-server.
Default: root-region-server
zookeeper.znode.acl.parent
Root ZNode for access control lists.
Default: acl
hbase.coprocessor.region.classes
A comma-separated list of Coprocessors that are loaded by default on all tables. For any override coprocessor method, these classes will be called in order. After implementing your own Coprocessor, just put it in HBase's classpath and add the fully qualified class name here. A coprocessor can also be loaded on demand by setting HTableDescriptor.
Default:
hbase.coprocessor.master.classes
A comma-separated list of org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that are loaded by default on the active HMaster process. For any implemented coprocessor methods, the listed classes will be called in order. After implementing your own MasterObserver, just put it in HBase's classpath and add the fully qualified class name here.
Default:
hbase.zookeeper.quorum
Comma separated list of servers in the ZooKeeper Quorum. For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". By default this is set to localhost for local and pseudo-distributed modes of operation. For a fully-distributed setup, this should be set to a full list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh this is the list of servers which we will start/stop ZooKeeper on.
Default: localhost
hbase.zookeeper.useMulti
Instructs HBase to make use of ZooKeeper's multi-update functionality. This allows certain ZooKeeper operations to complete more quickly and prevents some issues with rare Replication failure scenarios
IMPORTANT: only set this to true if all ZooKeeper servers in the cluster are on version 3.4+ and will not be downgraded. ZooKeeper versions before 3.4 do not support multi-update and will not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495).
Default: false
hbase.zookeeper.property.initLimit
Property from ZooKeeper's config zoo.cfg. The number of ticks that the initial synchronization phase can take.
Default: 10
hbase.zookeeper.property.syncLimit
Property from ZooKeeper's config zoo.cfg. The number of ticks that can pass between sending a request and getting an acknowledgment.
Default: 5
hbase.zookeeper.property.dataDir
Property from ZooKeeper's config zoo.cfg. The directory where the snapshot is stored.
Default: ${hbase.tmp.dir}/zookeeper
hbase.zookeeper.property.clientPort
Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.
Default: 2181
hbase.zookeeper.property.maxClientCnxns
Property from ZooKeeper's config zoo.cfg. Limit on number of concurrent connections (at the socket level) that a single client, identified by IP address, may make to a single member of the ZooKeeper ensemble. Set high to avoid zk connection issues running standalone and pseudo-distributed.
Default: 300
hbase.rest.port
The port for the HBase REST server.
Default: 8080
hbase.rest.readonly
Defines the mode the REST server will be started in. Possible values are: false: All HTTP methods are permitted - GET/PUT/POST/DELETE. true: Only the GET method is permitted.
Default: false
hbase.defaults.for.version.skip
Set to true to skip the 'hbase.defaults.for.version' check. Setting this to true can be useful in contexts other than the other side of a maven generation, e.g. running in an IDE. You'll want to set this boolean to true to avoid seeing the RuntimeException complaint: "hbase-default.xml file seems to be for and old version of HBase (\${hbase.version}), this version is X.X.X-SNAPSHOT"
Default: false
hbase.coprocessor.abortonerror
Set to true to cause the hosting server (master or regionserver) to abort if a coprocessor throws a Throwable object that is not IOException or a subclass of IOException. Setting it to true might be useful in development environments where one wants to terminate the server as soon as possible to simplify coprocessor failure analysis.
Default: false
hbase.online.schema.update.enable
Set true to enable online schema changes. This is an experimental feature. There are known issues modifying table schemas at the same time a region split is happening so your table needs to be quiescent or else you have to be running with splits disabled.
Default: false
hbase.table.lock.enable
Set to true to enable locking the table in zookeeper for schema change operations. Table locking from master prevents concurrent schema modifications to corrupt table state.
Default: true
dfs.support.append
Does HDFS allow appends to files? This is an HDFS configuration; it is set here so the HDFS client will enable append support. You must ensure that this configuration is also true on the server side when running HBase (you will have to restart your cluster after setting it).
Default: true
hbase.thrift.minWorkerThreads
The "core size" of the thread pool. New threads are created on every connection until this many threads are created.
Default: 16
hbase.thrift.maxWorkerThreads
The maximum size of the thread pool. When the pending request queue overflows, new threads are created until their number reaches this number. After that, the server starts dropping connections.
Default: 1000
hbase.thrift.maxQueuedRequests
The maximum number of pending Thrift connections waiting in the queue. If there are no idle threads in the pool, the server queues requests. Only when the queue overflows, new threads are added, up to hbase.thrift.maxQueuedRequests threads.
Default: 1000
hbase.offheapcache.percentage
The amount of off heap space to be allocated towards the experimental off heap cache. If you desire the cache to be disabled, simply set this value to 0.
Default: 0
hbase.data.umask.enable
If true, file permissions should be assigned to the files written by the region server.
Default: false
hbase.data.umask
File permissions that should be used to write data files when hbase.data.umask.enable is true
Default: 000
hbase.metrics.showTableName
Whether to include the prefix "tbl.tablename" in per-column family metrics. If true, for each metric M, per-cf metrics will be reported for tbl.T.cf.CF.M, if false, per-cf metrics will be aggregated by column-family across tables, and reported for cf.CF.M. In both cases, the aggregated metric M across tables and cfs will be reported.
Default: true
hbase.metrics.exposeOperationTimes
Whether to report metrics about time taken performing an operation on the region server. Get, Put, Delete, Increment, and Append can all have their times exposed through Hadoop metrics per CF and per region.
Default: true
hbase.master.hfilecleaner.plugins
A comma-separated list of HFileCleanerDelegate invoked by the HFileCleaner service. These HFiles cleaners are called in order, so put the cleaner that prunes the most files in front. To implement your own HFileCleanerDelegate, just put it in HBase's classpath and add the fully qualified class name here. Always add the above default log cleaners in the list as they will be overwritten in hbase-site.xml.
Default: org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
hbase.regionserver.catalog.timeout
Timeout value for the Catalog Janitor from the regionserver to META.
Default: 600000
hbase.master.catalog.timeout
Timeout value for the Catalog Janitor from the master to META.
Default: 600000
hbase.config.read.zookeeper.config
Set to true to allow HBaseConfiguration to read the zoo.cfg file for ZooKeeper properties. Switching this to true is not recommended, since the functionality of reading ZK properties from a zoo.cfg file has been deprecated.
Default: false
hbase.snapshot.enabled
Set to true to allow snapshots to be taken / restored / cloned.
Default: true
hbase.rest.threads.max
The maximum number of threads of the REST server thread pool. Threads in the pool are reused to process REST requests. This controls the maximum number of requests processed concurrently. It may help to control the memory used by the REST server to avoid OOM issues. If the thread pool is full, incoming requests will be queued up and wait for some free threads. The default is 100.
Default: 100
hbase.rest.threads.min
The minimum number of threads of the REST server thread pool. The thread pool always has at least these number of threads so the REST server is ready to serve incoming requests. The default is 2.
Default: 2
hbase.rpc.timeout
This is for the RPC layer to define how long HBase client applications take for a remote call to time out. It uses pings to check connections but will eventually throw a TimeoutException. The default value is 60000ms(60s).
Default: 60000
hbase.server.compactchecker.interval.multiplier
The number that determines how often we scan to see if compaction is necessary. Normally, compactions are done after some events (such as memstore flush), but if region didn't receive a lot of writes for some time, or due to different compaction policies, it may be necessary to check it periodically. The interval between checks is hbase.server.compactchecker.interval.multiplier multiplied by hbase.server.thread.wakefrequency.
Default: 1000

     hbase-env.sh

Set HBase environment variables in this file. Examples include options to pass to the JVM when an HBase daemon starts, such as heap size and garbage collector settings. You can also set configuration for log directories, niceness, ssh options, where to locate process pid files, etc. Open the file at conf/hbase-env.sh and peruse its content. Each option is fairly well documented. Add your own environment variables here if you want them read by HBase daemons on startup.
Changes here will require a cluster restart for HBase to notice the change.

     log4j.properties

Edit this file to change the rate at which HBase log files are rolled and to change the level at which HBase logs messages.
Changes here will require a cluster restart for HBase to notice the change though log levels can be changed for particular daemons via the HBase UI.

Get Hands-on Training @ BigDataTraining.IN


Contact us:

#67,2nd Floor, 1st Main Road, Gandhi Nagar, Adyar, Chennai- 600020

Thursday 9 May 2013

Hadoop NoSQL Certification Training -BigDataTraining.IN



NoSQL databases and data-processing frameworks are primarily utilized because of their speed, scalability and flexibility. 

Features of NoSQL databases

One major difference between traditional relational databases and NoSQL is that the latter do not generally provide guarantees for atomicity, consistency, isolation and durability (commonly known as the ACID properties), although some support is beginning to emerge. Instead of ACID, NoSQL databases more or less follow something called "BASE".
The other major difference is that NoSQL databases are generally schema-less; that is, records in these databases are not required to conform to a pre-defined storage schema.
In a relational database, the schema is the structure of a database system described in a formal language supported by the DBMS; it refers to how the database will be constructed and divided into database objects such as tables, fields, relationships, views, indexes, packages, procedures, functions, queues, triggers and other elements.
In NoSQL databases, schema-free collections are utilized instead so that different types and document structures such as {“color”, “blue”} and {“price”, “23.5”} can be stored within a single collection.

Schema-less
"Tables" don't have a pre-defined schema. Records have a variable number of fields that can vary from record to record. Record contents and semantics are enforced by applications.

Shared nothing architecture
  Instead of using a common storage pool (e.g., SAN), each server uses only its own local storage. This allows storage to be accessed at local disk speeds instead of network speeds, and it allows capacity to be increased by adding more nodes. Cost is also reduced since commodity hardware can be used.

Elasticity
Both storage and server capacity can be added on the fly by merely adding more servers. No downtime is required. When a new node is added, the database begins assigning it data to manage and requests to fulfill.

Sharding
Instead of viewing the storage as a monolithic space, records are partitioned into shards. Usually, a shard is small enough to be managed by a single server, though shards are usually replicated. Sharding can be automatic (e.g., an existing shard splits when it gets too big), or applications can assist in data sharding by assigning each record a partition ID.
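
As a minimal sketch (not tied to any particular database), the following Java snippet shows the idea of assigning each record to a shard based on a partition key; real systems typically use consistent hashing or range partitioning so shards can be split and moved without remapping every key:

public class ShardRouter {
    // Maps a record's partition key to one of N shards with a simple hash.
    static int shardFor(String partitionKey, int numShards) {
        return Math.floorMod(partitionKey.hashCode(), numShards);
    }

    public static void main(String[] args) {
        int numShards = 4;  // hypothetical shard count
        for (String key : new String[] {"user:1001", "user:1002", "order:77"}) {
            System.out.println(key + " -> shard " + shardFor(key, numShards));
        }
    }
}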

Asynchronous replication
Compared to RAID storage (mirroring and/or striping) or synchronous replication, NoSQL databases employ asynchronous replication. This allows writes to complete more quickly since they don't depend on extra network traffic. One side effect of this strategy is that data is not immediately replicated and could be lost in certain windows. Also, locking is usually not available to protect all copies of a specific unit of data.

BASE instead of ACID
NoSQL databases emphasize performance and availability. This requires prioritizing among the guarantees described by the CAP theorem in a way that tends to make true ACID transactions impractical.

Types of NoSQL databases

NoSQL databases are often categorized according to the way they store data.

  • Key-value stores
  • Columnar (or column-oriented) databases
  • Graph databases
  • Document databases
Big Data to create a new boom in the job market
 
The 'Big Data' industry - the ability to access, analyze and use humongous volumes of data through specific technology - will require a whole new army of data workers globally. India itself will require a minimum of 1,00,000 data scientists in the next couple of years, in addition to scores of data managers and data analysts, to support the fast emerging Big Data space.

The exponentially decreasing costs of data storage combined with the soaring volume of data being captured presents challenges and opportunities to those who work in the new frontiers of data science. Businesses, government agencies, and scientists leveraging data-based decisions are more successful than those relying on decades of trial-and-error. But taming and harnessing big data can be a herculean undertaking. The data must be collected, processed and distilled, analyzed, and presented in a manner humans can understand. Because there are no degrees in data science, data scientists must grow into their roles. If you are looking for resources to help you better understand big data and analytics, We have the knowledge and experience needed to help make your systems contribute to the success of your business. Form a tandem with us and take advantage of our capacity to manage, process and analyze big data effectively, quickly and economically.

BigDataTraining.IN has a strong focus and established thought leadership in the area of Big Data and Analytics. We use a global delivery model to help you to evaluate and implement solutions tailored to your specific technical and business context.

http://www.bigdatatraining.in/hadoop-training-chennai/

http://www.hadooptrainingchennai.in/hadoop-training-in-chennai/


email : info@bigdatatraining.in

Phone: +91 9789968765, 044-42645495

#67,2nd Floor, 1st Main Road, Gandhi Nagar, Adyar, Chennai- 600020