How to install Python 3 and IPython on CentOS 6.x machines


1. Install the Software Collections (SCL) repository:

      yum -y install centos-release-scl


2. Install the latest rh-python3 software collection and enable it in bash:

      yum -y install $(yum info rh-python3* | egrep 'Name' | awk '{print $3}' | grep -i 'rh-python3[0-9]$' | sort -V | tail -1)

      scl enable $(yum info rh-python3* | egrep 'Name' | awk '{print $3}' | grep -i 'rh-python3[0-9]$' | sort -V | tail -1) bash
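The `sort -V | tail -1` at the end of the pipeline in step 2 is what picks the newest collection when several rh-python3 packages are available. A minimal sketch of that selection (the package names below are hypothetical examples, not yum output):

```shell
# version-sort candidate collection names and keep the highest one
printf 'rh-python34\nrh-python35\nrh-python36\n' | sort -V | tail -1
```
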


3. Install pip and upgrade it:

      yum -y install python-pip
      pip install --upgrade pip

4. Install IPython:

      pip install IPython

How to move Ambari server from one host to another




Steps:

  1. Stop the Ambari server on the current host and the Ambari agents on all hosts:

     ambari-server stop
     ambari-agent stop    (on all hosts)

  2. Back up the Ambari MySQL database:

     mysqldump ambari > /root/backup_ambari.sql

  3. Install ambari-server on the new host (the Ambari repo file must be present in /etc/yum.repos.d):

     yum -y install ambari-server

If you need MySQL installed on the same host as well:

   yum install mysql-server
   /etc/init.d/mysqld start

Note: make sure this installs a MySQL version supported by your Ambari release.

To restore the "ambari" database on the new host, log in to mysql as root and run:

mysql> create database ambari;
mysql> create user ambari identified by 'somepassword';

Then load the backup from the shell:

mysql -u root ambari < /root/backup_ambari.sql

  4. Run ambari-server setup
  5. ambari-server start
  6. On every host in the cluster where an agent is running, point the agent at the new server host and restart it:

ambari-agent reset <new-ambari-server-host>
Ex: ambari-agent reset c249-node3.squadron-labs.com

ambari-agent start
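With many agents, the reset/start step can be scripted. A sketch that only prints the per-host commands so you can review them before piping to ssh (the hostnames and the new server name below are assumptions, not from this cluster):

```shell
# generate the reset/start command for each agent host (review, then run via ssh)
NEW_SERVER=c249-node3.squadron-labs.com
for host in c249-node1.squadron-labs.com c249-node2.squadron-labs.com; do
  echo "ssh $host 'ambari-agent reset $NEW_SERVER && ambari-agent start'"
done
```
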

Jstack alternative for a JVM: the "kill -3" command

If for some reason you cannot switch to the hbase user to collect a jstack, you can collect similar information with the kill -3 command.

This won't kill the process; SIGQUIT just makes the JVM dump its thread stacks into the .out file.

echo > /var/log/hbase/hbase-hbase-regionserver-hbase4.openstacklocal.out
kill -3 `cat /var/run/hbase/hbase-hbase-regionserver.pid`
cp /var/log/hbase/hbase-hbase-regionserver-hbase4.openstacklocal.out   "Jstack_$(date +"%Y_%m_%d_%I_%M_%p")_`hostname`.log"
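The same trick generalizes to any Java process. A hedged sketch (the function name, pid argument, and dump count are illustrative, not from the original) that sends SIGQUIT several times so you capture stacks over an interval:

```shell
# send SIGQUIT to a JVM n times, interval seconds apart; the JVM writes each
# thread dump to its stdout (the .out file) and keeps running
collect_dumps() {
  pid="$1"; n="${2:-3}"; interval="${3:-10}"
  for i in $(seq 1 "$n"); do
    kill -3 "$pid" || return 1
    sleep "$interval"
  done
}
```

Usage would look like `collect_dumps $(cat /var/run/hbase/hbase-hbase-regionserver.pid) 3 10`; multiple dumps a few seconds apart make it easier to spot stuck threads.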

MySQL 5.1 to MySQL 5.6 Upgrade

Note: first back up the existing MySQL 5.1 databases:
mysqldump $dbname > $outputfilename.sql

Steps to upgrade to 5.6:

yum remove libaio -y

wget http://repo.mysql.com/mysql-community-release-el6-5.noarch.rpm

rpm -ivh mysql-community-release-el6-5.noarch.rpm

yum install mysql-server -y

/etc/init.d/mysqld start

Once the new server is running, run mysql_upgrade so the system tables are updated for 5.6 (the backup taken earlier is your fallback).

Java Commands useful for troubleshooting

1. jinfo -flags <process id> prints all the JVM options explicitly specified when the process was started.

It is useful for examining the JVM GC settings.

Example: the Ranger admin process:

jinfo -flags 57374

Attaching to process ID 57374, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.112-b15
Non-default VM flags: -XX:CICompilerCount=15 -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=357564416 -XX:MinHeapDeltaBytes=524288 -XX:NewSize=357564416 -XX:OldSize=716177408 -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseFastUnorderedTimeStamps -XX:+UseParallelGC
Command line:  -Dproc_rangeradmin -XX:MaxPermSize=256m -Xmx1024m -Xms1024m -Duser.timezone=UTC -Dservername=rangeradmin -Dlogdir=/var/log/ranger/admin -Dcatalina.base=/usr/hdp/2.6.5.0-292/ranger-admin/ews
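If you only need one setting out of that output, it can be grepped. A sketch pulling MaxHeapSize from a saved `jinfo -flags` line (the sample string is abbreviated from the output above):

```shell
# extract MaxHeapSize (bytes) from saved jinfo -flags output
flags='-XX:CICompilerCount=15 -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824'
echo "$flags" | grep -o 'MaxHeapSize=[0-9]*' | cut -d= -f2
```
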

2. To print the Java system properties of the running JVM:

jinfo -sysprops 57374

Attaching to process ID 57374, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.112-b15
java.runtime.name = Java(TM) SE Runtime Environment
java.vm.version = 25.112-b15
sun.boot.library.path = /usr/jdk64/jdk1.8.0_112/jre/lib/amd64
java.vendor.url = http://java.oracle.com/
java.vm.vendor = Oracle Corporation
path.separator = :
file.encoding.pkg = sun.io
java.vm.name = Java HotSpot(TM) 64-Bit Server VM
sun.os.patch.level = unknown
sun.java.launcher = SUN_STANDARD
user.country = US
user.dir = /usr/hdp/2.6.5.0-292/ranger-admin/ews
java.vm.specification.name = Java Virtual Machine Specification
java.runtime.version = 1.8.0_112-b15
java.awt.graphicsenv = sun.awt.X11GraphicsEnvironment
os.arch = amd64
java.endorsed.dirs = /usr/jdk64/jdk1.8.0_112/jre/lib/endorsed
line.separator =

java.io.tmpdir = /tmp
proc_rangeradmin =
webapp.root = /usr/hdp/2.6.5.0-292/ranger-admin/ews/webapp/
java.vm.specification.vendor = Oracle Corporation
os.name = Linux
servername = rangeradmin
sun.jnu.encoding = UTF-8
javax.net.ssl.keyStorePassword =
java.library.path = /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
javax.net.ssl.trustStore = /etc/ranger/admin/conf/ranger-admin-keystore.jks
java.class.version = 52.0
java.specification.name = Java Platform API Specification
sun.management.compiler = HotSpot 64-Bit Tiered Compilers
os.version = 3.10.0-862.9.1.el7.x86_64
user.home = /home/ranger
user.timezone = UTC
catalina.useNaming = false
java.awt.printerjob = sun.print.PSPrinterJob
file.encoding = UTF-8
logdir = /var/log/ranger/admin
java.specification.version = 1.8
javax.net.ssl.trustStoreType = jks
catalina.home = /usr/hdp/2.6.5.0-292/ranger-admin/ews
user.name = ranger
java.class.path = /usr/hdp/2.6.5.0-292/ranger-admin/ews/webapp/WEB-INF/classes/conf:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/commons-collections-3.2.2.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/commons-configuration-1.10.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/commons-lang-2.6.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/commons-logging-1.2.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/ecj-P20140317-1600.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/embeddedwebserver-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/guava-17.0.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/hadoop-auth-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/hadoop-common-2.7.3.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/log4j-1.2.17.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/ranger-plugins-common-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/slf4j-api-1.7.5.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/tomcat-annotations-api-7.0.82.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/tomcat-embed-core-7.0.82.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/tomcat-embed-el-7.0.82.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/tomcat-embed-jasper-7.0.82.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/tomcat-embed-logging-juli-7.0.82.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/tomcat-embed-logging-log4j-7.0.82.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/tomcat-embed-websocket-7.0.82.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/ranger_jaas/unixauthclient-0.7.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/ranger-admin/ews/webapp/WEB-INF/classes/conf/ranger_jaas:/usr/jdk64/jdk1.8.0_112/lib/javafx-mx.jar:/usr/jdk64/jdk1.8.0_112/lib/packager.jar:/usr/jdk64/jdk1.8.0_112/lib/tools.jar:/usr/jdk64/jdk1.8.0_112/lib/ant-javafx.jar:/usr/jdk64/jdk1.8.0_112/lib/sa-jdi.jar:/usr/jdk64/jdk1.8.0_112/lib/dt.jar:/usr/jdk64/jdk1.8.0_112/lib/jconsole.jar:/*:
java.vm.specification.version = 1.8
sun.arch.data.model = 64
sun.java.command = org.apache.ranger.server.tomcat.EmbeddedServer
java.home = /usr/jdk64/jdk1.8.0_112/jre
user.language = en
java.specification.vendor = Oracle Corporation
awt.toolkit = sun.awt.X11.XToolkit
java.vm.info = mixed mode
java.version = 1.8.0_112
java.ext.dirs = /usr/jdk64/jdk1.8.0_112/jre/lib/ext:/usr/java/packages/lib/ext
sun.boot.class.path = /usr/jdk64/jdk1.8.0_112/jre/lib/resources.jar:/usr/jdk64/jdk1.8.0_112/jre/lib/rt.jar:/usr/jdk64/jdk1.8.0_112/jre/lib/sunrsasign.jar:/usr/jdk64/jdk1.8.0_112/jre/lib/jsse.jar:/usr/jdk64/jdk1.8.0_112/jre/lib/jce.jar:/usr/jdk64/jdk1.8.0_112/jre/lib/charsets.jar:/usr/jdk64/jdk1.8.0_112/jre/lib/jfr.jar:/usr/jdk64/jdk1.8.0_112/jre/classes
java.vendor = Oracle Corporation
catalina.base = /usr/hdp/2.6.5.0-292/ranger-admin/ews
java.security.auth.login.config = /dev/null
file.separator = /
java.vendor.url.bug = http://bugreport.sun.com/bugreport/
sun.io.unicode.encoding = UnicodeLittle
sun.cpu.endian = little
javax.net.ssl.trustStorePassword = _
javax.security.auth.useSubjectCredsOnly = false
sun.cpu.isalist =


To analyze the GC performance stats of a JVM:

jcmd <process id> PerfCounter.print | grep -i 'sun.gc'

example:
jcmd 230530 PerfCounter.print |grep -i 'sun.gc'

sun.gc.cause="No GC"
sun.gc.collector.0.invocations=730
sun.gc.collector.0.lastEntryTime=207628337064585
sun.gc.collector.0.lastExitTime=207628347690432
sun.gc.collector.0.name="PSScavenge"
sun.gc.collector.0.time=12846480737
sun.gc.collector.1.invocations=6
sun.gc.collector.1.lastEntryTime=207473351317925
sun.gc.collector.1.lastExitTime=207474340258374
sun.gc.collector.1.name="PSParallelCompact"
sun.gc.collector.1.time=3276755423
sun.gc.compressedclassspace.capacity=9306112
sun.gc.compressedclassspace.maxCapacity=1073741824
sun.gc.compressedclassspace.minCapacity=0
sun.gc.compressedclassspace.used=8960720
sun.gc.generation.0.capacity=198180864
sun.gc.generation.0.maxCapacity=2684354560
sun.gc.generation.0.minCapacity=111673344
sun.gc.generation.0.name="new"
sun.gc.generation.0.space.0.capacity=166723584
sun.gc.generation.0.space.0.initCapacity=0
sun.gc.generation.0.space.0.maxCapacity=2683305984
sun.gc.generation.0.space.0.name="eden"
sun.gc.generation.0.space.0.used=13092376
sun.gc.generation.0.space.1.capacity=14155776
sun.gc.generation.0.space.1.initCapacity=0
sun.gc.generation.0.space.1.maxCapacity=894435328
sun.gc.generation.0.space.1.name="s0"
sun.gc.generation.0.space.1.used=4941728
sun.gc.generation.0.space.2.capacity=15728640
sun.gc.generation.0.space.2.initCapacity=0
sun.gc.generation.0.space.2.maxCapacity=894435328
sun.gc.generation.0.space.2.name="s1"
sun.gc.generation.0.space.2.used=0
sun.gc.generation.0.spaces=3
sun.gc.generation.1.capacity=970981376
sun.gc.generation.1.maxCapacity=5368709120
sun.gc.generation.1.minCapacity=223870976
sun.gc.generation.1.name="old"
sun.gc.generation.1.space.0.capacity=970981376
sun.gc.generation.1.space.0.initCapacity=223870976
sun.gc.generation.1.space.0.maxCapacity=5368709120
sun.gc.generation.1.space.0.name="old"
sun.gc.generation.1.space.0.used=352206136
sun.gc.generation.1.spaces=1
sun.gc.lastCause="Allocation Failure"
sun.gc.metaspace.capacity=89292800
sun.gc.metaspace.maxCapacity=1155530752
sun.gc.metaspace.minCapacity=0
sun.gc.metaspace.used=87968584
sun.gc.policy.avgBaseFootprint=268435456
sun.gc.policy.avgMajorIntervalTime=32211109
sun.gc.policy.avgMajorPauseTime=491
sun.gc.policy.avgMinorIntervalTime=172683
sun.gc.policy.avgMinorPauseTime=11
sun.gc.policy.avgOldLive=218736016
sun.gc.policy.avgPretenuredPaddedAvg=0
sun.gc.policy.avgPromotedAvg=633612
sun.gc.policy.avgPromotedDev=171678
sun.gc.policy.avgPromotedPaddedAvg=1148648
sun.gc.policy.avgSurvivedAvg=11853649
sun.gc.policy.avgSurvivedDev=1154625
sun.gc.policy.avgSurvivedPaddedAvg=15317526
sun.gc.policy.avgYoungLive=11723719
sun.gc.policy.boundaryMoved=0
sun.gc.policy.changeOldGenForMajPauses=0
sun.gc.policy.changeOldGenForMinPauses=0
sun.gc.policy.changeYoungGenForMajPauses=0
sun.gc.policy.changeYoungGenForMinPauses=0
sun.gc.policy.collectors=2
sun.gc.policy.decideAtFullGc=0
sun.gc.policy.decreaseForFootprint=6
sun.gc.policy.decrementTenuringThresholdForGcCost=0
sun.gc.policy.decrementTenuringThresholdForSurvivorLimit=0
sun.gc.policy.desiredSurvivorSize=15728640
sun.gc.policy.edenSize=166723584
sun.gc.policy.freeSpace=713555968
sun.gc.policy.fullFollowsScavenge=0
sun.gc.policy.gcTimeLimitExceeded=0
sun.gc.policy.generations=3
sun.gc.policy.increaseOldGenForThroughput=0
sun.gc.policy.increaseYoungGenForThroughput=0
sun.gc.policy.incrementTenuringThresholdForGcCost=1
sun.gc.policy.liveAtLastFullGc=352206136
sun.gc.policy.liveSpace=498895168
sun.gc.policy.majorCollectionSlope=0
sun.gc.policy.majorGcCost=0
sun.gc.policy.majorPauseOldSlope=3578
sun.gc.policy.majorPauseYoungSlope=88
sun.gc.policy.maxTenuringThreshold=15
sun.gc.policy.minorCollectionSlope=0
sun.gc.policy.minorGcCost=0
sun.gc.policy.minorPauseOldSlope=-117
sun.gc.policy.minorPauseTime=10
sun.gc.policy.minorPauseYoungSlope=45
sun.gc.policy.mutatorCost=99
sun.gc.policy.name="ParScav:MSC"
sun.gc.policy.oldCapacity=970981376
sun.gc.policy.oldEdenSize=168296448
sun.gc.policy.oldPromoSize=546832384
sun.gc.policy.promoSize=546832384
sun.gc.policy.promoted=0
sun.gc.policy.scavengeSkipped=0
sun.gc.policy.survived=4941728
sun.gc.policy.survivorOverflowed=0
sun.gc.policy.tenuringThreshold=15
sun.gc.policy.youngCapacity=182452224
sun.gc.tlab.alloc=20637522
sun.gc.tlab.allocThreads=23
sun.gc.tlab.fastWaste=0
sun.gc.tlab.fills=452
sun.gc.tlab.gcWaste=58401
sun.gc.tlab.maxFastWaste=0
sun.gc.tlab.maxFills=59
sun.gc.tlab.maxGcWaste=20031
sun.gc.tlab.maxSlowAlloc=46
sun.gc.tlab.maxSlowWaste=5741
sun.gc.tlab.slowAlloc=144
sun.gc.tlab.slowWaste=8771
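Those collector counters can also be aggregated offline. A sketch summing the two collectors' time counters from a saved dump (perf.txt is a hypothetical file holding the output above; the times are in JVM high-resolution ticks, commonly nanoseconds, with the tick rate in sun.os.hrt.frequency):

```shell
# sum sun.gc.collector.*.time from a saved PerfCounter.print dump
awk -F= '/^sun\.gc\.collector\.[0-9]+\.time=/ { sum += $2 } END { print sum }' perf.txt
```
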

To get the classes consuming the most memory in your JVM:

For example, run it against the HiveServer2 pid; the histogram shows the number of instances and the memory each class occupies in bytes.

jcmd 230530 GC.class_histogram

 num     #instances         #bytes  class name
----------------------------------------------
   1:        615170       43494440  [Ljava.lang.Object;
   2:        660200       37493616  [C
   3:        889649       21351576  java.util.ArrayList
   4:        556042       17793344  java.util.HashMap$Node
   5:        659400       15825600  java.lang.String
   6:        156852       13022176  [Ljava.util.HashMap$Node;
   7:        193420        9284160  java.util.HashMap
   8:        110787        5317776  org.apache.ranger.plugin.resourcematcher.RangerDefaultResourceMatcher
   9:        143140        4580480  java.util.Hashtable$Entry
  10:        184702        4432848  org.apache.ranger.plugin.model.RangerPolicy$RangerPolicyItemAccess
  11:         36931        3840824  org.apache.ranger.plugin.model.RangerPolicy
  12:         36931        3545376  org.apache.ranger.plugin.policyevaluator.RangerOptimizedPolicyEvaluator

Ranger test policy creation - 3000 policies to stress test using ranger API calls

Use the shell script below to generate 3000 Ranger policy files.

You need to update the "create_policy.json" file with the correct service name for your cluster ("service": "c249_hive").
 ++++++++++++++++++++++++++++++++++
"create_policy.json"
++++++++++++++++++++++++++++++++++
{
    "allowExceptions": [],
    "denyExceptions": [],
    "denyPolicyItems": [
        {
            "accesses": [
                {
                    "isAllowed": true,
                    "type": "drop"
                }
            ],
            "conditions": [],
            "delegateAdmin": true,
            "groups": ["hadoop"],
            "users": []
        }
    ],
    "description": "Policy for Service: c249_hive",
    "isAuditEnabled": true,
    "isEnabled": true,
    "name": "c249_hive_test-1",
    "policyItems": [
        {
            "accesses": [
                {
                    "isAllowed": true,
                    "type": "select"
                },
                {
                    "isAllowed": true,
                    "type": "update"
                },
                {
                    "isAllowed": true,
                    "type": "create"
                },
                {
                    "isAllowed": true,
                    "type": "drop"
                }
            ],
            "conditions": [],
            "delegateAdmin": true,
            "groups": ["public"],
            "users": []
        }
    ],
    "resources": {
        "database": {
            "isExcludes": false,
            "isRecursive": false,
            "values": [
                "rajesh"
            ]
        },
        "table": {
            "isExcludes": false,
            "isRecursive": false,
            "values": [
                "*"
            ]
        },
"column": {
            "isExcludes": false,
            "isRecursive": false,
            "values": [
                "*"
            ]
        }
    },
    "service": "c249_hive",
    "version": 1
}

#++++++++++Shell Script++++++++++++++
#To prepare 3000 Hive policy json files
for i in {1..3000}
do
#clone the base policy file in each iteration
cp create_policy.json create_policy_$i.json
#update the database name to a unique, non-existent name: rajesh-1, rajesh-2, ... rajesh-3000
sed -i -e "s/rajesh/rajesh-$i/g"  create_policy_$i.json
#update the policy name to a unique name: c249_hive_test-1, c249_hive_test-2, ... c249_hive_test-3000
sed -i -e "s/c249_hive_test-1/c249_hive_test-$i/g" create_policy_$i.json
done

#To create the ranger policies in Ranger admin through curl
for i in {1..3000}
do
curl -u admin:admin -H "Content-Type: application/json" -X POST http://c249-node5.example.com:6080/service/public/v2/api/policy -d @create_policy_$i.json
done
#++++++++++End of Shell Script++++++++++++++
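Before firing the curl loop, it is worth sanity-checking the generated files. A small sketch (run in the same directory as the generated JSON files) that counts the copies and confirms the policy names are unique:

```shell
# count generated policy files and check for duplicate policy names
ls create_policy_*.json | wc -l
grep -h '"name"' create_policy_*.json | sort | uniq -d   # empty output means all names are unique
```
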

Ranger - MySQL schema reorg

Steps to prepare the reorg script in MySQL:

Run the following queries in mysql to generate the reorg script, plus ANALYZE TABLE commands to collect statistics.

select table_schema,table_name,engine,data_length, data_free from information_schema.tables where table_schema='ranger' and table_type='BASE TABLE';

select CONCAT('OPTIMIZE table ',table_schema,'.',table_name,';')"Reorg_Script"  from information_schema.tables where table_schema='ranger' and table_type='BASE TABLE';

select CONCAT('ALTER TABLE ',table_schema,'.',table_name,' FORCE;')"Reorg_Script"  from information_schema.tables where table_schema='ranger'  and table_type='BASE TABLE';

select CONCAT('ANALYZE TABLE  ',table_schema,'.',table_name,';')"Analyze_Script"  from information_schema.tables where table_schema='ranger' and table_type='BASE TABLE';

Boost Your Download Speed with lftp Segmentation

Looking for a faster way to download files via sftp to a Linux machine? Try using "lftp" instead. This tool offers segmented downl...
