[一]、 Overview
By default, Hadoop writes its PID files to /tmp.
On Linux, the /tmp directory is cleaned out periodically, so after the cluster has been running for a while, attempting to stop the Hadoop services can fail with errors like: [......]
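A common remedy is to point HADOOP_PID_DIR at a directory that is not cleaned automatically. A minimal sketch, set in hadoop-env.sh (the directory path below is only an example, not from the original post):

```shell
# hadoop-env.sh: keep PID files out of /tmp so the periodic tmp cleaner
# cannot delete them. The path below is an example; pick any stable,
# writable directory on each node.
export HADOOP_PID_DIR="${HOME}/hadoop/pids"
mkdir -p "${HADOOP_PID_DIR}"
echo "${HADOOP_PID_DIR}"
```

After changing this, restart the daemons so new PID files are written to the stable location.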
[一]、 Prerequisites
First, compile and install snappy and hadoop-snappy; for that, see: Hadoop安装配置snappy压缩. With those prerequisites in place, installing and configuring the snappy compression codec on HBase is [……]
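For reference, once the codec itself is installed, compression in HBase is enabled per column family. A minimal sketch from the HBase shell (the table and family names below are made-up examples):

```ruby
# HBase shell (a Ruby DSL). 'test_snappy' and 'cf' are example names;
# COMPRESSION => 'SNAPPY' only works once the native snappy libraries
# are on the region servers' classpath/library path.
create 'test_snappy', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
# Verify the column family's settings:
describe 'test_snappy'
```

This is a fragment for a running HBase shell session, not a standalone script.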
For the steps to install Hadoop2 NN HA with a standalone ZooKeeper, see: http://www.micmiu.com/bigdata/hadoop/hadoop2-cluster-ha-setup/ . This article's H[……]
After compiling and installing Nutch 2.2.1, running the nutch inject command fails with the following error:
micmiu@micmiu-mbp: ~/tmp/nutch $ nutch inject urls -crawlId micmiublog
InjectorJob: starting at 2014-12-31 14:13:19
InjectorJob: Injecting urlDir: urls
InjectorJob: org.apache.gora.util.GoraException: java.lang.RuntimeException: java.lang.IllegalArgumentException: Not a host:port pair: ?6460@micmiu-mbp.local192.168.1.100,62142,1420001742333
	at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:167)
	at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:135)
	at org.apache.nutch.storage.StorageUtils.createWebStore(StorageUtils.java:75)
	at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:221)
	at org.apache.nutch.crawl.InjectorJob.inject(InjectorJob.java:251)
	at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:273)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.nutch.crawl.InjectorJob.main(InjectorJob.java:282)
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Not a host:port pair: ?6460@micmiu-mbp.local192.168.1.100,62142,1420001742333
	at org.apache.gora.hbase.store.HBaseStore.initialize(HBaseStore.java:127)
	at org.apache.gora.store.DataStoreFactory.initializeDataStore(DataStoreFactory.java:102)
	at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:161)
	... 7 more
Caused by: java.lang.IllegalArgumentException: Not a host:port pair: ?6460@micmiu-mbp.local192.168.1.100,62142,1420001742333
	at org.apache.hadoop.hbase.HServerAddress.<init>(HServerAddress.java:60)
	at org.apache.hadoop.hbase.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:63)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:354)
	at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:94)
	at org.apache.gora.hbase.store.HBaseStore.initialize(HBaseStore.java:109)
	... 9 more
Errors like this are generally caused by the $NUTCH_[......]
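The string after "Not a host:port pair" is the clue: the old HBase client bundled with Nutch/Gora expects a plain host:port value in the master znode, while newer HBase servers write a serialized ServerName there, so this error usually indicates a client/server version mismatch. A tiny sketch of the failing format check (the address string is copied from the log above; the colon test stands in for the real parser):

```shell
# The 0.90-era HBase client splits the master address on ':'.
# The value written by a newer server contains no such separator,
# so parsing fails with "Not a host:port pair".
addr='?6460@micmiu-mbp.local192.168.1.100,62142,1420001742333'
case "$addr" in
  *:*) echo "looks like host:port" ;;
  *)   echo "Not a host:port pair: $addr" ;;
esac
```

Aligning the HBase jar under Nutch's lib directory with the version of the running HBase is the usual direction to investigate.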
After starting HBase in standalone mode, the process exits almost immediately; the log shows the following error:
ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.lang.RuntimeException: Failed suppression of fs shutdown hook: Thread[Thread-30,5,main]
	at org.apache.hadoop.hbase.regionserver.ShutdownHook.suppressHdfsShutdownHook(ShutdownHook.java:196)
	at org.apache.hadoop.hbase.regionserver.ShutdownHook.install(ShutdownHook.java:83)
	at org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:191)
	at org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:420)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:149)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2120)
Environment parameters:
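One hedged note, not from the original post: a "Failed suppression of fs shutdown hook" failure frequently points to a mismatch between the Hadoop client jar bundled in HBase's lib directory and the Hadoop version actually installed. A quick sketch for comparing the two (paths assume $HBASE_HOME and $HADOOP_HOME are set; adjust to your layout):

```shell
# List the Hadoop jar(s) HBase ships with, then the installed Hadoop
# version; the shutdown-hook suppression relies on these matching.
ls "$HBASE_HOME"/lib/hadoop-*.jar
"$HADOOP_HOME"/bin/hadoop version | head -1
```

This is a command fragment for a machine with both products installed, not a standalone script.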
Sqoop is a tool for transferring data between Hadoop (Hive, HBase) and relational databases: it can import data from a relational database (e.g. MySQL, Oracle, Postgres) into Hadoo[……]
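As an illustration only (the connection string, credentials, table, and target path below are placeholders, not from the original post), a typical Sqoop import from MySQL into HDFS looks like this:

```shell
# Import the MySQL table 'users' into HDFS under /user/hadoop/users.
# Every value here is a placeholder; -P prompts for the password
# interactively instead of putting it on the command line.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/testdb \
  --username dbuser -P \
  --table users \
  --target-dir /user/hadoop/users
```

The reverse direction uses sqoop export with the same connection options.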