hbase-default.xml file seems to be for an older version

spark-sql> SELECT * FROM test.test_hbase_table LIMIT 100;
java.lang.RuntimeException: java.lang.RuntimeException: hbase-default.xml file seems to be for an older version of HBase (1.2.3), this version is 2.0.0.3.0.1.0.187
  at org.apache.hadoop.hive.ql.metadata.Table.getStorageHandler(Table.java:292)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$7.apply(HiveClientImpl.scala:407)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$7.apply(HiveClientImpl.scala:374)
  at scala.Option.map(Option.scala:146)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:374)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:372)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:281)
  at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:219)
  at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:218)
  at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:264)
  at org.apache.spark.sql.hive.client.HiveClientImpl.getTableOption(HiveClientImpl.scala:372)
  at org.apache.spark.sql.hive.client.HiveClient$class.getTable(HiveClient.scala:81)
  at org.apache.spark.sql.hive.client.HiveClientImpl.getTable(HiveClientImpl.scala:84)
  at org.apache.spark.sql.hive.HiveExternalCatalog.getRawTable(HiveExternalCatalog.scala:118)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:700)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:700)
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
  at org.apache.spark.sql.hive.HiveExternalCatalog.getTable(HiveExternalCatalog.scala:699)

The fix is to add the following configuration to the hbase-site.xml file:
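The excerpt cuts off before showing the configuration, but the standard workaround for this version-mismatch check is the `hbase.defaults.for.version.skip` property, which tells HBase to skip comparing hbase-default.xml's version against the runtime version. A likely candidate, assuming that is what the full post adds:

```xml
<!-- Skip the hbase-default.xml version check that raises
     "hbase-default.xml file seems to be for an older version of HBase" -->
<property>
  <name>hbase.defaults.for.version.skip</name>
  <value>true</value>
</property>
```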


This post covers changing the port mapping of an existing container instance without creating a new container, so that the data in the original container is preserved.

The method is to edit the PortBindings entry in the hostconfig.json file under the container's directory, like this:

"PortBindings":{"8080/tcp":[{"HostIp":"","HostPort":"8080"}]}
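The edit itself can be scripted. A minimal sketch, assuming the real file lives under /var/lib/docker/containers/&lt;container-id&gt;/hostconfig.json (a sample copy in /tmp is used here so the commands run standalone); stop the Docker daemon before editing the real file, or the change will be overwritten when the daemon exits:

```shell
# Sample copy of a container's hostconfig.json; the real file is at
# /var/lib/docker/containers/<container-id>/hostconfig.json.
cfg=/tmp/hostconfig.json
printf '{"PortBindings":{"8080/tcp":[{"HostIp":"","HostPort":"8080"}]}}\n' > "$cfg"

# Remap container port 8080/tcp from host port 8080 to host port 9090.
sed -i 's|"HostPort":"8080"|"HostPort":"9090"|' "$cfg"
cat "$cfg"
```

After editing the real file, start the Docker daemon again and the container will come up with the new mapping.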

Example commands:

$ ls
_config.next.yml _config.yml db.json node_modules package.json package-lock.json public scaffolds source themes
$ ls | grep json
db.json
package.json
package-lock.json
$ ls | grep json | sed "s:$: abc:"
db.json abc
package.json abc
package-lock.json abc

The commands above append " abc" to the end of each line. The key is the use of the sed tool; see the sed manual for details.
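Two details of the sed invocation are worth spelling out: the `s` command accepts any character as its delimiter (here `:` instead of the usual `/`), and `$` in the pattern anchors the match at the end of each line, so the replacement is effectively an append:

```shell
# "s:<pattern>:<replacement>:" uses ":" as the delimiter; "$" matches the
# end of each line, so " abc" is appended to every input line.
printf 'db.json\npackage.json\n' | sed 's:$: abc:'
```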


Over the past two days I noticed that Datanode storage in the Hadoop cluster was severely unbalanced: one DN's usage was growing very quickly, far outpacing the other nodes. Even starting the Balancer process did not solve the problem.

Investigation revealed that an abnormal job was stuck in its reduce phase, continuously writing data to HDFS, and that this Reduce Task was running on exactly the node whose storage was growing so fast. The cause is HDFS's placement policy: a Reduce Task writes its first replica to the local node it runs on, while the other replicas are distributed across the rest of the cluster. Hence the problem node grew rapidly while the other nodes showed no obvious anomaly.

Screenshots from the troubleshooting process:





The uptime command is a convenient way to check whether a server has been rebooted. The uptime manual says that uptime displays, on a single line: the current time, how long the system has been running, how many users are currently logged in, and the system load averages over the last 1, 5, and 15 minutes. Options can of course be added to present the information differently, for example:

$ uptime
17:30:22 up 23 min, 1 user, load average: 0.55, 0.63, 0.46
$ uptime -p
up 25 minutes
$ uptime -s
2022-03-09 17:06:48
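With procps uptime, `-s` prints the boot timestamp in a fixed `YYYY-MM-DD HH:MM:SS` format, so detecting a reboot is just a matter of recording that value and comparing it later. A minimal sketch:

```shell
# "uptime -s" (procps) prints the boot time as "YYYY-MM-DD HH:MM:SS".
# Save this value somewhere; if a later run prints a different value,
# the machine has rebooted in between.
boot=$(uptime -s)
echo "booted at: $boot"
```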

A note on a faster mirror site, "liquidtelecom". Taking the Tcl download as an example, the original download URL is:

https://jaist.dl.sourceforge.net/project/tcl/Tcl/8.6.11/tcl8.6.11-src.tar.gz
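SourceForge mirrors share the same path layout and differ only in hostname (`<mirror>.dl.sourceforge.net`), so, assuming "liquidtelecom" is such a mirror name, switching mirrors is a simple hostname substitution on the URL:

```shell
# Swap the jaist mirror for the liquidtelecom mirror; only the hostname
# changes, the /project/... path stays identical.
url='https://jaist.dl.sourceforge.net/project/tcl/Tcl/8.6.11/tcl8.6.11-src.tar.gz'
echo "$url" | sed 's/jaist\.dl/liquidtelecom.dl/'
```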

Today, while running hexo deploy, the following error appeared:

remote: error: GH013: Repository rule violations found for refs/heads/master.
remote:
remote: - GITHUB PUSH PROTECTION
remote: —————————————————————————————————————————
remote: Resolve the following violations before pushing again
remote:
remote: - Push cannot contain secrets
remote:
remote:
remote: (?) Learn how to resolve a blocked push
remote: https://docs.github.com/code-security/secret-scanning/working-with-secret-scanning-and-push-protection/working-with-push-protection-from-the-command-line#resolving-a-blocked-push
remote:
remote: (?) This repository does not have Secret Scanning enabled, but is eligible. Enable Secret Scanning to view and manage detected secrets.
remote: Visit the repository settings page, https://github.com/zhang-jc/zhang-jc.github.io/settings/security_analysis
remote:
remote:
remote: —— Amazon AWS Access Key ID ——————————————————————————
remote: locations:
remote: - commit: 2573f22a136850f3aa0bf54402cdf02f882b9e0f
remote: path: search.xml:6170
remote: - commit: 2573f22a136850f3aa0bf54402cdf02f882b9e0f
remote: path: search.xml:6170
remote: - commit: 2573f22a136850f3aa0bf54402cdf02f882b9e0f
remote: path: search.xml:6170
remote: - commit: 2573f22a136850f3aa0bf54402cdf02f882b9e0f
remote: path: search.xml:6170
remote: - commit: 2573f22a136850f3aa0bf54402cdf02f882b9e0f
remote: path: search.xml:6173
remote:
remote: (?) To push, remove secret from commit(s) or follow this URL to allow the secret.
remote: https://github.com/zhang-jc/zhang-jc.github.io/security/secret-scanning/unblock-secret/34dtqbomcHD2FyadsoLZcpSJp6k
remote:
remote:
remote:
To github.com:zhang-jc/zhang-jc.github.io.git
! [remote rejected] HEAD -> master (push declined due to repository rule violations)
error: failed to push some refs to 'github.com:zhang-jc/zhang-jc.github.io.git'
FATAL Something's wrong. Maybe you can find the solution here: https://hexo.io/docs/troubleshooting.html
Error: Spawn failed
at ChildProcess.<anonymous> (/home/zhangjc/github/zhangjc/node_modules/hexo-deployer-git/node_modules/hexo-util/lib/spawn.js:51:21)
at ChildProcess.emit (node:events:519:28)
at ChildProcess._handle.onexit (node:internal/child_process:293:12)

The key message in the output is:

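The push protection output points at search.xml as containing an AWS Access Key ID. One way to locate (and then scrub) the offending string before regenerating and pushing is to grep for the AKIA-prefixed key pattern. A sketch, using AWS's documented example key `AKIAIOSFODNN7EXAMPLE` in a throwaway sample file rather than a real key:

```shell
# Find AWS-style access key IDs ("AKIA" followed by 16 uppercase
# letters/digits) in a file flagged by GitHub push protection.
# AKIAIOSFODNN7EXAMPLE is the example key from AWS documentation.
printf 'some text AKIAIOSFODNN7EXAMPLE more text\n' > /tmp/search_sample.xml
grep -Eo 'AKIA[0-9A-Z]{16}' /tmp/search_sample.xml
```

Once the secret is removed from the source post, regenerating the site and redeploying produces a search.xml without the flagged string.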

$ wget -c 'https://camel-builds.s3.amazonaws.com/ActiveTcl/x86_64-linux-glibc-2.17/20210816T193804Z/ActiveTcl-8.6.11.1.0000-x86_64-linux-glibc-2.17-e4e2f327.tar.gz?****************=***************************=******************************************-east-1%2Fs3%2F***********&X-Amz-Date=20220224T060023Z&*******************************=host&***********=****************************************************************'
--2022-02-24 06:01:04-- https://camel-builds.s3.amazonaws.com/ActiveTcl/x86_64-linux-glibc-2.17/20210816T193804Z/ActiveTcl-8.6.11.1.0000-x86_64-linux-glibc-2.17-e4e2f327.tar.gz?****************=***************************=************************************-east-1%2Fs3%2F***********&X-Amz-Date=20220224T060023Z&*******************************=host&***********=*****************************************************
Resolving camel-builds.s3.amazonaws.com (camel-builds.s3.amazonaws.com)... 52.216.240.84
Connecting to camel-builds.s3.amazonaws.com (camel-builds.s3.amazonaws.com)|52.216.240.84|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 27637315 (26M) [application/gzip]
ActiveTcl-8.6.11.1.0000-x86_64-linux-glibc-2.17-e4e2f327.tar.gz?****************=*********************************************************************-east-1%2Fs3%2F***********&X-Amz-Date=20220224T060023Z&*******************************=host&***********=****************************************************************: File name too long

Cannot write to 'ActiveTcl-8.6.11.1.0000-x86_64-linux-glibc-2.17-e4e2f327.tar.gz?****************=********************************************************************-east-1%2Fs3%2F***********&X-Amz-Date=20220224T060023Z&*******************************=host&***********=***************************************************************' (Success).

The write failed because the output file name was too long; the fix is to rename the downloaded file with wget's -O option:

$ wget -c 'https://camel-builds.s3.amazonaws.com/ActiveTcl/x86_64-linux-glibc-2.17/20210816T193804Z/ActiveTcl-8.6.11.1.0000-x86_64-linux-glibc-2.17-e4e2f327.tar.gz?****************=**********************************************************************-east-1%2Fs3%2F***********&X-Amz-Date=20220224T060023Z&*******************************=host&***********=***************************************************************' -O tcl-8.6.tar.gz