The code is as follows:

from pymongo import MongoClient, UpdateOne

# scheduledFlows, opsDt and opsDtStr are assumed to be defined earlier in the script.
mongoClient = MongoClient('mongodb://172.16.72.213:27017/')
opsDb = mongoClient.ops
azScheduled = opsDb.azScheduledFlow

# Build one upsert per flow, keyed by date, project and flow name.
bulkOpers = []
for flow in scheduledFlows.values():
    bulkOpers.append(UpdateOne(
        {'opsDt': opsDt, 'projectId': flow['projectId'],
         'projectName': flow['projectName'], 'flowName': flow['flowName']},
        {'$set': {'opsDateTime': opsDtStr, 'status': flow['status'],
                  'startTime': flow['startTime'], 'endTime': flow['endTime'],
                  'elapsed': flow['elapsed']}},
        upsert=True))
azScheduled.bulk_write(bulkOpers)
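Because each UpdateOne is keyed on opsDt plus the project and flow identifiers and uses upsert=True, re-running the script for the same day updates the existing documents instead of inserting duplicates.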

Exception description

Querying data on a MongoDB SECONDARY node fails with the following error:

> show databases;
2018-09-20T17:40:55.377+0800 E QUERY [thread1] Error: listDatabases failed:{ "ok" : 0, "errmsg" : "not master and slaveOk=false", "code" : 13435 } :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:62:1
shellHelper.show@src/mongo/shell/utils.js:781:19
shellHelper@src/mongo/shell/utils.js:671:15
@(shellhelp2):1:1
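The message "not master and slaveOk=false" means the secondary refuses reads by default; the usual way to allow them is to run rs.slaveOk() in the shell on that secondary, or to set a secondary-capable read preference when connecting through a driver.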

There are two simple ways to read a Properties file in Java, both based on the resource-loading methods of ClassLoader; a short sketch follows the list.

  • public InputStream getResourceAsStream(String name)
    This is an instance method of ClassLoader, so it must be called on a ClassLoader instance (for example one obtained from a Class object via getClassLoader()).
  • public static InputStream getSystemResourceAsStream(String name)
    This is a static method of ClassLoader, so it can be called directly without obtaining a ClassLoader instance, including from static code.
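A minimal sketch of both approaches, assuming a classpath resource named config.properties containing a key app.name (both names are placeholders):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class PropertiesDemo {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();

        // Instance method: call it on a ClassLoader obtained from a Class object.
        try (InputStream in = PropertiesDemo.class.getClassLoader()
                .getResourceAsStream("config.properties")) {
            props.load(in);
        }

        // Static method: no ClassLoader instance needed; uses the system class loader.
        try (InputStream in = ClassLoader.getSystemResourceAsStream("config.properties")) {
            props.load(in);
        }

        System.out.println(props.getProperty("app.name"));
    }
}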

When using kafka-console-producer.sh to write data to a remote Kafka broker, the following error occurred:

$ bin/kafka-console-producer.sh --broker-list 172.16.72.202:9092 --topic test
This is a message
[2017-08-24 11:47:48,286] ERROR Error when sending message to topic test with key: null, value: 17 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1523 ms has passed since batch creation plus linger time
Aug 24, 2017 11:50:55 AM sun.rmi.transport.tcp.TCPTransport$AcceptLoop executeAcceptLoop
WARNING: RMI TCP Accept-0: accept loop for ServerSocket[addr=0.0.0.0/0.0.0.0,localport=38175] throws
java.io.IOException: The server sockets created using the LocalRMIServerSocketFactory only accept connections from clients running on the host where the RMI remote objects have been exported.
at sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:114)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:400)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:372)
at java.lang.Thread.run(Thread.java:748)

With advertised.host.name not configured, Kafka does not advertise the host.name we set; it advertises the machine's own hostname instead. The remote client has no mapping for that hostname, so naturally it cannot connect. The fix is therefore to configure the hosts file on the remote client: adding an entry for the broker's hostname to the client's /etc/hosts resolved the problem.
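The entry has the usual hosts-file form, mapping the broker's IP to the hostname it advertises (kafka-server below is only a placeholder for the actual hostname):

172.16.72.202 kafka-server   # placeholder hostname; use the hostname the broker actually advertises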


When a producer writes data to Kafka it may hit the exception org.apache.kafka.common.errors.RecordTooLargeException. It is raised because a single message exceeds the configured size limit. The fix is to raise the relevant limits on both sides:

(1) Broker side: server.properties
The message.max.bytes parameter defaults to 1000012 bytes; raise it to a suitable value, e.g. 10485760.

(2) Producer side:
Set the producer parameter max.request.size to the same value as the broker's message.max.bytes, as in the sketch below.
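A minimal sketch of the producer side in Java, assuming the broker above and a 10 MB limit (the topic name test and the message value are only examples):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LargeMessageProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "172.16.72.202:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // max.request.size: keep it in line with the broker's message.max.bytes (10485760 here).
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 10485760);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "a large message payload"));
        }
    }
}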
