A pitfall I hit in a Kafka project (client and server version mismatch)
Reads: 6002
Published: 2019-06-20

This article is about 4,314 characters; reading it takes roughly 14 minutes.

The exception printed to the console when the project starts up:

2017-11-16 12:40:33.105  INFO 10232 --- [           main] o.s.s.c.ThreadPoolTaskScheduler          : Initializing ExecutorService 'taskScheduler'
2017-11-16 12:40:33.871  INFO 10232 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2017-11-16 12:40:33.885  INFO 10232 --- [           main] o.s.c.support.DefaultLifecycleProcessor  : Starting beans in phase 0
2017-11-16 12:40:34.544  INFO 10232 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 0.10.1.1
2017-11-16 12:40:34.544  INFO 10232 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : f10ef2720b03b247
2017-11-16 12:40:34.545  INFO 10232 --- [           main] o.a.k.clients.consumer.ConsumerConfig    : ConsumerConfig values:
    auto.commit.interval.ms = 100
    auto.offset.reset = latest
    bootstrap.servers = [192.168.71.11:9092, 192.168.71.12:9092, 192.168.71.13:9092]
    check.crcs = true
    client.id =
    connections.max.idle.ms = 540000
    enable.auto.commit = true
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = fg
    heartbeat.interval.ms = 3000
    interceptor.classes = null
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    session.timeout.ms = 6000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer

2017-11-16 12:40:34.558 ERROR 10232 --- [ntainer#0-1-C-1] essageListenerContainer$ListenerConsumer : Container exception

org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'brokers': Error reading field 'host': Error reading string of length 26721, only 149 bytes available
    at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
    at org.apache.kafka.clients.NetworkClient.parseResponse(NetworkClient.java:380)
    at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:449)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:269)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:180)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:248)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:556)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
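As the title indicates, this SchemaException is a symptom of a protocol mismatch between the client and the broker: the 0.10.1.1 client shown in the log parses the broker's metadata response with its own schema and misreads a length field ("Error reading string of length 26721, only 149 bytes available"). In that era of Kafka, older brokers could not serve newer clients, so the fix is to pin the kafka-clients dependency to a version the broker supports (or upgrade the broker). A minimal Maven sketch follows; the 0.9.0.1 version is an assumption for illustration, so check what your cluster actually runs before copying it:

```xml
<!-- pom.xml fragment: force the Kafka client to match the broker's
     wire protocol. The version below is an ASSUMED example -verify
     your broker's real version first. -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.9.0.1</version>
</dependency>
```

One way to find the broker version on an old cluster is to look at the jar names under the broker's libs directory (the kafka_&lt;scala&gt;-&lt;version&gt;.jar filename embeds it). Note that if you use spring-kafka, its version must also be compatible with the kafka-clients version you pin, so check its compatibility matrix as well.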

Reposted from: https://www.cnblogs.com/jun1019/p/7843798.html
