docker-compose-files/hyperledger_fabric/v1.4.0/kafka/logs/dev_all.log

Attaching to peer0.org1.example.com, peer1.org2.example.com, peer1.org1.example.com, peer0.org2.example.com, orderer0.example.com, orderer1.example.com, kafka1, kafka2, kafka0, kafka3, fabric-cli, zookeeper2, zookeeper1, zookeeper0
peer1.org2.example.com | 2018-12-19 08:16:56.627 UTC [main] InitCmd -> WARN 001 CORE_LOGGING_LEVEL is no longer supported, please use the FABRIC_LOGGING_SPEC environment variable
peer1.org2.example.com | 2018-12-19 08:16:56.971 UTC [nodeCmd] serve -> INFO 002 Starting peer:
peer1.org2.example.com | Version: 1.4.0-rc1
peer1.org2.example.com | Commit SHA: development build
peer1.org2.example.com | Go version: go1.11.2
peer1.org2.example.com | OS/Arch: linux/amd64
peer1.org2.example.com | Chaincode:
peer1.org2.example.com | Base Image Version: 0.4.14
peer1.org2.example.com | Base Docker Namespace: hyperledger
peer1.org2.example.com | Base Docker Label: org.hyperledger.fabric
peer1.org2.example.com | Docker Namespace: hyperledger
peer1.org2.example.com | 2018-12-19 08:16:56.972 UTC [ledgermgmt] initialize -> INFO 003 Initializing ledger mgmt
peer1.org2.example.com | 2018-12-19 08:16:56.972 UTC [kvledger] NewProvider -> INFO 004 Initializing ledger provider
peer1.org2.example.com | 2018-12-19 08:16:57.226 UTC [kvledger] NewProvider -> INFO 005 ledger provider Initialized
peer1.org2.example.com | 2018-12-19 08:16:57.406 UTC [ledgermgmt] initialize -> INFO 006 ledger mgmt initialized
peer1.org2.example.com | 2018-12-19 08:16:57.406 UTC [peer] func1 -> INFO 007 Auto-detected peer address: 172.18.0.13:7051
peer1.org2.example.com | 2018-12-19 08:16:57.406 UTC [peer] func1 -> INFO 008 Returning peer1.org2.example.com:7051
peer1.org2.example.com | 2018-12-19 08:16:57.407 UTC [peer] func1 -> INFO 009 Auto-detected peer address: 172.18.0.13:7051
peer1.org2.example.com | 2018-12-19 08:16:57.407 UTC [peer] func1 -> INFO 00a Returning peer1.org2.example.com:7051
peer1.org2.example.com | 2018-12-19 08:16:57.421 UTC [nodeCmd] serve -> INFO 00b Starting peer with TLS enabled
peer1.org2.example.com | 2018-12-19 08:16:57.438 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 00c Entering computeChaincodeEndpoint with peerHostname: peer1.org2.example.com
peer1.org2.example.com | 2018-12-19 08:16:57.453 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 00d Exit with ccEndpoint: peer1.org2.example.com:7052
peer1.org2.example.com | 2018-12-19 08:16:57.489 UTC [sccapi] registerSysCC -> INFO 00e system chaincode lscc(github.com/hyperledger/fabric/core/scc/lscc) registered
peer1.org2.example.com | 2018-12-19 08:16:57.489 UTC [sccapi] registerSysCC -> INFO 00f system chaincode cscc(github.com/hyperledger/fabric/core/scc/cscc) registered
peer1.org2.example.com | 2018-12-19 08:16:57.492 UTC [sccapi] registerSysCC -> INFO 010 system chaincode qscc(github.com/hyperledger/fabric/core/scc/qscc) registered
peer1.org2.example.com | 2018-12-19 08:16:57.493 UTC [sccapi] registerSysCC -> INFO 011 system chaincode +lifecycle(github.com/hyperledger/fabric/core/chaincode/lifecycle) registered
peer1.org2.example.com | 2018-12-19 08:16:57.530 UTC [gossip.service] func1 -> INFO 012 Initialize gossip with endpoint peer1.org2.example.com:7051 and bootstrap set [peer1.org2.example.com:7051]
peer1.org2.example.com | 2018-12-19 08:16:57.565 UTC [gossip.gossip] NewGossipService -> INFO 013 Creating gossip service with self membership of Endpoint: peer1.org2.example.com:7051, InternalEndpoint: peer1.org2.example.com:7051, PKI-ID: 54071d960ff51087a5562fde4801dfa904c634c6c3c38da0d982a0b1f62f0a27, Metadata:
peer1.org2.example.com | 2018-12-19 08:16:57.567 UTC [gossip.gossip] start -> INFO 014 Gossip instance peer1.org2.example.com:7051 started
peer1.org2.example.com | 2018-12-19 08:16:57.576 UTC [sccapi] deploySysCC -> INFO 015 system chaincode lscc/(github.com/hyperledger/fabric/core/scc/lscc) deployed
peer1.org2.example.com | 2018-12-19 08:16:57.582 UTC [cscc] Init -> INFO 016 Init CSCC
peer1.org2.example.com | 2018-12-19 08:16:57.594 UTC [sccapi] deploySysCC -> INFO 017 system chaincode cscc/(github.com/hyperledger/fabric/core/scc/cscc) deployed
peer1.org2.example.com | 2018-12-19 08:16:57.597 UTC [qscc] Init -> INFO 018 Init QSCC
peer1.org2.example.com | 2018-12-19 08:16:57.607 UTC [sccapi] deploySysCC -> INFO 019 system chaincode qscc/(github.com/hyperledger/fabric/core/scc/qscc) deployed
peer1.org2.example.com | 2018-12-19 08:16:57.609 UTC [sccapi] deploySysCC -> INFO 01a system chaincode +lifecycle/(github.com/hyperledger/fabric/core/chaincode/lifecycle) deployed
peer1.org2.example.com | 2018-12-19 08:16:57.615 UTC [nodeCmd] serve -> INFO 01b Deployed system chaincodes
peer1.org2.example.com | 2018-12-19 08:16:57.616 UTC [discovery] NewService -> INFO 01c Created with config TLS: true, authCacheMaxSize: 1000, authCachePurgeRatio: 0.750000
peer1.org2.example.com | 2018-12-19 08:16:57.619 UTC [nodeCmd] registerDiscoveryService -> INFO 01d Discovery service activated
peer1.org2.example.com | 2018-12-19 08:16:57.620 UTC [nodeCmd] serve -> INFO 01e Starting peer with ID=[name:"peer1.org2.example.com" ], network ID=[dev], address=[peer1.org2.example.com:7051]
peer1.org2.example.com | 2018-12-19 08:16:57.622 UTC [nodeCmd] serve -> INFO 01f Started peer with ID=[name:"peer1.org2.example.com" ], network ID=[dev], address=[peer1.org2.example.com:7051]
peer1.org2.example.com | 2018-12-19 08:17:30.522 UTC [endorser] callChaincode -> INFO 020 [][4e9e78a7] Entry chaincode: name:"cscc"
peer1.org2.example.com | 2018-12-19 08:17:30.523 UTC [ledgermgmt] CreateLedger -> INFO 021 Creating ledger [businesschannel] with genesis block
peer1.org2.example.com | 2018-12-19 08:17:30.533 UTC [fsblkstorage] newBlockfileMgr -> INFO 022 Getting block information from block storage
peer1.org2.example.com | 2018-12-19 08:17:30.560 UTC [kvledger] CommitWithPvtData -> INFO 023 [businesschannel] Committed block [0] with 1 transaction(s) in 17ms (state_validation=3ms block_commit=9ms state_commit=2ms)
peer1.org2.example.com | 2018-12-19 08:17:30.563 UTC [ledgermgmt] CreateLedger -> INFO 024 Created ledger [businesschannel] with genesis block
peer1.org2.example.com | 2018-12-19 08:17:30.573 UTC [gossip.gossip] JoinChan -> INFO 025 Joining gossip network of channel businesschannel with 2 organizations
peer1.org2.example.com | 2018-12-19 08:17:30.574 UTC [gossip.gossip] learnAnchorPeers -> INFO 026 No configured anchor peers of Org1MSP for channel businesschannel to learn about
peer1.org2.example.com | 2018-12-19 08:17:30.574 UTC [gossip.gossip] learnAnchorPeers -> INFO 027 No configured anchor peers of Org2MSP for channel businesschannel to learn about
peer1.org2.example.com | 2018-12-19 08:17:30.597 UTC [gossip.state] NewGossipStateProvider -> INFO 028 Updating metadata information, current ledger sequence is at = 0, next expected block is = 1
peer1.org2.example.com | 2018-12-19 08:17:30.599 UTC [sccapi] deploySysCC -> INFO 029 system chaincode lscc/businesschannel(github.com/hyperledger/fabric/core/scc/lscc) deployed
peer1.org2.example.com | 2018-12-19 08:17:30.600 UTC [cscc] Init -> INFO 02a Init CSCC
peer1.org2.example.com | 2018-12-19 08:17:30.601 UTC [sccapi] deploySysCC -> INFO 02b system chaincode cscc/businesschannel(github.com/hyperledger/fabric/core/scc/cscc) deployed
peer1.org2.example.com | 2018-12-19 08:17:30.602 UTC [qscc] Init -> INFO 02c Init QSCC
peer1.org2.example.com | 2018-12-19 08:17:30.603 UTC [sccapi] deploySysCC -> INFO 02d system chaincode qscc/businesschannel(github.com/hyperledger/fabric/core/scc/qscc) deployed
peer1.org2.example.com | 2018-12-19 08:17:30.604 UTC [sccapi] deploySysCC -> INFO 02e system chaincode +lifecycle/businesschannel(github.com/hyperledger/fabric/core/chaincode/lifecycle) deployed
peer1.org2.example.com | 2018-12-19 08:17:30.605 UTC [endorser] callChaincode -> INFO 02f [][4e9e78a7] Exit chaincode: name:"cscc" (83ms)
peer1.org2.example.com | 2018-12-19 08:17:30.606 UTC [comm.grpc.server] 1 -> INFO 030 unary call completed {"grpc.start_time": "2018-12-19T08:17:30.511Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58262", "grpc.code": "OK", "grpc.call_duration": "94.7823ms"}
peer1.org2.example.com | 2018-12-19 08:17:31.650 UTC [endorser] callChaincode -> INFO 031 [][cf59538f] Entry chaincode: name:"cscc"
peer1.org2.example.com | 2018-12-19 08:17:31.652 UTC [endorser] callChaincode -> INFO 032 [][cf59538f] Exit chaincode: name:"cscc" (2ms)
peer1.org2.example.com | 2018-12-19 08:17:31.653 UTC [comm.grpc.server] 1 -> INFO 033 unary call completed {"grpc.start_time": "2018-12-19T08:17:31.649Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58270", "grpc.code": "OK", "grpc.call_duration": "3.5748ms"}
peer1.org2.example.com | 2018-12-19 08:17:32.748 UTC [endorser] callChaincode -> INFO 034 [][f9872d51] Entry chaincode: name:"qscc"
peer1.org2.example.com | 2018-12-19 08:17:32.750 UTC [endorser] callChaincode -> INFO 035 [][f9872d51] Exit chaincode: name:"qscc" (1ms)
peer1.org2.example.com | 2018-12-19 08:17:32.750 UTC [comm.grpc.server] 1 -> INFO 036 unary call completed {"grpc.start_time": "2018-12-19T08:17:32.747Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58278", "grpc.code": "OK", "grpc.call_duration": "3.3858ms"}
peer1.org2.example.com | 2018-12-19 08:17:36.564 UTC [gossip.election] beLeader -> INFO 037 54071d960ff51087a5562fde4801dfa904c634c6c3c38da0d982a0b1f62f0a27 : Becoming a leader
peer1.org2.example.com | 2018-12-19 08:17:36.564 UTC [gossip.service] func1 -> INFO 038 Elected as a leader, starting delivery service for channel businesschannel
peer1.org2.example.com | 2018-12-19 08:17:36.583 UTC [gossip.privdata] StoreBlock -> INFO 039 [businesschannel] Received block [1] from buffer
peer1.org2.example.com | 2018-12-19 08:17:36.599 UTC [gossip.gossip] JoinChan -> INFO 03a Joining gossip network of channel businesschannel with 2 organizations
peer1.org2.example.com | 2018-12-19 08:17:36.599 UTC [gossip.gossip] learnAnchorPeers -> INFO 03b Learning about the configured anchor peers of Org1MSP for channel businesschannel : [{peer0.org1.example.com 7051}]
peer1.org2.example.com | 2018-12-19 08:17:36.600 UTC [gossip.gossip] learnAnchorPeers -> INFO 03c No configured anchor peers of Org2MSP for channel businesschannel to learn about
peer1.org2.example.com | 2018-12-19 08:17:36.640 UTC [committer.txvalidator] Validate -> INFO 03d [businesschannel] Validated block [1] in 56ms
peer1.org2.example.com | 2018-12-19 08:17:36.672 UTC [kvledger] CommitWithPvtData -> INFO 03e [businesschannel] Committed block [1] with 1 transaction(s) in 31ms (state_validation=1ms block_commit=21ms state_commit=5ms)
peer1.org2.example.com | 2018-12-19 08:17:36.673 UTC [gossip.privdata] StoreBlock -> INFO 03f [businesschannel] Received block [2] from buffer
peer1.org2.example.com | 2018-12-19 08:17:36.694 UTC [gossip.gossip] JoinChan -> INFO 040 Joining gossip network of channel businesschannel with 2 organizations
peer1.org2.example.com | 2018-12-19 08:17:36.695 UTC [gossip.gossip] learnAnchorPeers -> INFO 041 Learning about the configured anchor peers of Org1MSP for channel businesschannel : [{peer0.org1.example.com 7051}]
peer1.org2.example.com | 2018-12-19 08:17:36.695 UTC [gossip.gossip] learnAnchorPeers -> INFO 042 Learning about the configured anchor peers of Org2MSP for channel businesschannel : [{peer0.org2.example.com 7051}]
peer1.org2.example.com | 2018-12-19 08:17:36.720 UTC [committer.txvalidator] Validate -> INFO 043 [businesschannel] Validated block [2] in 46ms
peer1.org2.example.com | 2018-12-19 08:17:36.764 UTC [kvledger] CommitWithPvtData -> INFO 044 [businesschannel] Committed block [2] with 1 transaction(s) in 41ms (state_validation=0ms block_commit=35ms state_commit=3ms)
peer1.org2.example.com | 2018-12-19 08:17:37.369 UTC [gossip.gossip] handleMessage -> WARN 045 Message GossipMessage: tag:EMPTY alive_msg:<membership:<endpoint:"peer1.org1.example.com:7051" pki_id:"\013\341\342C\224\001E\365\343\257,\206_\343\031\345Q\243\003\363\001=\323J\372\t\327\360\\\006S\312" > timestamp:<inc_num:1545207417405978000 seq_num:31 > > , Envelope: 83 bytes, Signature: 70 bytes isn't valid
peer1.org2.example.com | 2018-12-19 08:17:37.397 UTC [gossip.comm] func1 -> WARN 046 peer0.org1.example.com:7051, PKIid:3d21b0bc142d8ddae3c27797c0c2bf16b05e0414b227484fdbfabf9859231106 isn't responsive: EOF
peer1.org2.example.com | 2018-12-19 08:17:37.397 UTC [gossip.discovery] expireDeadMembers -> WARN 047 Entering [3d21b0bc142d8ddae3c27797c0c2bf16b05e0414b227484fdbfabf9859231106]
peer1.org2.example.com | 2018-12-19 08:17:37.397 UTC [gossip.discovery] expireDeadMembers -> WARN 048 Closing connection to Endpoint: peer0.org1.example.com:7051, InternalEndpoint: , PKI-ID: 3d21b0bc142d8ddae3c27797c0c2bf16b05e0414b227484fdbfabf9859231106, Metadata:
peer1.org2.example.com | 2018-12-19 08:17:37.397 UTC [gossip.discovery] expireDeadMembers -> WARN 049 Exiting
peer1.org2.example.com | 2018-12-19 08:17:37.533 UTC [comm.grpc.server] 1 -> INFO 04a unary call completed {"grpc.start_time": "2018-12-19T08:17:37.533Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:39.532Z", "grpc.peer_address": "172.18.0.14:45942", "grpc.peer_subject": "CN=peer0.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "161.2µs"}
peer1.org2.example.com | 2018-12-19 08:17:37.993 UTC [gossip.gossip] validateMsg -> WARN 04b StateInfo message GossipMessage: tag:CHAN_OR_ORG state_info:<timestamp:<inc_num:1545207450018590000 seq_num:1545207455907664800 > pki_id:"\013\341\342C\224\001E\365\343\257,\206_\343\031\345Q\243\003\363\001=\323J\372\t\327\360\\\006S\312" channel_MAC:"\344\265j[\3752\300\2262w\333\215z9\310\264&\370\364\372}~\262\333\217\027\344\276\327Lt\315" properties:<ledger_height:3 > > , Envelope: 98 bytes, Signature: 71 bytes is found invalid: PKIID wasn't found
peer1.org2.example.com | 2018-12-19 08:17:37.994 UTC [gossip.gossip] handleMessage -> WARN 04c Message GossipMessage: tag:CHAN_OR_ORG state_info:<timestamp:<inc_num:1545207450018590000 seq_num:1545207455907664800 > pki_id:"\013\341\342C\224\001E\365\343\257,\206_\343\031\345Q\243\003\363\001=\323J\372\t\327\360\\\006S\312" channel_MAC:"\344\265j[\3752\300\2262w\333\215z9\310\264&\370\364\372}~\262\333\217\027\344\276\327Lt\315" properties:<ledger_height:3 > > , Envelope: 98 bytes, Signature: 71 bytes isn't valid
peer1.org2.example.com | 2018-12-19 08:17:40.538 UTC [gossip.channel] reportMembershipChanges -> INFO 04d Membership view has changed. peers went online: [[peer0.org2.example.com:7051 ]] , current view: [[peer0.org2.example.com:7051 ]]
peer1.org2.example.com | 2018-12-19 08:17:42.171 UTC [endorser] callChaincode -> INFO 04e [][75d61ceb] Entry chaincode: name:"lscc"
peer1.org2.example.com | 2018-12-19 08:17:42.173 UTC [lscc] executeInstall -> INFO 04f Installed Chaincode [exp02] Version [1.0] to peer
peer1.org2.example.com | 2018-12-19 08:17:42.173 UTC [endorser] callChaincode -> INFO 050 [][75d61ceb] Exit chaincode: name:"lscc" (2ms)
peer1.org2.example.com | 2018-12-19 08:17:42.174 UTC [comm.grpc.server] 1 -> INFO 051 unary call completed {"grpc.start_time": "2018-12-19T08:17:42.17Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58338", "grpc.code": "OK", "grpc.call_duration": "3.94ms"}
peer1.org2.example.com | 2018-12-19 08:17:42.365 UTC [gossip.gossip] handleMessage -> WARN 052 Message GossipMessage: tag:EMPTY alive_msg:<membership:<endpoint:"peer1.org1.example.com:7051" pki_id:"\013\341\342C\224\001E\365\343\257,\206_\343\031\345Q\243\003\363\001=\323J\372\t\327\360\\\006S\312" > timestamp:<inc_num:1545207417405978000 seq_num:34 > > , Envelope: 83 bytes, Signature: 71 bytes isn't valid
peer1.org2.example.com | 2018-12-19 08:17:42.432 UTC [gossip.gossip] handleMessage -> WARN 053 Message GossipMessage: tag:EMPTY alive_msg:<membership:<endpoint:"peer1.org1.example.com:7051" pki_id:"\013\341\342C\224\001E\365\343\257,\206_\343\031\345Q\243\003\363\001=\323J\372\t\327\360\\\006S\312" > timestamp:<inc_num:1545207417405978000 seq_num:34 > > , Envelope: 83 bytes, Signature: 71 bytes isn't valid
peer1.org2.example.com | 2018-12-19 08:17:45.538 UTC [gossip.channel] reportMembershipChanges -> INFO 054 Membership view has changed. peers went online: [[peer0.org1.example.com:7051 ]] , current view: [[peer0.org1.example.com:7051 ] [peer0.org2.example.com:7051 ]]
peer1.org2.example.com | 2018-12-19 08:17:50.544 UTC [gossip.channel] reportMembershipChanges -> INFO 055 Membership view has changed. peers went online: [[peer1.org1.example.com:7051 ]] , current view: [[peer1.org1.example.com:7051 ] [peer0.org1.example.com:7051 ] [peer0.org2.example.com:7051 ]]
peer1.org2.example.com | 2018-12-19 08:18:28.871 UTC [comm.grpc.server] 1 -> INFO 056 unary call completed {"grpc.start_time": "2018-12-19T08:18:28.868Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58352", "grpc.code": "OK", "grpc.call_duration": "2.9777ms"}
peer1.org2.example.com | 2018-12-19 08:18:30.477 UTC [gossip.privdata] StoreBlock -> INFO 057 [businesschannel] Received block [3] from buffer
peer1.org2.example.com | 2018-12-19 08:18:30.484 UTC [committer.txvalidator] Validate -> INFO 058 [businesschannel] Validated block [3] in 5ms
peer1.org2.example.com | 2018-12-19 08:18:30.488 UTC [cceventmgmt] HandleStateUpdates -> INFO 059 Channel [businesschannel]: Handling deploy or update of chaincode [exp02]
peer1.org2.example.com | 2018-12-19 08:18:30.525 UTC [kvledger] CommitWithPvtData -> INFO 05a [businesschannel] Committed block [3] with 1 transaction(s) in 38ms (state_validation=4ms block_commit=9ms state_commit=18ms)
peer1.org2.example.com | 2018-12-19 08:18:31.120 UTC [endorser] callChaincode -> INFO 05b [businesschannel][4d105108] Entry chaincode: name:"exp02"
peer1.org1.example.com | 2018-12-19 08:16:56.630 UTC [main] InitCmd -> WARN 001 CORE_LOGGING_LEVEL is no longer supported, please use the FABRIC_LOGGING_SPEC environment variable
peer1.org1.example.com | 2018-12-19 08:16:56.824 UTC [nodeCmd] serve -> INFO 002 Starting peer:
peer1.org1.example.com | Version: 1.4.0-rc1
peer1.org1.example.com | Commit SHA: development build
peer1.org1.example.com | Go version: go1.11.2
peer1.org1.example.com | OS/Arch: linux/amd64
peer1.org1.example.com | Chaincode:
peer1.org1.example.com | Base Image Version: 0.4.14
peer1.org1.example.com | Base Docker Namespace: hyperledger
peer1.org1.example.com | Base Docker Label: org.hyperledger.fabric
peer1.org1.example.com | Docker Namespace: hyperledger
peer1.org1.example.com | 2018-12-19 08:16:56.829 UTC [ledgermgmt] initialize -> INFO 003 Initializing ledger mgmt
peer1.org1.example.com | 2018-12-19 08:16:56.831 UTC [kvledger] NewProvider -> INFO 004 Initializing ledger provider
peer1.org1.example.com | 2018-12-19 08:16:57.166 UTC [kvledger] NewProvider -> INFO 005 ledger provider Initialized
peer1.org1.example.com | 2018-12-19 08:16:57.281 UTC [ledgermgmt] initialize -> INFO 006 ledger mgmt initialized
peer1.org1.example.com | 2018-12-19 08:16:57.282 UTC [peer] func1 -> INFO 007 Auto-detected peer address: 172.18.0.15:7051
peer1.org1.example.com | 2018-12-19 08:16:57.282 UTC [peer] func1 -> INFO 008 Returning peer1.org1.example.com:7051
peer1.org1.example.com | 2018-12-19 08:16:57.289 UTC [peer] func1 -> INFO 009 Auto-detected peer address: 172.18.0.15:7051
peer1.org1.example.com | 2018-12-19 08:16:57.290 UTC [peer] func1 -> INFO 00a Returning peer1.org1.example.com:7051
peer1.org1.example.com | 2018-12-19 08:16:57.309 UTC [nodeCmd] serve -> INFO 00b Starting peer with TLS enabled
peer1.org1.example.com | 2018-12-19 08:16:57.323 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 00c Entering computeChaincodeEndpoint with peerHostname: peer1.org1.example.com
peer1.org1.example.com | 2018-12-19 08:16:57.323 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 00d Exit with ccEndpoint: peer1.org1.example.com:7052
peer1.org1.example.com | 2018-12-19 08:16:57.343 UTC [sccapi] registerSysCC -> INFO 00e system chaincode lscc(github.com/hyperledger/fabric/core/scc/lscc) registered
peer1.org1.example.com | 2018-12-19 08:16:57.349 UTC [sccapi] registerSysCC -> INFO 00f system chaincode cscc(github.com/hyperledger/fabric/core/scc/cscc) registered
peer1.org1.example.com | 2018-12-19 08:16:57.350 UTC [sccapi] registerSysCC -> INFO 010 system chaincode qscc(github.com/hyperledger/fabric/core/scc/qscc) registered
peer1.org1.example.com | 2018-12-19 08:16:57.354 UTC [sccapi] registerSysCC -> INFO 011 system chaincode +lifecycle(github.com/hyperledger/fabric/core/chaincode/lifecycle) registered
peer1.org1.example.com | 2018-12-19 08:16:57.387 UTC [gossip.service] func1 -> INFO 012 Initialize gossip with endpoint peer1.org1.example.com:7051 and bootstrap set [peer0.org1.example.com:7051]
peer1.org1.example.com | 2018-12-19 08:16:57.406 UTC [gossip.gossip] NewGossipService -> INFO 013 Creating gossip service with self membership of Endpoint: peer1.org1.example.com:7051, InternalEndpoint: peer1.org1.example.com:7051, PKI-ID: 0be1e243940145f5e3af2c865fe319e551a303f3013dd34afa09d7f05c0653ca, Metadata:
peer1.org1.example.com | 2018-12-19 08:16:57.432 UTC [gossip.gossip] start -> INFO 014 Gossip instance peer1.org1.example.com:7051 started
peer1.org1.example.com | 2018-12-19 08:16:57.442 UTC [sccapi] deploySysCC -> INFO 015 system chaincode lscc/(github.com/hyperledger/fabric/core/scc/lscc) deployed
peer1.org1.example.com | 2018-12-19 08:16:57.474 UTC [cscc] Init -> INFO 016 Init CSCC
peer1.org1.example.com | 2018-12-19 08:16:57.480 UTC [sccapi] deploySysCC -> INFO 017 system chaincode cscc/(github.com/hyperledger/fabric/core/scc/cscc) deployed
peer1.org1.example.com | 2018-12-19 08:16:57.486 UTC [qscc] Init -> INFO 018 Init QSCC
peer1.org1.example.com | 2018-12-19 08:16:57.493 UTC [sccapi] deploySysCC -> INFO 019 system chaincode qscc/(github.com/hyperledger/fabric/core/scc/qscc) deployed
peer1.org1.example.com | 2018-12-19 08:16:57.505 UTC [sccapi] deploySysCC -> INFO 01a system chaincode +lifecycle/(github.com/hyperledger/fabric/core/chaincode/lifecycle) deployed
peer1.org1.example.com | 2018-12-19 08:16:57.510 UTC [nodeCmd] serve -> INFO 01b Deployed system chaincodes
peer1.org1.example.com | 2018-12-19 08:16:57.531 UTC [discovery] NewService -> INFO 01c Created with config TLS: true, authCacheMaxSize: 1000, authCachePurgeRatio: 0.750000
peer1.org1.example.com | 2018-12-19 08:16:57.534 UTC [nodeCmd] registerDiscoveryService -> INFO 01d Discovery service activated
peer1.org1.example.com | 2018-12-19 08:16:57.535 UTC [nodeCmd] serve -> INFO 01e Starting peer with ID=[name:"peer1.org1.example.com" ], network ID=[dev], address=[peer1.org1.example.com:7051]
peer1.org1.example.com | 2018-12-19 08:16:57.536 UTC [nodeCmd] serve -> INFO 01f Started peer with ID=[name:"peer1.org1.example.com" ], network ID=[dev], address=[peer1.org1.example.com:7051]
peer1.org1.example.com | 2018-12-19 08:17:29.956 UTC [endorser] callChaincode -> INFO 020 [][5a8f88e1] Entry chaincode: name:"cscc"
peer1.org1.example.com | 2018-12-19 08:17:29.958 UTC [ledgermgmt] CreateLedger -> INFO 021 Creating ledger [businesschannel] with genesis block
peer1.org1.example.com | 2018-12-19 08:17:29.962 UTC [fsblkstorage] newBlockfileMgr -> INFO 022 Getting block information from block storage
peer1.org1.example.com | 2018-12-19 08:17:29.998 UTC [kvledger] CommitWithPvtData -> INFO 023 [businesschannel] Committed block [0] with 1 transaction(s) in 26ms (state_validation=3ms block_commit=10ms state_commit=8ms)
peer1.org1.example.com | 2018-12-19 08:17:30.003 UTC [ledgermgmt] CreateLedger -> INFO 024 Created ledger [businesschannel] with genesis block
peer1.org1.example.com | 2018-12-19 08:17:30.019 UTC [gossip.gossip] JoinChan -> INFO 025 Joining gossip network of channel businesschannel with 2 organizations
peer1.org1.example.com | 2018-12-19 08:17:30.020 UTC [gossip.gossip] learnAnchorPeers -> INFO 026 No configured anchor peers of Org2MSP for channel businesschannel to learn about
peer1.org1.example.com | 2018-12-19 08:17:30.020 UTC [gossip.gossip] learnAnchorPeers -> INFO 027 No configured anchor peers of Org1MSP for channel businesschannel to learn about
peer1.org1.example.com | 2018-12-19 08:17:30.050 UTC [gossip.state] NewGossipStateProvider -> INFO 028 Updating metadata information, current ledger sequence is at = 0, next expected block is = 1
peer1.org1.example.com | 2018-12-19 08:17:30.052 UTC [sccapi] deploySysCC -> INFO 029 system chaincode lscc/businesschannel(github.com/hyperledger/fabric/core/scc/lscc) deployed
peer1.org1.example.com | 2018-12-19 08:17:30.053 UTC [cscc] Init -> INFO 02a Init CSCC
peer1.org1.example.com | 2018-12-19 08:17:30.053 UTC [sccapi] deploySysCC -> INFO 02b system chaincode cscc/businesschannel(github.com/hyperledger/fabric/core/scc/cscc) deployed
peer1.org1.example.com | 2018-12-19 08:17:30.054 UTC [qscc] Init -> INFO 02c Init QSCC
peer1.org1.example.com | 2018-12-19 08:17:30.054 UTC [sccapi] deploySysCC -> INFO 02d system chaincode qscc/businesschannel(github.com/hyperledger/fabric/core/scc/qscc) deployed
peer1.org1.example.com | 2018-12-19 08:17:30.055 UTC [sccapi] deploySysCC -> INFO 02e system chaincode +lifecycle/businesschannel(github.com/hyperledger/fabric/core/chaincode/lifecycle) deployed
peer1.org1.example.com | 2018-12-19 08:17:30.055 UTC [endorser] callChaincode -> INFO 02f [][5a8f88e1] Exit chaincode: name:"cscc" (99ms)
peer1.org1.example.com | 2018-12-19 08:17:30.056 UTC [comm.grpc.server] 1 -> INFO 030 unary call completed {"grpc.start_time": "2018-12-19T08:17:29.955Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:46114", "grpc.code": "OK", "grpc.call_duration": "100.6643ms"}
peer1.org2.example.com | 2018-12-19 08:18:31.132 UTC [chaincode.platform.golang] GenerateDockerBuild -> INFO 05c building chaincode with ldflagsOpt: '-ldflags "-linkmode external -extldflags '-static'"'
peer1.org2.example.com | 2018-12-19 08:19:13.751 UTC [endorser] callChaincode -> INFO 05d [businesschannel][4d105108] Exit chaincode: name:"exp02" (42700ms)
peer1.org2.example.com | 2018-12-19 08:19:13.752 UTC [comm.grpc.server] 1 -> INFO 05e unary call completed {"grpc.start_time": "2018-12-19T08:18:31.118Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58356", "grpc.code": "OK", "grpc.call_duration": "42.7038585s"}
peer1.org2.example.com | 2018-12-19 08:19:14.312 UTC [endorser] callChaincode -> INFO 05f [businesschannel][fd3bb774] Entry chaincode: name:"exp02"
peer1.org2.example.com | 2018-12-19 08:19:14.315 UTC [endorser] callChaincode -> INFO 060 [businesschannel][fd3bb774] Exit chaincode: name:"exp02" (3ms)
peer1.org2.example.com | 2018-12-19 08:19:14.316 UTC [comm.grpc.server] 1 -> INFO 061 unary call completed {"grpc.start_time": "2018-12-19T08:19:14.31Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58368", "grpc.code": "OK", "grpc.call_duration": "6.1636ms"}
peer1.org2.example.com | 2018-12-19 08:19:16.099 UTC [gossip.privdata] StoreBlock -> INFO 062 [businesschannel] Received block [4] from buffer
peer1.org2.example.com | 2018-12-19 08:19:16.104 UTC [committer.txvalidator] Validate -> INFO 063 [businesschannel] Validated block [4] in 3ms
peer1.org2.example.com | 2018-12-19 08:19:16.129 UTC [kvledger] CommitWithPvtData -> INFO 064 [businesschannel] Committed block [4] with 1 transaction(s) in 22ms (state_validation=0ms block_commit=15ms state_commit=3ms)
peer1.org2.example.com | 2018-12-19 08:19:16.504 UTC [endorser] callChaincode -> INFO 065 [businesschannel][0430038e] Entry chaincode: name:"exp02"
peer1.org2.example.com | 2018-12-19 08:19:16.507 UTC [endorser] callChaincode -> INFO 066 [businesschannel][0430038e] Exit chaincode: name:"exp02" (3ms)
peer1.org2.example.com | 2018-12-19 08:19:16.508 UTC [comm.grpc.server] 1 -> INFO 067 unary call completed {"grpc.start_time": "2018-12-19T08:19:16.502Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58372", "grpc.code": "OK", "grpc.call_duration": "5.4967ms"}
peer1.org2.example.com | 2018-12-19 08:19:16.749 UTC [endorser] callChaincode -> INFO 068 [businesschannel][3435cdab] Entry chaincode: name:"exp02"
peer1.org1.example.com | 2018-12-19 08:17:31.313 UTC [endorser] callChaincode -> INFO 031 [][780c8c5c] Entry chaincode: name:"cscc"
peer1.org1.example.com | 2018-12-19 08:17:31.314 UTC [endorser] callChaincode -> INFO 032 [][780c8c5c] Exit chaincode: name:"cscc" (1ms)
peer1.org1.example.com | 2018-12-19 08:17:31.314 UTC [comm.grpc.server] 1 -> INFO 033 unary call completed {"grpc.start_time": "2018-12-19T08:17:31.312Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:46122", "grpc.code": "OK", "grpc.call_duration": "1.9219ms"}
peer1.org1.example.com | 2018-12-19 08:17:32.359 UTC [endorser] callChaincode -> INFO 034 [][b5252380] Entry chaincode: name:"qscc"
peer1.org1.example.com | 2018-12-19 08:17:32.361 UTC [endorser] callChaincode -> INFO 035 [][b5252380] Exit chaincode: name:"qscc" (2ms)
peer1.org1.example.com | 2018-12-19 08:17:32.361 UTC [comm.grpc.server] 1 -> INFO 036 unary call completed {"grpc.start_time": "2018-12-19T08:17:32.358Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:46130", "grpc.code": "OK", "grpc.call_duration": "3.1506ms"}
peer1.org1.example.com | 2018-12-19 08:17:34.985 UTC [gossip.channel] reportMembershipChanges -> INFO 037 Membership view has changed. peers went online: [[peer0.org1.example.com:7051]] , current view: [[peer0.org1.example.com:7051]]
peer1.org1.example.com | 2018-12-19 08:17:35.759 UTC [gossip.election] leaderElection -> INFO 038 0be1e243940145f5e3af2c865fe319e551a303f3013dd34afa09d7f05c0653ca : Some peer is already a leader
peer1.org1.example.com | 2018-12-19 08:17:35.771 UTC [gossip.privdata] StoreBlock -> INFO 039 [businesschannel] Received block [1] from buffer
peer1.org1.example.com | 2018-12-19 08:17:35.785 UTC [gossip.gossip] JoinChan -> INFO 03a Joining gossip network of channel businesschannel with 2 organizations
peer1.org1.example.com | 2018-12-19 08:17:35.786 UTC [gossip.gossip] learnAnchorPeers -> INFO 03b Learning about the configured anchor peers of Org1MSP for channel businesschannel : [{peer0.org1.example.com 7051}]
peer1.org1.example.com | 2018-12-19 08:17:35.786 UTC [gossip.gossip] learnAnchorPeers -> INFO 03c No configured anchor peers of Org2MSP for channel businesschannel to learn about
peer1.org1.example.com | 2018-12-19 08:17:35.786 UTC [gossip.service] updateEndpoints -> WARN 03d Failed to update ordering service endpoints, due to Channel with businesschannel id was not found
peer1.org1.example.com | 2018-12-19 08:17:35.800 UTC [committer.txvalidator] Validate -> INFO 03e [businesschannel] Validated block [1] in 28ms
peer1.org1.example.com | 2018-12-19 08:17:35.826 UTC [kvledger] CommitWithPvtData -> INFO 03f [businesschannel] Committed block [1] with 1 transaction(s) in 25ms (state_validation=0ms block_commit=18ms state_commit=3ms)
peer1.org1.example.com | 2018-12-19 08:17:35.827 UTC [gossip.privdata] StoreBlock -> INFO 040 [businesschannel] Received block [2] from buffer
peer1.org1.example.com | 2018-12-19 08:17:35.865 UTC [gossip.gossip] JoinChan -> INFO 041 Joining gossip network of channel businesschannel with 2 organizations
peer1.org1.example.com | 2018-12-19 08:17:35.872 UTC [gossip.gossip] learnAnchorPeers -> INFO 042 Learning about the configured anchor peers of Org1MSP for channel businesschannel : [{peer0.org1.example.com 7051}]
peer1.org1.example.com | 2018-12-19 08:17:35.873 UTC [gossip.gossip] learnAnchorPeers -> INFO 043 Learning about the configured anchor peers of Org2MSP for channel businesschannel : [{peer0.org2.example.com 7051}]
peer1.org1.example.com | 2018-12-19 08:17:35.873 UTC [gossip.service] updateEndpoints -> WARN 044 Failed to update ordering service endpoints, due to Channel with businesschannel id was not found
peer1.org1.example.com | 2018-12-19 08:17:35.887 UTC [committer.txvalidator] Validate -> INFO 045 [businesschannel] Validated block [2] in 59ms
peer1.org1.example.com | 2018-12-19 08:17:35.907 UTC [kvledger] CommitWithPvtData -> INFO 046 [businesschannel] Committed block [2] with 1 transaction(s) in 19ms (state_validation=1ms block_commit=11ms state_commit=3ms)
peer1.org1.example.com | 2018-12-19 08:17:35.929 UTC [gossip.comm] func1 -> WARN 047 peer0.org1.example.com:7051, PKIid:3d21b0bc142d8ddae3c27797c0c2bf16b05e0414b227484fdbfabf9859231106 isn't responsive: EOF
peer1.org1.example.com | 2018-12-19 08:17:35.933 UTC [gossip.discovery] expireDeadMembers -> WARN 048 Entering [3d21b0bc142d8ddae3c27797c0c2bf16b05e0414b227484fdbfabf9859231106]
peer1.org1.example.com | 2018-12-19 08:17:35.935 UTC [gossip.discovery] expireDeadMembers -> WARN 049 Closing connection to Endpoint: peer0.org1.example.com:7051, InternalEndpoint: peer0.org1.example.com:7051, PKI-ID: 3d21b0bc142d8ddae3c27797c0c2bf16b05e0414b227484fdbfabf9859231106, Metadata:
peer1.org1.example.com | 2018-12-19 08:17:35.935 UTC [gossip.discovery] expireDeadMembers -> WARN 04a Exiting
peer1.org1.example.com | 2018-12-19 08:17:37.517 UTC [gossip.gossip] handleMessage -> WARN 04b Message GossipMessage: tag:EMPTY alive_msg:<membership:<endpoint:"peer1.org2.example.com:7051" pki_id:"T\007\035\226\017\365\020\207\245V/\336H\001\337\251\004\3064\306\303\303\215\240\331\202\240\261\366/\n'" > timestamp:<inc_num:1545207417549655700 seq_num:30 > > , Envelope: 83 bytes, Signature: 71 bytes isn't valid
peer1.org1.example.com | 2018-12-19 08:17:37.527 UTC [gossip.gossip] handleMessage -> WARN 04c Message GossipMessage: tag:EMPTY alive_msg:<membership:<endpoint:"peer1.org2.example.com:7051" pki_id:"T\007\035\226\017\365\020\207\245V/\336H\001\337\251\004\3064\306\303\303\215\240\331\202\240\261\366/\n'" > timestamp:<inc_num:1545207417549655700 seq_num:30 > > , Envelope: 83 bytes, Signature: 71 bytes isn't valid
peer1.org1.example.com | 2018-12-19 08:17:39.984 UTC [gossip.channel] reportMembershipChanges -> INFO 04d Membership view has changed. peers went online: [[peer0.org2.example.com:7051 ]] , current view: [[peer0.org1.example.com:7051] [peer0.org2.example.com:7051 ]]
peer1.org1.example.com | 2018-12-19 08:17:40.674 UTC [endorser] callChaincode -> INFO 04e [][57ff025b] Entry chaincode: name:"lscc"
peer1.org1.example.com | 2018-12-19 08:17:40.675 UTC [lscc] executeInstall -> INFO 04f Installed Chaincode [exp02] Version [1.0] to peer
peer1.org1.example.com | 2018-12-19 08:17:40.676 UTC [endorser] callChaincode -> INFO 050 [][57ff025b] Exit chaincode: name:"lscc" (2ms)
peer1.org1.example.com | 2018-12-19 08:17:40.676 UTC [comm.grpc.server] 1 -> INFO 051 unary call completed {"grpc.start_time": "2018-12-19T08:17:40.673Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:46186", "grpc.code": "OK", "grpc.call_duration": "3.4656ms"}
peer1.org2.example.com | 2018-12-19 08:19:16.754 UTC [endorser] callChaincode -> INFO 069 [businesschannel][3435cdab] Exit chaincode: name:"exp02" (5ms)
peer1.org2.example.com | 2018-12-19 08:19:16.755 UTC [comm.grpc.server] 1 -> INFO 06a unary call completed {"grpc.start_time": "2018-12-19T08:19:16.747Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58376", "grpc.code": "OK", "grpc.call_duration": "8.3062ms"}
peer1.org2.example.com | 2018-12-19 08:19:18.815 UTC [gossip.privdata] StoreBlock -> INFO 06b [businesschannel] Received block [5] from buffer
peer1.org2.example.com | 2018-12-19 08:19:18.818 UTC [committer.txvalidator] Validate -> INFO 06c [businesschannel] Validated block [5] in 3ms
peer1.org2.example.com | 2018-12-19 08:19:18.845 UTC [kvledger] CommitWithPvtData -> INFO 06d [businesschannel] Committed block [5] with 1 transaction(s) in 26ms (state_validation=0ms block_commit=11ms state_commit=2ms)
peer1.org2.example.com | 2018-12-19 08:19:36.903 UTC [gossip.privdata] StoreBlock -> INFO 06e [businesschannel] Received block [6] from buffer
peer1.org2.example.com | 2018-12-19 08:19:36.912 UTC [cauthdsl] deduplicate -> WARN 06f De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
peer1.org2.example.com | 2018-12-19 08:19:36.918 UTC [cauthdsl] deduplicate -> WARN 070 De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
peer1.org2.example.com | 2018-12-19 08:19:36.949 UTC [gossip.gossip] JoinChan -> INFO 071 Joining gossip network of channel businesschannel with 3 organizations
peer1.org2.example.com | 2018-12-19 08:19:36.949 UTC [gossip.gossip] learnAnchorPeers -> INFO 072 Learning about the configured anchor peers of Org2MSP for channel businesschannel : [{peer0.org2.example.com 7051}]
peer1.org2.example.com | 2018-12-19 08:19:36.949 UTC [gossip.gossip] learnAnchorPeers -> INFO 073 No configured anchor peers of Org3MSP for channel businesschannel to learn about
peer1.org2.example.com | 2018-12-19 08:19:36.951 UTC [gossip.gossip] learnAnchorPeers -> INFO 074 Learning about the configured anchor peers of Org1MSP for channel businesschannel : [{peer0.org1.example.com 7051}]
peer1.org2.example.com | 2018-12-19 08:19:36.973 UTC [comm.grpc.server] 1 -> INFO 075 streaming call completed {"grpc.start_time": "2018-12-19T08:17:37.536Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.peer_address": "172.18.0.14:45942", "grpc.peer_subject": "CN=peer0.org1.example.com,L=San Francisco,ST=California,C=US", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "1m59.577095s"}
peer1.org2.example.com | 2018-12-19 08:19:36.981 UTC [committer.txvalidator] Validate -> INFO 076 [businesschannel] Validated block [6] in 72ms
peer1.org2.example.com | 2018-12-19 08:19:37.037 UTC [kvledger] CommitWithPvtData -> INFO 077 [businesschannel] Committed block [6] with 1 transaction(s) in 55ms (state_validation=1ms block_commit=26ms state_commit=18ms)
peer1.org2.example.com | 2018-12-19 08:19:37.261 UTC [gossip.comm] func1 -> WARN 078 peer0.org2.example.com:7051, PKIid:75b2768046f1cff79a75d29027b3162b4c2d489ed2c22c48403d0760a7c0a76b isn't responsive: EOF
peer1.org2.example.com | 2018-12-19 08:19:37.262 UTC [gossip.discovery] expireDeadMembers -> WARN 079 Entering [75b2768046f1cff79a75d29027b3162b4c2d489ed2c22c48403d0760a7c0a76b]
peer1.org2.example.com | 2018-12-19 08:19:37.262 UTC [gossip.discovery] expireDeadMembers -> WARN 07a Closing connection to Endpoint: peer0.org2.example.com:7051, InternalEndpoint: peer0.org2.example.com:7051, PKI-ID: 75b2768046f1cff79a75d29027b3162b4c2d489ed2c22c48403d0760a7c0a76b, Metadata:
peer1.org2.example.com | 2018-12-19 08:19:37.262 UTC [gossip.discovery] expireDeadMembers -> WARN 07b Exiting
peer1.org2.example.com | 2018-12-19 08:19:37.322 UTC [comm.grpc.server] 1 -> INFO 07c unary call completed {"grpc.start_time": "2018-12-19T08:19:37.322Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:19:39.322Z", "grpc.peer_address": "172.18.0.12:54732", "grpc.peer_subject": "CN=peer0.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "120.8µs"}
peer1.org2.example.com | 2018-12-19 08:19:40.399 UTC [gossip.channel] reportMembershipChanges -> INFO 07d Membership view has changed. peers went offline: [[peer0.org2.example.com:7051]] , current view: [[peer0.org1.example.com:7051 ] [peer1.org1.example.com:7051 ]]
peer1.org2.example.com | 2018-12-19 08:19:45.398 UTC [gossip.channel] reportMembershipChanges -> INFO 07e Membership view has changed. peers went online: [[peer0.org2.example.com:7051]] , current view: [[peer0.org1.example.com:7051 ] [peer0.org2.example.com:7051] [peer1.org1.example.com:7051 ]]
peer1.org2.example.com | 2018-12-19 08:19:55.169 UTC [endorser] callChaincode -> INFO 07f [][3799de4e] Entry chaincode: name:"cscc"
peer1.org2.example.com | 2018-12-19 08:19:55.173 UTC [endorser] callChaincode -> INFO 080 [][3799de4e] Exit chaincode: name:"cscc" (2ms)
peer1.org2.example.com | 2018-12-19 08:19:55.174 UTC [comm.grpc.server] 1 -> INFO 081 unary call completed {"grpc.start_time": "2018-12-19T08:19:55.168Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58516", "grpc.code": "OK", "grpc.call_duration": "5.39ms"}
peer1.org2.example.com | 2018-12-19 08:19:56.494 UTC [endorser] callChaincode -> INFO 082 [][aa8793e1] Entry chaincode: name:"qscc"
peer1.org2.example.com | 2018-12-19 08:19:56.497 UTC [endorser] callChaincode -> INFO 083 [][aa8793e1] Exit chaincode: name:"qscc" (2ms)
peer1.org2.example.com | 2018-12-19 08:19:56.497 UTC [comm.grpc.server] 1 -> INFO 084 unary call completed {"grpc.start_time": "2018-12-19T08:19:56.492Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58524", "grpc.code": "OK", "grpc.call_duration": "5.5925ms"}
peer1.org1.example.com | 2018-12-19 08:17:42.526 UTC [gossip.gossip] handleMessage -> WARN 052 Message GossipMessage: tag:EMPTY alive_msg:<membership:<endpoint:"peer1.org2.example.com:7051" pki_id:"T\007\035\226\017\365\020\207\245V/\336H\001\337\251\004\3064\306\303\303\215\240\331\202\240\261\366/\n'" > timestamp:<inc_num:1545207417549655700 seq_num:33 > > , Envelope: 83 bytes, Signature: 71 bytes isn't valid
peer1.org1.example.com | 2018-12-19 08:17:42.560 UTC [gossip.gossip] handleMessage -> WARN 053 Message GossipMessage: tag:EMPTY alive_msg:<membership:<endpoint:"peer1.org2.example.com:7051" pki_id:"T\007\035\226\017\365\020\207\245V/\336H\001\337\251\004\3064\306\303\303\215\240\331\202\240\261\366/\n'" > timestamp:<inc_num:1545207417549655700 seq_num:33 > > , Envelope: 83 bytes, Signature: 71 bytes isn't valid
peer1.org1.example.com | 2018-12-19 08:17:46.545 UTC [comm.grpc.server] 1 -> INFO 054 unary call completed {"grpc.start_time": "2018-12-19T08:17:46.545Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:48.544Z", "grpc.peer_address": "172.18.0.13:46358", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "132.1µs"}
peer1.org1.example.com | 2018-12-19 08:17:54.984 UTC [gossip.channel] reportMembershipChanges -> INFO 055 Membership view has changed. peers went online: [[peer1.org2.example.com:7051 ]] , current view: [[peer0.org1.example.com:7051] [peer0.org2.example.com:7051 ] [peer1.org2.example.com:7051 ]]
peer1.org1.example.com | 2018-12-19 08:18:30.413 UTC [gossip.privdata] StoreBlock -> INFO 056 [businesschannel] Received block [3] from buffer
peer1.org1.example.com | 2018-12-19 08:18:30.420 UTC [committer.txvalidator] Validate -> INFO 057 [businesschannel] Validated block [3] in 7ms
peer1.org1.example.com | 2018-12-19 08:18:30.426 UTC [cceventmgmt] HandleStateUpdates -> INFO 058 Channel [businesschannel]: Handling deploy or update of chaincode [exp02]
peer1.org1.example.com | 2018-12-19 08:18:30.458 UTC [kvledger] CommitWithPvtData -> INFO 059 [businesschannel] Committed block [3] with 1 transaction(s) in 34ms (state_validation=5ms block_commit=17ms state_commit=8ms)
peer1.org1.example.com | 2018-12-19 08:19:16.079 UTC [gossip.privdata] StoreBlock -> INFO 05a [businesschannel] Received block [4] from buffer
peer1.org1.example.com | 2018-12-19 08:19:16.082 UTC [committer.txvalidator] Validate -> INFO 05b [businesschannel] Validated block [4] in 1ms
peer1.org1.example.com | 2018-12-19 08:19:16.105 UTC [kvledger] CommitWithPvtData -> INFO 05c [businesschannel] Committed block [4] with 1 transaction(s) in 22ms (state_validation=0ms block_commit=12ms state_commit=4ms)
peer1.org1.example.com | 2018-12-19 08:19:18.836 UTC [gossip.privdata] StoreBlock -> INFO 05d [businesschannel] Received block [5] from buffer
peer1.org1.example.com | 2018-12-19 08:19:18.838 UTC [committer.txvalidator] Validate -> INFO 05e [businesschannel] Validated block [5] in 2ms
peer1.org1.example.com | 2018-12-19 08:19:18.857 UTC [kvledger] CommitWithPvtData -> INFO 05f [businesschannel] Committed block [5] with 1 transaction(s) in 18ms (state_validation=0ms block_commit=10ms state_commit=3ms)
peer1.org1.example.com | 2018-12-19 08:19:36.931 UTC [gossip.privdata] StoreBlock -> INFO 060 [businesschannel] Received block [6] from buffer
peer1.org1.example.com | 2018-12-19 08:19:36.934 UTC [cauthdsl] deduplicate -> WARN 061 De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
peer1.org1.example.com | 2018-12-19 08:19:36.935 UTC [cauthdsl] deduplicate -> WARN 062 De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
peer1.org1.example.com | 2018-12-19 08:19:36.995 UTC [gossip.gossip] JoinChan -> INFO 063 Joining gossip network of channel businesschannel with 3 organizations
peer1.org1.example.com | 2018-12-19 08:19:36.996 UTC [gossip.gossip] learnAnchorPeers -> INFO 064 Learning about the configured anchor peers of Org1MSP for channel businesschannel : [{peer0.org1.example.com 7051}]
peer1.org1.example.com | 2018-12-19 08:19:36.997 UTC [gossip.gossip] learnAnchorPeers -> INFO 065 Learning about the configured anchor peers of Org2MSP for channel businesschannel : [{peer0.org2.example.com 7051}]
peer1.org1.example.com | 2018-12-19 08:19:36.997 UTC [gossip.gossip] learnAnchorPeers -> INFO 066 No configured anchor peers of Org3MSP for channel businesschannel to learn about
peer1.org1.example.com | 2018-12-19 08:19:36.997 UTC [gossip.service] updateEndpoints -> WARN 067 Failed to update ordering service endpoints, due to Channel with businesschannel id was not found
peer1.org1.example.com | 2018-12-19 08:19:37.071 UTC [committer.txvalidator] Validate -> INFO 068 [businesschannel] Validated block [6] in 138ms
peer1.org1.example.com | 2018-12-19 08:19:37.130 UTC [kvledger] CommitWithPvtData -> INFO 069 [businesschannel] Committed block [6] with 1 transaction(s) in 55ms (state_validation=17ms block_commit=29ms state_commit=5ms)
peer1.org1.example.com | 2018-12-19 08:19:37.252 UTC [gossip.comm] func1 -> WARN 06a peer0.org1.example.com:7051, PKIid:3d21b0bc142d8ddae3c27797c0c2bf16b05e0414b227484fdbfabf9859231106 isn't responsive: EOF
peer1.org1.example.com | 2018-12-19 08:19:37.252 UTC [gossip.discovery] expireDeadMembers -> WARN 06b Entering [3d21b0bc142d8ddae3c27797c0c2bf16b05e0414b227484fdbfabf9859231106]
peer1.org1.example.com | 2018-12-19 08:19:37.253 UTC [gossip.discovery] expireDeadMembers -> WARN 06c Closing connection to Endpoint: peer0.org1.example.com:7051, InternalEndpoint: peer0.org1.example.com:7051, PKI-ID: 3d21b0bc142d8ddae3c27797c0c2bf16b05e0414b227484fdbfabf9859231106, Metadata:
peer1.org1.example.com | 2018-12-19 08:19:37.253 UTC [gossip.discovery] expireDeadMembers -> WARN 06d Exiting
peer1.org1.example.com | 2018-12-19 08:19:37.255 UTC [gossip.comm] func1 -> WARN 06e peer0.org2.example.com:7051, PKIid:75b2768046f1cff79a75d29027b3162b4c2d489ed2c22c48403d0760a7c0a76b isn't responsive: EOF
peer1.org1.example.com | 2018-12-19 08:19:37.255 UTC [gossip.discovery] expireDeadMembers -> WARN 06f Entering [75b2768046f1cff79a75d29027b3162b4c2d489ed2c22c48403d0760a7c0a76b]
peer1.org1.example.com | 2018-12-19 08:19:37.256 UTC [gossip.discovery] expireDeadMembers -> WARN 070 Closing connection to Endpoint: peer0.org2.example.com:7051, InternalEndpoint: , PKI-ID: 75b2768046f1cff79a75d29027b3162b4c2d489ed2c22c48403d0760a7c0a76b, Metadata:
peer1.org1.example.com | 2018-12-19 08:19:37.256 UTC [gossip.discovery] expireDeadMembers -> WARN 071 Exiting
peer1.org1.example.com | 2018-12-19 08:19:37.306 UTC [comm.grpc.server] 1 -> INFO 072 unary call completed {"grpc.start_time": "2018-12-19T08:19:37.306Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:19:39.306Z", "grpc.peer_address": "172.18.0.14:51244", "grpc.peer_subject": "CN=peer0.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "125.4µs"}
peer1.org1.example.com | 2018-12-19 08:19:37.338 UTC [comm.grpc.server] 1 -> INFO 073 unary call completed {"grpc.start_time": "2018-12-19T08:19:37.338Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:19:39.337Z", "grpc.peer_address": "172.18.0.12:36530", "grpc.peer_subject": "CN=peer0.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "126.5µs"}
peer1.org1.example.com | 2018-12-19 08:19:39.844 UTC [gossip.channel] reportMembershipChanges -> INFO 074 Membership view has changed. peers went offline: [[peer0.org2.example.com:7051 ]] , current view: [[peer1.org2.example.com:7051 ] [peer0.org1.example.com:7051]]
peer1.org1.example.com | 2018-12-19 08:19:44.845 UTC [gossip.channel] reportMembershipChanges -> INFO 075 Membership view has changed. peers went online: [[peer0.org2.example.com:7051 ]] , current view: [[peer0.org1.example.com:7051] [peer0.org2.example.com:7051 ] [peer1.org2.example.com:7051 ]]
peer1.org1.example.com | 2018-12-19 08:19:54.833 UTC [endorser] callChaincode -> INFO 076 [][0c838a29] Entry chaincode: name:"cscc"
peer1.org1.example.com | 2018-12-19 08:19:54.834 UTC [endorser] callChaincode -> INFO 077 [][0c838a29] Exit chaincode: name:"cscc" (1ms)
peer1.org1.example.com | 2018-12-19 08:19:54.834 UTC [comm.grpc.server] 1 -> INFO 078 unary call completed {"grpc.start_time": "2018-12-19T08:19:54.832Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:46368", "grpc.code": "OK", "grpc.call_duration": "2.1083ms"}
peer1.org1.example.com | 2018-12-19 08:19:55.942 UTC [endorser] callChaincode -> INFO 079 [][1b40b20b] Entry chaincode: name:"qscc"
peer1.org1.example.com | 2018-12-19 08:19:55.946 UTC [endorser] callChaincode -> INFO 07a [][1b40b20b] Exit chaincode: name:"qscc" (3ms)
peer1.org1.example.com | 2018-12-19 08:19:55.946 UTC [comm.grpc.server] 1 -> INFO 07b unary call completed {"grpc.start_time": "2018-12-19T08:19:55.941Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:46376", "grpc.code": "OK", "grpc.call_duration": "4.9766ms"}
peer0.org1.example.com | 2018-12-19 08:16:56.664 UTC [main] InitCmd -> WARN 001 CORE_LOGGING_LEVEL is no longer supported, please use the FABRIC_LOGGING_SPEC environment variable
peer0.org1.example.com | 2018-12-19 08:16:56.829 UTC [nodeCmd] serve -> INFO 002 Starting peer:
peer0.org1.example.com | Version: 1.4.0-rc1
peer0.org1.example.com | Commit SHA: development build
peer0.org1.example.com | Go version: go1.11.2
peer0.org1.example.com | OS/Arch: linux/amd64
peer0.org1.example.com | Chaincode:
peer0.org1.example.com | Base Image Version: 0.4.14
peer0.org1.example.com | Base Docker Namespace: hyperledger
peer0.org1.example.com | Base Docker Label: org.hyperledger.fabric
peer0.org1.example.com | Docker Namespace: hyperledger
peer0.org1.example.com | 2018-12-19 08:16:56.837 UTC [ledgermgmt] initialize -> INFO 003 Initializing ledger mgmt
peer0.org1.example.com | 2018-12-19 08:16:56.838 UTC [kvledger] NewProvider -> INFO 004 Initializing ledger provider
peer0.org1.example.com | 2018-12-19 08:16:57.129 UTC [kvledger] NewProvider -> INFO 005 ledger provider Initialized
peer0.org1.example.com | 2018-12-19 08:16:57.230 UTC [ledgermgmt] initialize -> INFO 006 ledger mgmt initialized
peer0.org1.example.com | 2018-12-19 08:16:57.231 UTC [peer] func1 -> INFO 007 Auto-detected peer address: 172.18.0.14:7051
peer0.org1.example.com | 2018-12-19 08:16:57.232 UTC [peer] func1 -> INFO 008 Returning peer0.org1.example.com:7051
peer0.org1.example.com | 2018-12-19 08:16:57.234 UTC [peer] func1 -> INFO 009 Auto-detected peer address: 172.18.0.14:7051
peer0.org1.example.com | 2018-12-19 08:16:57.236 UTC [peer] func1 -> INFO 00a Returning peer0.org1.example.com:7051
peer0.org1.example.com | 2018-12-19 08:16:57.252 UTC [nodeCmd] serve -> INFO 00b Starting peer with TLS enabled
peer0.org1.example.com | 2018-12-19 08:16:57.264 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 00c Entering computeChaincodeEndpoint with peerHostname: peer0.org1.example.com
peer0.org1.example.com | 2018-12-19 08:16:57.271 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 00d Exit with ccEndpoint: peer0.org1.example.com:7052
peer0.org1.example.com | 2018-12-19 08:16:57.324 UTC [sccapi] registerSysCC -> INFO 00e system chaincode lscc(github.com/hyperledger/fabric/core/scc/lscc) registered
peer0.org1.example.com | 2018-12-19 08:16:57.326 UTC [sccapi] registerSysCC -> INFO 00f system chaincode cscc(github.com/hyperledger/fabric/core/scc/cscc) registered
peer0.org1.example.com | 2018-12-19 08:16:57.331 UTC [sccapi] registerSysCC -> INFO 010 system chaincode qscc(github.com/hyperledger/fabric/core/scc/qscc) registered
peer0.org1.example.com | 2018-12-19 08:16:57.333 UTC [sccapi] registerSysCC -> INFO 011 system chaincode +lifecycle(github.com/hyperledger/fabric/core/chaincode/lifecycle) registered
peer0.org1.example.com | 2018-12-19 08:16:57.376 UTC [gossip.service] func1 -> INFO 012 Initialize gossip with endpoint peer0.org1.example.com:7051 and bootstrap set [127.0.0.1:7051]
peer0.org1.example.com | 2018-12-19 08:16:57.425 UTC [gossip.gossip] NewGossipService -> INFO 013 Creating gossip service with self membership of Endpoint: peer0.org1.example.com:7051, InternalEndpoint: peer0.org1.example.com:7051, PKI-ID: 3d21b0bc142d8ddae3c27797c0c2bf16b05e0414b227484fdbfabf9859231106, Metadata:
peer0.org1.example.com | 2018-12-19 08:16:57.440 UTC [gossip.gossip] start -> INFO 014 Gossip instance peer0.org1.example.com:7051 started
peer0.org1.example.com | 2018-12-19 08:16:57.450 UTC [sccapi] deploySysCC -> INFO 015 system chaincode lscc/(github.com/hyperledger/fabric/core/scc/lscc) deployed
peer0.org1.example.com | 2018-12-19 08:16:57.464 UTC [cscc] Init -> INFO 016 Init CSCC
peer0.org1.example.com | 2018-12-19 08:16:57.464 UTC [sccapi] deploySysCC -> INFO 017 system chaincode cscc/(github.com/hyperledger/fabric/core/scc/cscc) deployed
peer0.org1.example.com | 2018-12-19 08:16:57.475 UTC [qscc] Init -> INFO 018 Init QSCC
peer0.org1.example.com | 2018-12-19 08:16:57.475 UTC [sccapi] deploySysCC -> INFO 019 system chaincode qscc/(github.com/hyperledger/fabric/core/scc/qscc) deployed
peer0.org1.example.com | 2018-12-19 08:16:57.482 UTC [sccapi] deploySysCC -> INFO 01a system chaincode +lifecycle/(github.com/hyperledger/fabric/core/chaincode/lifecycle) deployed
peer0.org1.example.com | 2018-12-19 08:16:57.482 UTC [nodeCmd] serve -> INFO 01b Deployed system chaincodes
peer0.org1.example.com | 2018-12-19 08:16:57.483 UTC [discovery] NewService -> INFO 01c Created with config TLS: true, authCacheMaxSize: 1000, authCachePurgeRatio: 0.750000
peer0.org1.example.com | 2018-12-19 08:16:57.483 UTC [nodeCmd] registerDiscoveryService -> INFO 01d Discovery service activated
peer0.org1.example.com | 2018-12-19 08:16:57.483 UTC [nodeCmd] serve -> INFO 01e Starting peer with ID=[name:"peer0.org1.example.com" ], network ID=[dev], address=[peer0.org1.example.com:7051]
peer0.org1.example.com | 2018-12-19 08:16:57.483 UTC [nodeCmd] serve -> INFO 01f Started peer with ID=[name:"peer0.org1.example.com" ], network ID=[dev], address=[peer0.org1.example.com:7051]
peer0.org1.example.com | 2018-12-19 08:16:57.539 UTC [comm.grpc.server] 1 -> INFO 020 unary call completed {"grpc.start_time": "2018-12-19T08:16:57.537Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:16:59.536Z", "grpc.peer_address": "172.18.0.15:44922", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "1.8808ms"}
peer0.org1.example.com | 2018-12-19 08:16:57.564 UTC [comm.grpc.server] 1 -> INFO 021 streaming call completed {"grpc.start_time": "2018-12-19T08:16:57.543Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:17:07.543Z", "grpc.peer_address": "172.18.0.15:44922", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "20.8531ms"}
peer0.org1.example.com | 2018-12-19 08:16:57.603 UTC [comm.grpc.server] 1 -> INFO 022 unary call completed {"grpc.start_time": "2018-12-19T08:16:57.603Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:16:59.596Z", "grpc.peer_address": "172.18.0.15:44926", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "216.3µs"}
peer0.org1.example.com | 2018-12-19 08:17:29.665 UTC [endorser] callChaincode -> INFO 023 [][ebffe71c] Entry chaincode: name:"cscc"
peer0.org1.example.com | 2018-12-19 08:17:29.667 UTC [ledgermgmt] CreateLedger -> INFO 024 Creating ledger [businesschannel] with genesis block
peer0.org1.example.com | 2018-12-19 08:17:29.670 UTC [fsblkstorage] newBlockfileMgr -> INFO 025 Getting block information from block storage
peer0.org1.example.com | 2018-12-19 08:17:29.694 UTC [kvledger] CommitWithPvtData -> INFO 026 [businesschannel] Committed block [0] with 1 transaction(s) in 19ms (state_validation=1ms block_commit=8ms state_commit=6ms)
peer0.org1.example.com | 2018-12-19 08:17:29.697 UTC [ledgermgmt] CreateLedger -> INFO 027 Created ledger [businesschannel] with genesis block
orderer0.example.com | 2018-12-19 08:16:53.522 UTC [localconfig] completeInitialization -> INFO 001 Kafka.Version unset, setting to 0.10.2.0
orderer0.example.com | 2018-12-19 08:16:53.804 UTC [orderer.common.server] prettyPrintStruct -> INFO 002 Orderer config values:
orderer0.example.com | General.LedgerType = "file"
orderer0.example.com | General.ListenAddress = "0.0.0.0"
orderer0.example.com | General.ListenPort = 7050
orderer0.example.com | General.TLS.Enabled = true
orderer0.example.com | General.TLS.PrivateKey = "/var/hyperledger/orderer/tls/server.key"
orderer0.example.com | General.TLS.Certificate = "/var/hyperledger/orderer/tls/server.crt"
orderer0.example.com | General.TLS.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
orderer0.example.com | General.TLS.ClientAuthRequired = false
orderer0.example.com | General.TLS.ClientRootCAs = []
orderer0.example.com | General.Cluster.RootCAs = [/etc/hyperledger/fabric/tls/ca.crt]
orderer0.example.com | General.Cluster.ClientCertificate = ""
orderer0.example.com | General.Cluster.ClientPrivateKey = ""
orderer0.example.com | General.Cluster.DialTimeout = 5s
orderer0.example.com | General.Cluster.RPCTimeout = 7s
orderer0.example.com | General.Cluster.ReplicationBufferSize = 20971520
orderer0.example.com | General.Cluster.ReplicationPullTimeout = 5s
orderer0.example.com | General.Cluster.ReplicationRetryTimeout = 5s
orderer0.example.com | General.Keepalive.ServerMinInterval = 1m0s
orderer0.example.com | General.Keepalive.ServerInterval = 2h0m0s
orderer0.example.com | General.Keepalive.ServerTimeout = 20s
orderer0.example.com | General.GenesisMethod = "file"
orderer0.example.com | General.GenesisProfile = "SampleInsecureSolo"
orderer0.example.com | General.SystemChannel = "test-system-channel-name"
orderer0.example.com | General.GenesisFile = "/var/hyperledger/orderer/orderer.genesis.block"
orderer0.example.com | General.Profile.Enabled = false
orderer0.example.com | General.Profile.Address = "0.0.0.0:6060"
orderer0.example.com | General.LocalMSPDir = "/var/hyperledger/orderer/msp"
orderer0.example.com | General.LocalMSPID = "OrdererMSP"
orderer0.example.com | General.BCCSP.ProviderName = "SW"
orderer0.example.com | General.BCCSP.SwOpts.SecLevel = 256
orderer0.example.com | General.BCCSP.SwOpts.HashFamily = "SHA2"
orderer0.example.com | General.BCCSP.SwOpts.Ephemeral = false
orderer0.example.com | General.BCCSP.SwOpts.FileKeystore.KeyStorePath = "/var/hyperledger/orderer/msp/keystore"
orderer0.example.com | General.BCCSP.SwOpts.DummyKeystore =
orderer0.example.com | General.BCCSP.SwOpts.InmemKeystore =
orderer0.example.com | General.BCCSP.PluginOpts =
orderer0.example.com | General.Authentication.TimeWindow = 15m0s
orderer0.example.com | FileLedger.Location = "/var/hyperledger/production/orderer"
orderer0.example.com | FileLedger.Prefix = "hyperledger-fabric-ordererledger"
orderer0.example.com | RAMLedger.HistorySize = 1000
orderer0.example.com | Kafka.Retry.ShortInterval = 1s
orderer0.example.com | Kafka.Retry.ShortTotal = 30s
orderer0.example.com | Kafka.Retry.LongInterval = 5m0s
orderer0.example.com | Kafka.Retry.LongTotal = 12h0m0s
orderer0.example.com | Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
orderer0.example.com | Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
orderer0.example.com | Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
orderer0.example.com | Kafka.Retry.Metadata.RetryMax = 3
orderer0.example.com | Kafka.Retry.Metadata.RetryBackoff = 250ms
orderer0.example.com | Kafka.Retry.Producer.RetryMax = 3
orderer0.example.com | Kafka.Retry.Producer.RetryBackoff = 100ms
orderer0.example.com | Kafka.Retry.Consumer.RetryBackoff = 2s
orderer0.example.com | Kafka.Verbose = true
orderer0.example.com | Kafka.Version = 0.10.2.0
orderer0.example.com | Kafka.TLS.Enabled = false
orderer0.example.com | Kafka.TLS.PrivateKey = ""
orderer0.example.com | Kafka.TLS.Certificate = ""
orderer0.example.com | Kafka.TLS.RootCAs = []
orderer0.example.com | Kafka.TLS.ClientAuthRequired = false
orderer0.example.com | Kafka.TLS.ClientRootCAs = []
orderer0.example.com | Kafka.SASLPlain.Enabled = false
orderer0.example.com | Kafka.SASLPlain.User = ""
orderer0.example.com | Kafka.SASLPlain.Password = ""
orderer0.example.com | Kafka.Topic.ReplicationFactor = 3
orderer0.example.com | Debug.BroadcastTraceDir = ""
orderer0.example.com | Debug.DeliverTraceDir = ""
orderer0.example.com | Consensus = map[WALDir:/var/hyperledger/production/orderer/etcdraft/wal SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot]
orderer0.example.com | Operations.ListenAddress = "127.0.0.1:8443"
orderer0.example.com | Operations.TLS.Enabled = false
orderer0.example.com | Operations.TLS.PrivateKey = ""
orderer0.example.com | Operations.TLS.Certificate = ""
orderer0.example.com | Operations.TLS.RootCAs = []
orderer0.example.com | Operations.TLS.ClientAuthRequired = false
orderer0.example.com | Operations.TLS.ClientRootCAs = []
orderer0.example.com | Metrics.Provider = "disabled"
orderer0.example.com | Metrics.Statsd.Network = "udp"
orderer0.example.com | Metrics.Statsd.Address = "127.0.0.1:8125"
orderer0.example.com | Metrics.Statsd.WriteInterval = 30s
orderer0.example.com | Metrics.Statsd.Prefix = ""
peer0.org2.example.com | 2018-12-19 08:16:56.268 UTC [main] InitCmd -> WARN 001 CORE_LOGGING_LEVEL is no longer supported, please use the FABRIC_LOGGING_SPEC environment variable
peer0.org2.example.com | 2018-12-19 08:16:56.448 UTC [nodeCmd] serve -> INFO 002 Starting peer:
peer0.org2.example.com | Version: 1.4.0-rc1
peer0.org2.example.com | Commit SHA: development build
peer0.org2.example.com | Go version: go1.11.2
peer0.org2.example.com | OS/Arch: linux/amd64
peer0.org2.example.com | Chaincode:
peer0.org2.example.com | Base Image Version: 0.4.14
peer0.org2.example.com | Base Docker Namespace: hyperledger
peer0.org2.example.com | Base Docker Label: org.hyperledger.fabric
peer0.org2.example.com | Docker Namespace: hyperledger
peer0.org2.example.com | 2018-12-19 08:16:56.449 UTC [ledgermgmt] initialize -> INFO 003 Initializing ledger mgmt
peer0.org2.example.com | 2018-12-19 08:16:56.449 UTC [kvledger] NewProvider -> INFO 004 Initializing ledger provider
peer0.org2.example.com | 2018-12-19 08:16:56.573 UTC [kvledger] NewProvider -> INFO 005 ledger provider Initialized
peer0.org2.example.com | 2018-12-19 08:16:56.721 UTC [ledgermgmt] initialize -> INFO 006 ledger mgmt initialized
peer0.org2.example.com | 2018-12-19 08:16:56.723 UTC [peer] func1 -> INFO 007 Auto-detected peer address: 172.18.0.12:7051
peer0.org2.example.com | 2018-12-19 08:16:56.725 UTC [peer] func1 -> INFO 008 Returning peer0.org2.example.com:7051
peer0.org2.example.com | 2018-12-19 08:16:56.728 UTC [peer] func1 -> INFO 009 Auto-detected peer address: 172.18.0.12:7051
peer0.org2.example.com | 2018-12-19 08:16:56.728 UTC [peer] func1 -> INFO 00a Returning peer0.org2.example.com:7051
peer0.org2.example.com | 2018-12-19 08:16:56.748 UTC [nodeCmd] serve -> INFO 00b Starting peer with TLS enabled
peer0.org2.example.com | 2018-12-19 08:16:56.754 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 00c Entering computeChaincodeEndpoint with peerHostname: peer0.org2.example.com
peer0.org2.example.com | 2018-12-19 08:16:56.755 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 00d Exit with ccEndpoint: peer0.org2.example.com:7052
peer0.org2.example.com | 2018-12-19 08:16:56.766 UTC [sccapi] registerSysCC -> INFO 00e system chaincode lscc(github.com/hyperledger/fabric/core/scc/lscc) registered
peer0.org2.example.com | 2018-12-19 08:16:56.766 UTC [sccapi] registerSysCC -> INFO 00f system chaincode cscc(github.com/hyperledger/fabric/core/scc/cscc) registered
peer0.org2.example.com | 2018-12-19 08:16:56.766 UTC [sccapi] registerSysCC -> INFO 010 system chaincode qscc(github.com/hyperledger/fabric/core/scc/qscc) registered
peer0.org2.example.com | 2018-12-19 08:16:56.767 UTC [sccapi] registerSysCC -> INFO 011 system chaincode +lifecycle(github.com/hyperledger/fabric/core/chaincode/lifecycle) registered
peer0.org2.example.com | 2018-12-19 08:16:56.783 UTC [gossip.service] func1 -> INFO 012 Initialize gossip with endpoint peer0.org2.example.com:7051 and bootstrap set [peer0.org2.example.com:7051]
peer0.org2.example.com | 2018-12-19 08:16:56.794 UTC [gossip.gossip] NewGossipService -> INFO 013 Creating gossip service with self membership of Endpoint: peer0.org2.example.com:7051, InternalEndpoint: peer0.org2.example.com:7051, PKI-ID: 75b2768046f1cff79a75d29027b3162b4c2d489ed2c22c48403d0760a7c0a76b, Metadata:
peer0.org2.example.com | 2018-12-19 08:16:56.796 UTC [gossip.gossip] start -> INFO 014 Gossip instance peer0.org2.example.com:7051 started
peer0.org2.example.com | 2018-12-19 08:16:56.798 UTC [sccapi] deploySysCC -> INFO 015 system chaincode lscc/(github.com/hyperledger/fabric/core/scc/lscc) deployed
peer0.org2.example.com | 2018-12-19 08:16:56.799 UTC [cscc] Init -> INFO 016 Init CSCC
peer0.org2.example.com | 2018-12-19 08:16:56.800 UTC [sccapi] deploySysCC -> INFO 017 system chaincode cscc/(github.com/hyperledger/fabric/core/scc/cscc) deployed
peer0.org2.example.com | 2018-12-19 08:16:56.801 UTC [qscc] Init -> INFO 018 Init QSCC
peer0.org2.example.com | 2018-12-19 08:16:56.802 UTC [sccapi] deploySysCC -> INFO 019 system chaincode qscc/(github.com/hyperledger/fabric/core/scc/qscc) deployed
peer0.org2.example.com | 2018-12-19 08:16:56.803 UTC [sccapi] deploySysCC -> INFO 01a system chaincode +lifecycle/(github.com/hyperledger/fabric/core/chaincode/lifecycle) deployed
orderer0.example.com | 2018-12-19 08:16:53.939 UTC [orderer.common.server] initializeServerConfig -> INFO 003 Starting orderer with TLS enabled
orderer0.example.com | 2018-12-19 08:16:53.958 UTC [fsblkstorage] newBlockfileMgr -> INFO 004 Getting block information from block storage
orderer0.example.com | 2018-12-19 08:16:54.057 UTC [orderer.consensus.kafka] newChain -> INFO 005 [channel: testchainid] Starting chain with last persisted offset -3 and last recorded block 0
orderer0.example.com | 2018-12-19 08:16:54.058 UTC [orderer.commmon.multichannel] Initialize -> INFO 006 Starting system channel 'testchainid' with genesis block hash 89aa6b0458f547d88023574ecfd47d10b35456026221e446d87e5da9215aee45 and orderer type kafka
orderer0.example.com | 2018-12-19 08:16:54.058 UTC [orderer.common.server] Start -> INFO 007 Starting orderer:
orderer0.example.com | Version: 1.4.0-rc1
orderer0.example.com | Commit SHA: development build
orderer0.example.com | Go version: go1.11.2
orderer0.example.com | OS/Arch: linux/amd64
orderer0.example.com | 2018-12-19 08:16:54.060 UTC [orderer.consensus.kafka] setupTopicForChannel -> INFO 008 [channel: testchainid] Setting up the topic for this channel...
orderer0.example.com | 2018-12-19 08:16:54.058 UTC [orderer.common.server] Start -> INFO 009 Beginning to serve requests
orderer0.example.com | 2018-12-19 08:17:10.759 UTC [orderer.consensus.kafka] setupProducerForChannel -> INFO 00a [channel: testchainid] Setting up the producer for this channel...
orderer0.example.com | 2018-12-19 08:17:10.967 UTC [orderer.consensus.kafka] startThread -> INFO 00b [channel: testchainid] Producer set up successfully
orderer0.example.com | 2018-12-19 08:17:10.967 UTC [orderer.consensus.kafka] sendConnectMessage -> INFO 00c [channel: testchainid] About to post the CONNECT message...
orderer0.example.com | 2018-12-19 08:17:13.615 UTC [orderer.consensus.kafka] startThread -> INFO 00d [channel: testchainid] CONNECT message posted successfully
orderer0.example.com | 2018-12-19 08:17:13.615 UTC [orderer.consensus.kafka] setupParentConsumerForChannel -> INFO 00e [channel: testchainid] Setting up the parent consumer for this channel...
orderer0.example.com | 2018-12-19 08:17:13.623 UTC [orderer.consensus.kafka] startThread -> INFO 00f [channel: testchainid] Parent consumer set up successfully
orderer0.example.com | 2018-12-19 08:17:13.623 UTC [orderer.consensus.kafka] setupChannelConsumerForChannel -> INFO 010 [channel: testchainid] Setting up the channel consumer for this channel (start offset: -2)...
orderer0.example.com | 2018-12-19 08:17:13.657 UTC [orderer.consensus.kafka] startThread -> INFO 011 [channel: testchainid] Channel consumer set up successfully
orderer0.example.com | 2018-12-19 08:17:13.657 UTC [orderer.consensus.kafka] startThread -> INFO 012 [channel: testchainid] Start phase completed successfully
orderer0.example.com | 2018-12-19 08:17:26.903 UTC [comm.grpc.server] 1 -> INFO 013 streaming call completed {"grpc.start_time": "2018-12-19T08:17:26.836Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "172.18.0.5:44830", "grpc.code": "OK", "grpc.call_duration": "67.2111ms"}
orderer0.example.com | 2018-12-19 08:17:26.904 UTC [comm.grpc.server] 1 -> INFO 014 streaming call completed {"grpc.start_time": "2018-12-19T08:17:26.819Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:44828", "grpc.code": "OK", "grpc.call_duration": "85.5571ms"}
orderer0.example.com | 2018-12-19 08:17:26.964 UTC [fsblkstorage] newBlockfileMgr -> INFO 015 Getting block information from block storage
orderer0.example.com | 2018-12-19 08:17:26.973 UTC [orderer.consensus.kafka] newChain -> INFO 016 [channel: businesschannel] Starting chain with last persisted offset -3 and last recorded block 0
orderer0.example.com | 2018-12-19 08:17:26.973 UTC [orderer.commmon.multichannel] newChain -> INFO 017 Created and starting new chain businesschannel
orderer0.example.com | 2018-12-19 08:17:26.974 UTC [orderer.consensus.kafka] setupTopicForChannel -> INFO 018 [channel: businesschannel] Setting up the topic for this channel...
orderer0.example.com | 2018-12-19 08:17:27.130 UTC [common.deliver] deliverBlocks -> WARN 019 [channel: businesschannel] Rejecting deliver request for 172.18.0.5:44832 because of consenter error
orderer0.example.com | 2018-12-19 08:17:27.131 UTC [comm.grpc.server] 1 -> INFO 01a streaming call completed {"grpc.start_time": "2018-12-19T08:17:26.929Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:44832", "grpc.code": "OK", "grpc.call_duration": "202.0847ms"}
orderer0.example.com | 2018-12-19 08:17:27.348 UTC [common.deliver] deliverBlocks -> WARN 01b [channel: businesschannel] Rejecting deliver request for 172.18.0.5:44838 because of consenter error
orderer0.example.com | 2018-12-19 08:17:27.349 UTC [comm.grpc.server] 1 -> INFO 01c streaming call completed {"grpc.start_time": "2018-12-19T08:17:27.146Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:44838", "grpc.code": "OK", "grpc.call_duration": "203.6062ms"}
orderer0.example.com | 2018-12-19 08:17:27.377 UTC [orderer.consensus.kafka] setupProducerForChannel -> INFO 01d [channel: businesschannel] Setting up the producer for this channel...
orderer0.example.com | 2018-12-19 08:17:27.572 UTC [common.deliver] deliverBlocks -> WARN 01e [channel: businesschannel] Rejecting deliver request for 172.18.0.5:44842 because of consenter error
orderer0.example.com | 2018-12-19 08:17:27.572 UTC [comm.grpc.server] 1 -> INFO 01f streaming call completed {"grpc.start_time": "2018-12-19T08:17:27.371Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:44842", "grpc.code": "OK", "grpc.call_duration": "201.1376ms"}
orderer0.example.com | 2018-12-19 08:17:27.682 UTC [orderer.consensus.kafka] startThread -> INFO 020 [channel: businesschannel] Producer set up successfully
orderer0.example.com | 2018-12-19 08:17:27.683 UTC [orderer.consensus.kafka] sendConnectMessage -> INFO 021 [channel: businesschannel] About to post the CONNECT message...
orderer0.example.com | 2018-12-19 08:17:27.786 UTC [common.deliver] deliverBlocks -> WARN 022 [channel: businesschannel] Rejecting deliver request for 172.18.0.5:44846 because of consenter error
orderer0.example.com | 2018-12-19 08:17:27.789 UTC [comm.grpc.server] 1 -> INFO 023 streaming call completed {"grpc.start_time": "2018-12-19T08:17:27.584Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:44846", "grpc.code": "OK", "grpc.call_duration": "205.1071ms"}
orderer0.example.com | 2018-12-19 08:17:28.001 UTC [common.deliver] deliverBlocks -> WARN 024 [channel: businesschannel] Rejecting deliver request for 172.18.0.5:44850 because of consenter error
orderer0.example.com | 2018-12-19 08:17:28.001 UTC [comm.grpc.server] 1 -> INFO 025 streaming call completed {"grpc.start_time": "2018-12-19T08:17:27.8Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:44850", "grpc.code": "OK", "grpc.call_duration": "201.6579ms"}
orderer0.example.com | 2018-12-19 08:17:28.215 UTC [common.deliver] deliverBlocks -> WARN 026 [channel: businesschannel] Rejecting deliver request for 172.18.0.5:44854 because of consenter error
orderer0.example.com | 2018-12-19 08:17:28.216 UTC [comm.grpc.server] 1 -> INFO 027 streaming call completed {"grpc.start_time": "2018-12-19T08:17:28.014Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:44854", "grpc.code": "OK", "grpc.call_duration": "201.4422ms"}
orderer0.example.com | 2018-12-19 08:17:28.428 UTC [common.deliver] deliverBlocks -> WARN 028 [channel: businesschannel] Rejecting deliver request for 172.18.0.5:44858 because of consenter error
orderer0.example.com | 2018-12-19 08:17:28.429 UTC [comm.grpc.server] 1 -> INFO 029 streaming call completed {"grpc.start_time": "2018-12-19T08:17:28.227Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:44858", "grpc.code": "OK", "grpc.call_duration": "201.5738ms"}
orderer0.example.com | 2018-12-19 08:17:28.646 UTC [common.deliver] deliverBlocks -> WARN 02a [channel: businesschannel] Rejecting deliver request for 172.18.0.5:44862 because of consenter error
orderer0.example.com | 2018-12-19 08:17:28.647 UTC [comm.grpc.server] 1 -> INFO 02b streaming call completed {"grpc.start_time": "2018-12-19T08:17:28.445Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:44862", "grpc.code": "OK", "grpc.call_duration": "202.6554ms"}
orderer0.example.com | 2018-12-19 08:17:28.707 UTC [orderer.consensus.kafka] startThread -> INFO 02c [channel: businesschannel] CONNECT message posted successfully
orderer0.example.com | 2018-12-19 08:17:28.707 UTC [orderer.consensus.kafka] setupParentConsumerForChannel -> INFO 02d [channel: businesschannel] Setting up the parent consumer for this channel...
orderer0.example.com | 2018-12-19 08:17:28.732 UTC [orderer.consensus.kafka] startThread -> INFO 02e [channel: businesschannel] Parent consumer set up successfully
orderer0.example.com | 2018-12-19 08:17:28.732 UTC [orderer.consensus.kafka] setupChannelConsumerForChannel -> INFO 02f [channel: businesschannel] Setting up the channel consumer for this channel (start offset: -2)...
orderer0.example.com | 2018-12-19 08:17:28.794 UTC [orderer.consensus.kafka] startThread -> INFO 030 [channel: businesschannel] Channel consumer set up successfully
orderer0.example.com | 2018-12-19 08:17:28.794 UTC [orderer.consensus.kafka] startThread -> INFO 031 [channel: businesschannel] Start phase completed successfully
orderer0.example.com | 2018-12-19 08:17:28.871 UTC [common.deliver] Handle -> WARN 032 Error reading from 172.18.0.5:44866: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:17:28.872 UTC [comm.grpc.server] 1 -> INFO 033 streaming call completed {"grpc.start_time": "2018-12-19T08:17:28.663Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:44866", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "209.2716ms"}
orderer0.example.com | 2018-12-19 08:17:33.409 UTC [orderer.common.broadcast] Handle -> WARN 034 Error reading from 172.18.0.5:44900: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:17:33.410 UTC [common.deliver] Handle -> WARN 035 Error reading from 172.18.0.5:44898: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:17:33.410 UTC [comm.grpc.server] 1 -> INFO 037 streaming call completed {"grpc.start_time": "2018-12-19T08:17:33.327Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "172.18.0.5:44900", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "84.1194ms"}
peer0.org1.example.com | 2018-12-19 08:17:29.725 UTC [gossip.gossip] JoinChan -> INFO 028 Joining gossip network of channel businesschannel with 2 organizations
peer0.org1.example.com | 2018-12-19 08:17:29.726 UTC [gossip.gossip] learnAnchorPeers -> INFO 029 No configured anchor peers of Org1MSP for channel businesschannel to learn about
peer0.org1.example.com | 2018-12-19 08:17:29.726 UTC [gossip.gossip] learnAnchorPeers -> INFO 02a No configured anchor peers of Org2MSP for channel businesschannel to learn about
peer0.org1.example.com | 2018-12-19 08:17:29.773 UTC [gossip.state] NewGossipStateProvider -> INFO 02b Updating metadata information, current ledger sequence is at = 0, next expected block is = 1
peer0.org1.example.com | 2018-12-19 08:17:29.777 UTC [sccapi] deploySysCC -> INFO 02c system chaincode lscc/businesschannel(github.com/hyperledger/fabric/core/scc/lscc) deployed
peer0.org1.example.com | 2018-12-19 08:17:29.778 UTC [cscc] Init -> INFO 02d Init CSCC
peer0.org1.example.com | 2018-12-19 08:17:29.779 UTC [sccapi] deploySysCC -> INFO 02e system chaincode cscc/businesschannel(github.com/hyperledger/fabric/core/scc/cscc) deployed
peer0.org1.example.com | 2018-12-19 08:17:29.779 UTC [qscc] Init -> INFO 02f Init QSCC
peer0.org1.example.com | 2018-12-19 08:17:29.780 UTC [sccapi] deploySysCC -> INFO 030 system chaincode qscc/businesschannel(github.com/hyperledger/fabric/core/scc/qscc) deployed
peer0.org1.example.com | 2018-12-19 08:17:29.781 UTC [sccapi] deploySysCC -> INFO 031 system chaincode +lifecycle/businesschannel(github.com/hyperledger/fabric/core/chaincode/lifecycle) deployed
peer0.org1.example.com | 2018-12-19 08:17:29.781 UTC [endorser] callChaincode -> INFO 032 [][ebffe71c] Exit chaincode: name:"cscc" (116ms)
peer0.org1.example.com | 2018-12-19 08:17:29.781 UTC [comm.grpc.server] 1 -> INFO 033 unary call completed {"grpc.start_time": "2018-12-19T08:17:29.663Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47116", "grpc.code": "OK", "grpc.call_duration": "118.3742ms"}
peer0.org1.example.com | 2018-12-19 08:17:31.131 UTC [endorser] callChaincode -> INFO 034 [][9c2776f0] Entry chaincode: name:"cscc"
peer0.org1.example.com | 2018-12-19 08:17:31.133 UTC [endorser] callChaincode -> INFO 035 [][9c2776f0] Exit chaincode: name:"cscc" (1ms)
peer0.org1.example.com | 2018-12-19 08:17:31.133 UTC [comm.grpc.server] 1 -> INFO 036 unary call completed {"grpc.start_time": "2018-12-19T08:17:31.131Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47124", "grpc.code": "OK", "grpc.call_duration": "2.3126ms"}
peer0.org1.example.com | 2018-12-19 08:17:32.159 UTC [endorser] callChaincode -> INFO 037 [][9ede0dd4] Entry chaincode: name:"qscc"
peer0.org1.example.com | 2018-12-19 08:17:32.161 UTC [endorser] callChaincode -> INFO 038 [][9ede0dd4] Exit chaincode: name:"qscc" (1ms)
peer0.org1.example.com | 2018-12-19 08:17:32.162 UTC [comm.grpc.server] 1 -> INFO 039 unary call completed {"grpc.start_time": "2018-12-19T08:17:32.159Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47132", "grpc.code": "OK", "grpc.call_duration": "3.047ms"}
peer0.org1.example.com | 2018-12-19 08:17:34.691 UTC [gossip.channel] reportMembershipChanges -> INFO 03a Membership view has changed. peers went online: [[peer1.org1.example.com:7051]] , current view: [[peer1.org1.example.com:7051]]
peer0.org1.example.com | 2018-12-19 08:17:35.744 UTC [gossip.election] beLeader -> INFO 03b 3d21b0bc142d8ddae3c27797c0c2bf16b05e0414b227484fdbfabf9859231106 : Becoming a leader
peer0.org1.example.com | 2018-12-19 08:17:35.745 UTC [gossip.service] func1 -> INFO 03c Elected as a leader, starting delivery service for channel businesschannel
peer0.org1.example.com | 2018-12-19 08:17:35.762 UTC [gossip.privdata] StoreBlock -> INFO 03d [businesschannel] Received block [1] from buffer
peer0.org1.example.com | 2018-12-19 08:17:35.784 UTC [gossip.gossip] JoinChan -> INFO 03e Joining gossip network of channel businesschannel with 2 organizations
peer0.org1.example.com | 2018-12-19 08:17:35.784 UTC [gossip.gossip] learnAnchorPeers -> INFO 03f Learning about the configured anchor peers of Org1MSP for channel businesschannel : [{peer0.org1.example.com 7051}]
peer0.org1.example.com | 2018-12-19 08:17:35.785 UTC [gossip.gossip] learnAnchorPeers -> INFO 040 Anchor peer with same endpoint, skipping connecting to myself
peer0.org1.example.com | 2018-12-19 08:17:35.785 UTC [gossip.gossip] learnAnchorPeers -> INFO 041 No configured anchor peers of Org2MSP for channel businesschannel to learn about
peer0.org1.example.com | 2018-12-19 08:17:35.802 UTC [comm.grpc.server] 1 -> INFO 042 unary call completed {"grpc.start_time": "2018-12-19T08:17:35.801Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:37.801Z", "grpc.peer_address": "172.18.0.15:45224", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "135.5µs"}
peer0.org1.example.com | 2018-12-19 08:17:35.815 UTC [comm.grpc.server] 1 -> INFO 043 streaming call completed {"grpc.start_time": "2018-12-19T08:16:57.607Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.peer_address": "172.18.0.15:44926", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "38.2785261s"}
peer0.org1.example.com | 2018-12-19 08:17:35.816 UTC [comm.grpc.server] 1 -> INFO 044 streaming call completed {"grpc.start_time": "2018-12-19T08:17:35.814Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:17:45.813Z", "grpc.peer_address": "172.18.0.15:45224", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "2.282ms"}
peer0.org1.example.com | 2018-12-19 08:17:35.817 UTC [committer.txvalidator] Validate -> INFO 045 [businesschannel] Validated block [1] in 53ms
peer0.org1.example.com | 2018-12-19 08:17:35.836 UTC [kvledger] CommitWithPvtData -> INFO 046 [businesschannel] Committed block [1] with 1 transaction(s) in 18ms (state_validation=0ms block_commit=12ms state_commit=2ms)
peer0.org1.example.com | 2018-12-19 08:17:35.836 UTC [gossip.privdata] StoreBlock -> INFO 047 [businesschannel] Received block [2] from buffer
peer0.org1.example.com | 2018-12-19 08:17:35.851 UTC [gossip.gossip] JoinChan -> INFO 048 Joining gossip network of channel businesschannel with 2 organizations
peer0.org1.example.com | 2018-12-19 08:17:35.851 UTC [gossip.gossip] learnAnchorPeers -> INFO 049 Learning about the configured anchor peers of Org1MSP for channel businesschannel : [{peer0.org1.example.com 7051}]
peer0.org1.example.com | 2018-12-19 08:17:35.851 UTC [gossip.gossip] learnAnchorPeers -> INFO 04a Anchor peer with same endpoint, skipping connecting to myself
peer0.org1.example.com | 2018-12-19 08:17:35.851 UTC [gossip.gossip] learnAnchorPeers -> INFO 04b Learning about the configured anchor peers of Org2MSP for channel businesschannel : [{peer0.org2.example.com 7051}]
peer0.org1.example.com | 2018-12-19 08:17:35.867 UTC [committer.txvalidator] Validate -> INFO 04c [businesschannel] Validated block [2] in 30ms
peer0.org1.example.com | 2018-12-19 08:17:35.902 UTC [kvledger] CommitWithPvtData -> INFO 04d [businesschannel] Committed block [2] with 1 transaction(s) in 33ms (state_validation=0ms block_commit=12ms state_commit=15ms)
peer0.org1.example.com | 2018-12-19 08:17:35.920 UTC [comm.grpc.server] 1 -> INFO 04e unary call completed {"grpc.start_time": "2018-12-19T08:17:35.919Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:37.919Z", "grpc.peer_address": "172.18.0.15:45228", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "149.1µs"}
orderer0.example.com | 2018-12-19 08:17:33.411 UTC [comm.grpc.server] 1 -> INFO 036 streaming call completed {"grpc.start_time": "2018-12-19T08:17:33.307Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:44898", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "104.3827ms"}
orderer0.example.com | 2018-12-19 08:17:35.703 UTC [orderer.common.broadcast] Handle -> WARN 038 Error reading from 172.18.0.5:44904: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:17:35.703 UTC [comm.grpc.server] 1 -> INFO 039 streaming call completed {"grpc.start_time": "2018-12-19T08:17:35.594Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "172.18.0.5:44904", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "109.595ms"}
orderer0.example.com | 2018-12-19 08:17:35.705 UTC [common.deliver] Handle -> WARN 03a Error reading from 172.18.0.5:44902: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:17:35.706 UTC [comm.grpc.server] 1 -> INFO 03b streaming call completed {"grpc.start_time": "2018-12-19T08:17:35.58Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:44902", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "126.7331ms"}
orderer0.example.com | 2018-12-19 08:18:28.340 UTC [orderer.common.broadcast] Handle -> WARN 03c Error reading from 172.18.0.5:44964: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:18:28.340 UTC [comm.grpc.server] 1 -> INFO 03d streaming call completed {"grpc.start_time": "2018-12-19T08:17:42.766Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "172.18.0.5:44964", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "45.6083242s"}
orderer0.example.com | 2018-12-19 08:19:14.046 UTC [orderer.common.broadcast] Handle -> WARN 03e Error reading from 172.18.0.5:44984: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:14.047 UTC [comm.grpc.server] 1 -> INFO 03f streaming call completed {"grpc.start_time": "2018-12-19T08:19:13.98Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "172.18.0.5:44984", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "66.4805ms"}
orderer0.example.com | 2018-12-19 08:19:16.777 UTC [orderer.common.broadcast] Handle -> WARN 040 Error reading from 172.18.0.5:44998: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:16.778 UTC [comm.grpc.server] 1 -> INFO 041 streaming call completed {"grpc.start_time": "2018-12-19T08:19:16.746Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "172.18.0.5:44998", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "31.5779ms"}
orderer0.example.com | 2018-12-19 08:19:23.075 UTC [common.deliver] Handle -> WARN 042 Error reading from 172.18.0.5:45048: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:23.076 UTC [comm.grpc.server] 1 -> INFO 043 streaming call completed {"grpc.start_time": "2018-12-19T08:19:23.062Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45048", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "14.2388ms"}
orderer0.example.com | 2018-12-19 08:19:23.395 UTC [common.deliver] Handle -> WARN 044 Error reading from 172.18.0.5:45050: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:23.396 UTC [comm.grpc.server] 1 -> INFO 045 streaming call completed {"grpc.start_time": "2018-12-19T08:19:23.376Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45050", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "19.5282ms"}
orderer0.example.com | 2018-12-19 08:19:23.574 UTC [common.deliver] Handle -> WARN 046 Error reading from 172.18.0.5:45052: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:23.574 UTC [comm.grpc.server] 1 -> INFO 047 streaming call completed {"grpc.start_time": "2018-12-19T08:19:23.565Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45052", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "8.8704ms"}
orderer0.example.com | 2018-12-19 08:19:23.781 UTC [common.deliver] Handle -> WARN 048 Error reading from 172.18.0.5:45054: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:23.781 UTC [comm.grpc.server] 1 -> INFO 049 streaming call completed {"grpc.start_time": "2018-12-19T08:19:23.773Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45054", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "8.1769ms"}
orderer0.example.com | 2018-12-19 08:19:23.978 UTC [common.deliver] Handle -> WARN 04a Error reading from 172.18.0.5:45056: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:23.978 UTC [comm.grpc.server] 1 -> INFO 04b streaming call completed {"grpc.start_time": "2018-12-19T08:19:23.968Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45056", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "10.3709ms"}
orderer0.example.com | 2018-12-19 08:19:24.189 UTC [common.deliver] Handle -> WARN 04c Error reading from 172.18.0.5:45058: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:24.192 UTC [comm.grpc.server] 1 -> INFO 04d streaming call completed {"grpc.start_time": "2018-12-19T08:19:24.18Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45058", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "12.3995ms"}
orderer0.example.com | 2018-12-19 08:19:24.394 UTC [common.deliver] Handle -> WARN 04e Error reading from 172.18.0.5:45060: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:24.395 UTC [comm.grpc.server] 1 -> INFO 04f streaming call completed {"grpc.start_time": "2018-12-19T08:19:24.385Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45060", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "10.3067ms"}
orderer0.example.com | 2018-12-19 08:19:24.593 UTC [common.deliver] Handle -> WARN 050 Error reading from 172.18.0.5:45062: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:24.593 UTC [comm.grpc.server] 1 -> INFO 051 streaming call completed {"grpc.start_time": "2018-12-19T08:19:24.582Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45062", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "11.4931ms"}
orderer0.example.com | 2018-12-19 08:19:24.862 UTC [common.deliver] Handle -> WARN 052 Error reading from 172.18.0.5:45064: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:24.863 UTC [comm.grpc.server] 1 -> INFO 053 streaming call completed {"grpc.start_time": "2018-12-19T08:19:24.853Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45064", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "9.2239ms"}
orderer0.example.com | 2018-12-19 08:19:25.104 UTC [common.deliver] Handle -> WARN 054 Error reading from 172.18.0.5:45066: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:25.105 UTC [comm.grpc.server] 1 -> INFO 055 streaming call completed {"grpc.start_time": "2018-12-19T08:19:25.088Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45066", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "17.648ms"}
orderer0.example.com | 2018-12-19 08:19:25.282 UTC [common.deliver] Handle -> WARN 056 Error reading from 172.18.0.5:45068: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:25.283 UTC [comm.grpc.server] 1 -> INFO 057 streaming call completed {"grpc.start_time": "2018-12-19T08:19:25.269Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45068", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "14.2865ms"}
orderer0.example.com | 2018-12-19 08:19:25.531 UTC [common.deliver] Handle -> WARN 058 Error reading from 172.18.0.5:45070: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:25.532 UTC [comm.grpc.server] 1 -> INFO 059 streaming call completed {"grpc.start_time": "2018-12-19T08:19:25.524Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45070", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "7.6486ms"}
orderer0.example.com | 2018-12-19 08:19:36.812 UTC [cauthdsl] deduplicate -> WARN 05a De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
orderer0.example.com | 2018-12-19 08:19:36.813 UTC [cauthdsl] deduplicate -> WARN 05b De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
orderer0.example.com | 2018-12-19 08:19:36.852 UTC [cauthdsl] deduplicate -> WARN 05c De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
orderer0.example.com | 2018-12-19 08:19:36.853 UTC [cauthdsl] deduplicate -> WARN 05d De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
orderer0.example.com | 2018-12-19 08:19:36.875 UTC [common.deliver] Handle -> WARN 05e Error reading from 172.18.0.5:45072: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:36.876 UTC [comm.grpc.server] 1 -> INFO 05f streaming call completed {"grpc.start_time": "2018-12-19T08:19:36.791Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45072", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "85.7552ms"}
orderer0.example.com | 2018-12-19 08:19:36.880 UTC [orderer.common.broadcast] Handle -> WARN 060 Error reading from 172.18.0.5:45074: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:36.880 UTC [comm.grpc.server] 1 -> INFO 061 streaming call completed {"grpc.start_time": "2018-12-19T08:19:36.805Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "172.18.0.5:45074", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "75.0811ms"}
orderer0.example.com | 2018-12-19 08:19:39.031 UTC [common.deliver] Handle -> WARN 062 Error reading from 172.18.0.5:45098: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:39.031 UTC [comm.grpc.server] 1 -> INFO 063 streaming call completed {"grpc.start_time": "2018-12-19T08:19:39.022Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45098", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "9.6567ms"}
orderer0.example.com | 2018-12-19 08:19:39.220 UTC [common.deliver] Handle -> WARN 064 Error reading from 172.18.0.5:45100: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:39.220 UTC [comm.grpc.server] 1 -> INFO 065 streaming call completed {"grpc.start_time": "2018-12-19T08:19:39.209Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45100", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "10.8915ms"}
orderer0.example.com | 2018-12-19 08:19:39.850 UTC [common.deliver] Handle -> WARN 066 Error reading from 172.18.0.5:45102: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:39.850 UTC [comm.grpc.server] 1 -> INFO 067 streaming call completed {"grpc.start_time": "2018-12-19T08:19:39.84Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45102", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "10.3435ms"}
orderer0.example.com | 2018-12-19 08:19:40.138 UTC [common.deliver] Handle -> WARN 068 Error reading from 172.18.0.5:45104: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:40.138 UTC [comm.grpc.server] 1 -> INFO 069 streaming call completed {"grpc.start_time": "2018-12-19T08:19:40.12Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45104", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "17.8007ms"}
orderer0.example.com | 2018-12-19 08:19:40.345 UTC [common.deliver] Handle -> WARN 06a Error reading from 172.18.0.5:45106: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:40.345 UTC [comm.grpc.server] 1 -> INFO 06b streaming call completed {"grpc.start_time": "2018-12-19T08:19:40.328Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45106", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "16.9849ms"}
orderer0.example.com | 2018-12-19 08:19:40.571 UTC [common.deliver] Handle -> WARN 06c Error reading from 172.18.0.5:45108: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:40.571 UTC [comm.grpc.server] 1 -> INFO 06d streaming call completed {"grpc.start_time": "2018-12-19T08:19:40.559Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45108", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "12.1608ms"}
orderer0.example.com | 2018-12-19 08:19:40.822 UTC [common.deliver] Handle -> WARN 06e Error reading from 172.18.0.5:45110: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:40.822 UTC [comm.grpc.server] 1 -> INFO 06f streaming call completed {"grpc.start_time": "2018-12-19T08:19:40.808Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45110", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "13.8362ms"}
orderer0.example.com | 2018-12-19 08:19:41.074 UTC [common.deliver] Handle -> WARN 070 Error reading from 172.18.0.5:45112: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:41.074 UTC [comm.grpc.server] 1 -> INFO 071 streaming call completed {"grpc.start_time": "2018-12-19T08:19:41.064Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45112", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "10.2293ms"}
orderer0.example.com | 2018-12-19 08:19:41.370 UTC [common.deliver] Handle -> WARN 072 Error reading from 172.18.0.5:45114: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:41.370 UTC [comm.grpc.server] 1 -> INFO 073 streaming call completed {"grpc.start_time": "2018-12-19T08:19:41.358Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45114", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "12.4052ms"}
orderer0.example.com | 2018-12-19 08:19:41.695 UTC [common.deliver] Handle -> WARN 074 Error reading from 172.18.0.5:45116: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:41.696 UTC [comm.grpc.server] 1 -> INFO 075 streaming call completed {"grpc.start_time": "2018-12-19T08:19:41.679Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45116", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "16.4394ms"}
orderer0.example.com | 2018-12-19 08:19:41.974 UTC [common.deliver] Handle -> WARN 076 Error reading from 172.18.0.5:45118: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:41.974 UTC [comm.grpc.server] 1 -> INFO 077 streaming call completed {"grpc.start_time": "2018-12-19T08:19:41.964Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45118", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "9.4821ms"}
orderer0.example.com | 2018-12-19 08:19:42.219 UTC [common.deliver] Handle -> WARN 078 Error reading from 172.18.0.5:45120: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:42.219 UTC [comm.grpc.server] 1 -> INFO 079 streaming call completed {"grpc.start_time": "2018-12-19T08:19:42.211Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45120", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "8.6496ms"}
orderer0.example.com | 2018-12-19 08:19:42.552 UTC [common.deliver] Handle -> WARN 07a Error reading from 172.18.0.5:45122: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:42.553 UTC [comm.grpc.server] 1 -> INFO 07b streaming call completed {"grpc.start_time": "2018-12-19T08:19:42.539Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45122", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "13.6867ms"}
orderer0.example.com | 2018-12-19 08:19:42.756 UTC [common.deliver] Handle -> WARN 07c Error reading from 172.18.0.5:45124: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:42.757 UTC [comm.grpc.server] 1 -> INFO 07d streaming call completed {"grpc.start_time": "2018-12-19T08:19:42.746Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45124", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "11.1377ms"}
orderer0.example.com | 2018-12-19 08:19:43.007 UTC [common.deliver] Handle -> WARN 07e Error reading from 172.18.0.5:45126: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2018-12-19 08:19:43.007 UTC [comm.grpc.server] 1 -> INFO 07f streaming call completed {"grpc.start_time": "2018-12-19T08:19:42.995Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.5:45126", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "12.9653ms"}
peer0.org2.example.com | 2018-12-19 08:16:56.803 UTC [nodeCmd] serve -> INFO 01b Deployed system chaincodes
peer0.org2.example.com | 2018-12-19 08:16:56.804 UTC [discovery] NewService -> INFO 01c Created with config TLS: true, authCacheMaxSize: 1000, authCachePurgeRatio: 0.750000
peer0.org2.example.com | 2018-12-19 08:16:56.804 UTC [nodeCmd] registerDiscoveryService -> INFO 01d Discovery service activated
peer0.org2.example.com | 2018-12-19 08:16:56.805 UTC [nodeCmd] serve -> INFO 01e Starting peer with ID=[name:"peer0.org2.example.com" ], network ID=[dev], address=[peer0.org2.example.com:7051]
peer0.org2.example.com | 2018-12-19 08:16:56.806 UTC [nodeCmd] serve -> INFO 01f Started peer with ID=[name:"peer0.org2.example.com" ], network ID=[dev], address=[peer0.org2.example.com:7051]
peer0.org2.example.com | 2018-12-19 08:17:30.255 UTC [endorser] callChaincode -> INFO 020 [][6ca181be] Entry chaincode: name:"cscc"
peer0.org2.example.com | 2018-12-19 08:17:30.256 UTC [ledgermgmt] CreateLedger -> INFO 021 Creating ledger [businesschannel] with genesis block
peer0.org2.example.com | 2018-12-19 08:17:30.260 UTC [fsblkstorage] newBlockfileMgr -> INFO 022 Getting block information from block storage
peer0.org2.example.com | 2018-12-19 08:17:30.285 UTC [kvledger] CommitWithPvtData -> INFO 023 [businesschannel] Committed block [0] with 1 transaction(s) in 15ms (state_validation=1ms block_commit=7ms state_commit=3ms)
peer0.org2.example.com | 2018-12-19 08:17:30.288 UTC [ledgermgmt] CreateLedger -> INFO 024 Created ledger [businesschannel] with genesis block
peer0.org2.example.com | 2018-12-19 08:17:30.297 UTC [gossip.gossip] JoinChan -> INFO 025 Joining gossip network of channel businesschannel with 2 organizations
peer0.org2.example.com | 2018-12-19 08:17:30.297 UTC [gossip.gossip] learnAnchorPeers -> INFO 026 No configured anchor peers of Org1MSP for channel businesschannel to learn about
peer0.org2.example.com | 2018-12-19 08:17:30.298 UTC [gossip.gossip] learnAnchorPeers -> INFO 027 No configured anchor peers of Org2MSP for channel businesschannel to learn about
peer0.org2.example.com | 2018-12-19 08:17:30.319 UTC [gossip.state] NewGossipStateProvider -> INFO 028 Updating metadata information, current ledger sequence is at = 0, next expected block is = 1
peer0.org2.example.com | 2018-12-19 08:17:30.321 UTC [sccapi] deploySysCC -> INFO 029 system chaincode lscc/businesschannel(github.com/hyperledger/fabric/core/scc/lscc) deployed
peer0.org2.example.com | 2018-12-19 08:17:30.322 UTC [cscc] Init -> INFO 02a Init CSCC
peer0.org2.example.com | 2018-12-19 08:17:30.322 UTC [sccapi] deploySysCC -> INFO 02b system chaincode cscc/businesschannel(github.com/hyperledger/fabric/core/scc/cscc) deployed
peer0.org2.example.com | 2018-12-19 08:17:30.323 UTC [qscc] Init -> INFO 02c Init QSCC
peer0.org2.example.com | 2018-12-19 08:17:30.323 UTC [sccapi] deploySysCC -> INFO 02d system chaincode qscc/businesschannel(github.com/hyperledger/fabric/core/scc/qscc) deployed
peer0.org2.example.com | 2018-12-19 08:17:30.324 UTC [sccapi] deploySysCC -> INFO 02e system chaincode +lifecycle/businesschannel(github.com/hyperledger/fabric/core/chaincode/lifecycle) deployed
peer0.org2.example.com | 2018-12-19 08:17:30.325 UTC [endorser] callChaincode -> INFO 02f [][6ca181be] Exit chaincode: name:"cscc" (70ms)
peer0.org2.example.com | 2018-12-19 08:17:30.325 UTC [comm.grpc.server] 1 -> INFO 030 unary call completed {"grpc.start_time": "2018-12-19T08:17:30.254Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58040", "grpc.code": "OK", "grpc.call_duration": "71.4857ms"}
peer0.org2.example.com | 2018-12-19 08:17:31.442 UTC [endorser] callChaincode -> INFO 031 [][11f5ff22] Entry chaincode: name:"cscc"
peer0.org2.example.com | 2018-12-19 08:17:31.443 UTC [endorser] callChaincode -> INFO 032 [][11f5ff22] Exit chaincode: name:"cscc" (1ms)
peer0.org2.example.com | 2018-12-19 08:17:31.444 UTC [comm.grpc.server] 1 -> INFO 033 unary call completed {"grpc.start_time": "2018-12-19T08:17:31.441Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58048", "grpc.code": "OK", "grpc.call_duration": "2.0888ms"}
peer0.org2.example.com | 2018-12-19 08:17:32.538 UTC [endorser] callChaincode -> INFO 034 [][29ef54e2] Entry chaincode: name:"qscc"
peer0.org2.example.com | 2018-12-19 08:17:32.542 UTC [endorser] callChaincode -> INFO 035 [][29ef54e2] Exit chaincode: name:"qscc" (3ms)
peer0.org2.example.com | 2018-12-19 08:17:32.544 UTC [comm.grpc.server] 1 -> INFO 036 unary call completed {"grpc.start_time": "2018-12-19T08:17:32.537Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58056", "grpc.code": "OK", "grpc.call_duration": "7.3365ms"}
peer0.org2.example.com | 2018-12-19 08:17:35.863 UTC [comm.grpc.server] 1 -> INFO 037 unary call completed {"grpc.start_time": "2018-12-19T08:17:35.863Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:37.862Z", "grpc.peer_address": "172.18.0.14:43740", "grpc.peer_subject": "CN=peer0.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "243.7µs"}
peer0.org2.example.com | 2018-12-19 08:17:35.892 UTC [comm.grpc.server] 1 -> INFO 038 streaming call completed {"grpc.start_time": "2018-12-19T08:17:35.867Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:17:45.867Z", "grpc.peer_address": "172.18.0.14:43740", "grpc.peer_subject": "CN=peer0.org1.example.com,L=San Francisco,ST=California,C=US", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "25.8522ms"}
peer0.org2.example.com | 2018-12-19 08:17:35.911 UTC [comm.grpc.server] 1 -> INFO 039 unary call completed {"grpc.start_time": "2018-12-19T08:17:35.91Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:37.903Z", "grpc.peer_address": "172.18.0.15:42752", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "163.8µs"}
peer0.org2.example.com | 2018-12-19 08:17:35.922 UTC [comm.grpc.server] 1 -> INFO 03a unary call completed {"grpc.start_time": "2018-12-19T08:17:35.922Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:37.919Z", "grpc.peer_address": "172.18.0.14:43746", "grpc.peer_subject": "CN=peer0.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "185.6µs"}
peer0.org2.example.com | 2018-12-19 08:17:35.938 UTC [comm.grpc.server] 1 -> INFO 03b streaming call completed {"grpc.start_time": "2018-12-19T08:17:35.912Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:17:45.912Z", "grpc.peer_address": "172.18.0.15:42752", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "25.6465ms"}
peer0.org2.example.com | 2018-12-19 08:17:35.942 UTC [comm.grpc.server] 1 -> INFO 03c unary call completed {"grpc.start_time": "2018-12-19T08:17:35.942Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:37.941Z", "grpc.peer_address": "172.18.0.15:42756", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "119.5µs"}
peer0.org2.example.com | 2018-12-19 08:17:36.287 UTC [gossip.election] beLeader -> INFO 03d 75b2768046f1cff79a75d29027b3162b4c2d489ed2c22c48403d0760a7c0a76b : Becoming a leader
peer0.org2.example.com | 2018-12-19 08:17:36.287 UTC [gossip.service] func1 -> INFO 03e Elected as a leader, starting delivery service for channel businesschannel
peer0.org2.example.com | 2018-12-19 08:17:36.304 UTC [gossip.privdata] StoreBlock -> INFO 03f [businesschannel] Received block [1] from buffer
peer0.org2.example.com | 2018-12-19 08:17:36.323 UTC [gossip.gossip] JoinChan -> INFO 040 Joining gossip network of channel businesschannel with 2 organizations
peer0.org2.example.com | 2018-12-19 08:17:36.323 UTC [gossip.gossip] learnAnchorPeers -> INFO 041 Learning about the configured anchor peers of Org1MSP for channel businesschannel : [{peer0.org1.example.com 7051}]
peer0.org2.example.com | 2018-12-19 08:17:36.324 UTC [gossip.gossip] learnAnchorPeers -> INFO 042 No configured anchor peers of Org2MSP for channel businesschannel to learn about
peer0.org2.example.com | 2018-12-19 08:17:36.347 UTC [committer.txvalidator] Validate -> INFO 043 [businesschannel] Validated block [1] in 42ms
peer0.org2.example.com | 2018-12-19 08:17:36.350 UTC [comm.grpc.server] 1 -> INFO 044 streaming call completed {"grpc.start_time": "2018-12-19T08:17:35.927Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.peer_address": "172.18.0.14:43746", "grpc.peer_subject": "CN=peer0.org1.example.com,L=San Francisco,ST=California,C=US", "error": "EOF", "grpc.code": "Unknown", "grpc.call_duration": "423.0414ms"}
peer0.org2.example.com | 2018-12-19 08:17:36.368 UTC [kvledger] CommitWithPvtData -> INFO 045 [businesschannel] Committed block [1] with 1 transaction(s) in 20ms (state_validation=4ms block_commit=9ms state_commit=4ms)
peer0.org2.example.com | 2018-12-19 08:17:36.368 UTC [gossip.privdata] StoreBlock -> INFO 046 [businesschannel] Received block [2] from buffer
peer0.org2.example.com | 2018-12-19 08:17:36.381 UTC [gossip.gossip] JoinChan -> INFO 047 Joining gossip network of channel businesschannel with 2 organizations
peer0.org2.example.com | 2018-12-19 08:17:36.381 UTC [gossip.gossip] learnAnchorPeers -> INFO 048 Learning about the configured anchor peers of Org1MSP for channel businesschannel : [{peer0.org1.example.com 7051}]
peer0.org2.example.com | 2018-12-19 08:17:36.382 UTC [gossip.gossip] learnAnchorPeers -> INFO 049 Learning about the configured anchor peers of Org2MSP for channel businesschannel : [{peer0.org2.example.com 7051}]
peer0.org2.example.com | 2018-12-19 08:17:36.385 UTC [gossip.gossip] learnAnchorPeers -> INFO 04a Anchor peer with same endpoint, skipping connecting to myself
peer0.org2.example.com | 2018-12-19 08:17:36.408 UTC [committer.txvalidator] Validate -> INFO 04b [businesschannel] Validated block [2] in 39ms
peer0.org2.example.com | 2018-12-19 08:17:36.437 UTC [kvledger] CommitWithPvtData -> INFO 04c [businesschannel] Committed block [2] with 1 transaction(s) in 28ms (state_validation=4ms block_commit=14ms state_commit=5ms)
peer0.org2.example.com | 2018-12-19 08:17:36.811 UTC [comm.grpc.server] 1 -> INFO 04d unary call completed {"grpc.start_time": "2018-12-19T08:17:36.811Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:38.81Z", "grpc.peer_address": "172.18.0.13:51218", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "304.9µs"}
peer0.org2.example.com | 2018-12-19 08:17:36.827 UTC [comm.grpc.server] 1 -> INFO 04e streaming call completed {"grpc.start_time": "2018-12-19T08:17:36.819Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:17:46.818Z", "grpc.peer_address": "172.18.0.13:51218", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "7.9247ms"}
peer0.org2.example.com | 2018-12-19 08:17:36.873 UTC [comm.grpc.server] 1 -> INFO 04f unary call completed {"grpc.start_time": "2018-12-19T08:17:36.872Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:38.872Z", "grpc.peer_address": "172.18.0.13:51220", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "623.9µs"}
peer0.org2.example.com | 2018-12-19 08:17:40.262 UTC [gossip.channel] reportMembershipChanges -> INFO 050 Membership view has changed. peers went online: [[peer0.org1.example.com:7051 ] [peer1.org1.example.com:7051 ] [peer1.org2.example.com:7051]] , current view: [[peer0.org1.example.com:7051 ] [peer1.org1.example.com:7051 ] [peer1.org2.example.com:7051]]
peer0.org2.example.com | 2018-12-19 08:17:41.307 UTC [gossip.election] stopBeingLeader -> INFO 051 75b2768046f1cff79a75d29027b3162b4c2d489ed2c22c48403d0760a7c0a76b Stopped being a leader
peer0.org2.example.com | 2018-12-19 08:17:41.308 UTC [gossip.service] func1 -> INFO 052 Renounced leadership, stopping delivery service for channel businesschannel
peer0.org2.example.com | 2018-12-19 08:17:41.308 UTC [deliveryClient] try -> WARN 053 Got error: rpc error: code = Canceled desc = context canceled , at 1 attempt. Retrying in 1s
peer0.org2.example.com | 2018-12-19 08:17:41.309 UTC [blocksProvider] DeliverBlocks -> WARN 054 [businesschannel] Receive error: client is closing
peer0.org2.example.com | 2018-12-19 08:17:41.463 UTC [endorser] callChaincode -> INFO 055 [][a7937b97] Entry chaincode: name:"lscc"
peer0.org2.example.com | 2018-12-19 08:17:41.465 UTC [lscc] executeInstall -> INFO 056 Installed Chaincode [exp02] Version [1.0] to peer
peer0.org2.example.com | 2018-12-19 08:17:41.466 UTC [endorser] callChaincode -> INFO 057 [][a7937b97] Exit chaincode: name:"lscc" (2ms)
peer0.org2.example.com | 2018-12-19 08:17:41.466 UTC [comm.grpc.server] 1 -> INFO 058 unary call completed {"grpc.start_time": "2018-12-19T08:17:41.463Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58114", "grpc.code": "OK", "grpc.call_duration": "3.4847ms"}
peer0.org2.example.com | 2018-12-19 08:18:30.480 UTC [gossip.privdata] StoreBlock -> INFO 059 [businesschannel] Received block [3] from buffer
peer0.org2.example.com | 2018-12-19 08:18:30.486 UTC [committer.txvalidator] Validate -> INFO 05a [businesschannel] Validated block [3] in 5ms
peer0.org2.example.com | 2018-12-19 08:18:30.488 UTC [cceventmgmt] HandleStateUpdates -> INFO 05b Channel [businesschannel]: Handling deploy or update of chaincode [exp02]
peer0.org2.example.com | 2018-12-19 08:18:30.525 UTC [kvledger] CommitWithPvtData -> INFO 05c [businesschannel] Committed block [3] with 1 transaction(s) in 37ms (state_validation=1ms block_commit=30ms state_commit=2ms)
peer0.org2.example.com | 2018-12-19 08:19:16.109 UTC [gossip.privdata] StoreBlock -> INFO 05d [businesschannel] Received block [4] from buffer
peer0.org2.example.com | 2018-12-19 08:19:16.112 UTC [committer.txvalidator] Validate -> INFO 05e [businesschannel] Validated block [4] in 2ms
peer0.org2.example.com | 2018-12-19 08:19:16.128 UTC [kvledger] CommitWithPvtData -> INFO 05f [businesschannel] Committed block [4] with 1 transaction(s) in 15ms (state_validation=0ms block_commit=10ms state_commit=1ms)
peer0.org2.example.com | 2018-12-19 08:19:18.820 UTC [gossip.privdata] StoreBlock -> INFO 060 [businesschannel] Received block [5] from buffer
peer0.org2.example.com | 2018-12-19 08:19:18.844 UTC [committer.txvalidator] Validate -> INFO 061 [businesschannel] Validated block [5] in 24ms
peer0.org2.example.com | 2018-12-19 08:19:18.873 UTC [kvledger] CommitWithPvtData -> INFO 062 [businesschannel] Committed block [5] with 1 transaction(s) in 24ms (state_validation=0ms block_commit=17ms state_commit=3ms)
peer0.org2.example.com | 2018-12-19 08:19:36.919 UTC [gossip.privdata] StoreBlock -> INFO 063 [businesschannel] Received block [6] from buffer
peer0.org2.example.com | 2018-12-19 08:19:36.926 UTC [cauthdsl] deduplicate -> WARN 064 De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
peer0.org2.example.com | 2018-12-19 08:19:36.928 UTC [cauthdsl] deduplicate -> WARN 065 De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
peer0.org2.example.com | 2018-12-19 08:19:36.959 UTC [comm.grpc.server] 1 -> INFO 066 unary call completed {"grpc.start_time": "2018-12-19T08:19:36.959Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:19:38.958Z", "grpc.peer_address": "172.18.0.13:51356", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "142.4µs"}
peer0.org2.example.com | 2018-12-19 08:19:36.962 UTC [comm.grpc.server] 1 -> INFO 067 streaming call completed {"grpc.start_time": "2018-12-19T08:17:36.875Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.peer_address": "172.18.0.13:51220", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "2m0.2273496s"}
peer0.org2.example.com | 2018-12-19 08:19:36.963 UTC [comm.grpc.server] 1 -> INFO 068 streaming call completed {"grpc.start_time": "2018-12-19T08:19:36.961Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:19:46.961Z", "grpc.peer_address": "172.18.0.13:51356", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "2.5285ms"}
peer0.org2.example.com | 2018-12-19 08:19:36.995 UTC [gossip.gossip] JoinChan -> INFO 069 Joining gossip network of channel businesschannel with 3 organizations
peer0.org2.example.com | 2018-12-19 08:19:36.995 UTC [gossip.gossip] learnAnchorPeers -> INFO 06a Learning about the configured anchor peers of Org1MSP for channel businesschannel : [{peer0.org1.example.com 7051}]
peer0.org2.example.com | 2018-12-19 08:19:36.995 UTC [gossip.gossip] learnAnchorPeers -> INFO 06b Learning about the configured anchor peers of Org2MSP for channel businesschannel : [{peer0.org2.example.com 7051}]
peer0.org2.example.com | 2018-12-19 08:19:36.995 UTC [gossip.gossip] learnAnchorPeers -> INFO 06c Anchor peer with same endpoint, skipping connecting to myself
peer0.org2.example.com | 2018-12-19 08:19:37.000 UTC [gossip.gossip] learnAnchorPeers -> INFO 06d No configured anchor peers of Org3MSP for channel businesschannel to learn about
peer0.org2.example.com | 2018-12-19 08:19:37.000 UTC [gossip.service] updateEndpoints -> WARN 06e Failed to update ordering service endpoints, due to Channel with businesschannel id was not found
peer0.org2.example.com | 2018-12-19 08:19:37.034 UTC [committer.txvalidator] Validate -> INFO 06f [businesschannel] Validated block [6] in 114ms
peer0.org2.example.com | 2018-12-19 08:19:37.063 UTC [comm.grpc.server] 1 -> INFO 070 unary call completed {"grpc.start_time": "2018-12-19T08:19:37.063Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:19:39.062Z", "grpc.peer_address": "172.18.0.15:42924", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "797.2µs"}
peer0.org2.example.com | 2018-12-19 08:19:37.076 UTC [kvledger] CommitWithPvtData -> INFO 071 [businesschannel] Committed block [6] with 1 transaction(s) in 41ms (state_validation=6ms block_commit=24ms state_commit=4ms)
peer0.org2.example.com | 2018-12-19 08:19:37.079 UTC [comm.grpc.server] 1 -> INFO 072 unary call completed {"grpc.start_time": "2018-12-19T08:19:37.079Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:19:39.078Z", "grpc.peer_address": "172.18.0.14:43912", "grpc.peer_subject": "CN=peer0.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "245.5µs"}
peer0.org2.example.com | 2018-12-19 08:19:37.082 UTC [comm.grpc.server] 1 -> INFO 073 streaming call completed {"grpc.start_time": "2018-12-19T08:17:35.943Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.peer_address": "172.18.0.15:42756", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "2m1.2784543s"}
peer0.org2.example.com | 2018-12-19 08:19:37.083 UTC [comm.grpc.server] 1 -> INFO 074 streaming call completed {"grpc.start_time": "2018-12-19T08:19:37.068Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:19:47.068Z", "grpc.peer_address": "172.18.0.15:42924", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "14.9145ms"}
peer0.org2.example.com | 2018-12-19 08:19:37.101 UTC [comm.grpc.server] 1 -> INFO 075 streaming call completed {"grpc.start_time": "2018-12-19T08:19:37.086Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:19:47.086Z", "grpc.peer_address": "172.18.0.14:43912", "grpc.peer_subject": "CN=peer0.org1.example.com,L=San Francisco,ST=California,C=US", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "15.4549ms"}
peer0.org2.example.com | 2018-12-19 08:19:37.128 UTC [comm.grpc.server] 1 -> INFO 076 unary call completed {"grpc.start_time": "2018-12-19T08:19:37.128Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:19:39.127Z", "grpc.peer_address": "172.18.0.14:43918", "grpc.peer_subject": "CN=peer0.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "150µs"}
peer0.org2.example.com | 2018-12-19 08:19:54.997 UTC [endorser] callChaincode -> INFO 077 [][bf9003e1] Entry chaincode: name:"cscc"
peer0.org2.example.com | 2018-12-19 08:19:54.999 UTC [endorser] callChaincode -> INFO 078 [][bf9003e1] Exit chaincode: name:"cscc" (2ms)
peer0.org2.example.com | 2018-12-19 08:19:55.001 UTC [comm.grpc.server] 1 -> INFO 079 unary call completed {"grpc.start_time": "2018-12-19T08:19:54.996Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58294", "grpc.code": "OK", "grpc.call_duration": "4.6659ms"}
peer0.org2.example.com | 2018-12-19 08:19:56.222 UTC [endorser] callChaincode -> INFO 07a [][ad651bd5] Entry chaincode: name:"qscc"
peer0.org2.example.com | 2018-12-19 08:19:56.225 UTC [endorser] callChaincode -> INFO 07b [][ad651bd5] Exit chaincode: name:"qscc" (3ms)
peer0.org2.example.com | 2018-12-19 08:19:56.225 UTC [comm.grpc.server] 1 -> INFO 07c unary call completed {"grpc.start_time": "2018-12-19T08:19:56.219Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:58302", "grpc.code": "OK", "grpc.call_duration": "5.7344ms"}
peer0.org1.example.com | 2018-12-19 08:17:35.932 UTC [comm.grpc.server] 1 -> INFO 04f streaming call completed {"grpc.start_time": "2018-12-19T08:17:35.923Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:17:45.922Z", "grpc.peer_address": "172.18.0.15:45228", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "9.7156ms"}
peer0.org1.example.com | 2018-12-19 08:17:36.333 UTC [comm.grpc.server] 1 -> INFO 050 unary call completed {"grpc.start_time": "2018-12-19T08:17:36.333Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:38.332Z", "grpc.peer_address": "172.18.0.12:37430", "grpc.peer_subject": "CN=peer0.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "145.4µs"}
peer0.org1.example.com | 2018-12-19 08:17:36.344 UTC [comm.grpc.server] 1 -> INFO 051 streaming call completed {"grpc.start_time": "2018-12-19T08:17:36.336Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:17:46.335Z", "grpc.peer_address": "172.18.0.12:37430", "grpc.peer_subject": "CN=peer0.org2.example.com,L=San Francisco,ST=California,C=US", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "8.8058ms"}
peer0.org1.example.com | 2018-12-19 08:17:36.389 UTC [comm.grpc.server] 1 -> INFO 052 unary call completed {"grpc.start_time": "2018-12-19T08:17:36.389Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:38.388Z", "grpc.peer_address": "172.18.0.12:37432", "grpc.peer_subject": "CN=peer0.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "155.7µs"}
peer0.org1.example.com | 2018-12-19 08:17:36.403 UTC [comm.grpc.server] 1 -> INFO 053 streaming call completed {"grpc.start_time": "2018-12-19T08:17:36.392Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:17:46.391Z", "grpc.peer_address": "172.18.0.12:37432", "grpc.peer_subject": "CN=peer0.org2.example.com,L=San Francisco,ST=California,C=US", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "11.0906ms"}
peer0.org1.example.com | 2018-12-19 08:17:36.410 UTC [comm.grpc.server] 1 -> INFO 054 unary call completed {"grpc.start_time": "2018-12-19T08:17:36.41Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:38.409Z", "grpc.peer_address": "172.18.0.12:37434", "grpc.peer_subject": "CN=peer0.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "294.2µs"}
peer0.org1.example.com | 2018-12-19 08:17:36.609 UTC [comm.grpc.server] 1 -> INFO 055 unary call completed {"grpc.start_time": "2018-12-19T08:17:36.609Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:38.607Z", "grpc.peer_address": "172.18.0.13:60320", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "139.3µs"}
peer0.org1.example.com | 2018-12-19 08:17:36.623 UTC [comm.grpc.server] 1 -> INFO 056 streaming call completed {"grpc.start_time": "2018-12-19T08:17:36.611Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:17:46.61Z", "grpc.peer_address": "172.18.0.13:60320", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "12.5791ms"}
peer0.org1.example.com | 2018-12-19 08:17:36.631 UTC [comm.grpc.server] 1 -> INFO 057 unary call completed {"grpc.start_time": "2018-12-19T08:17:36.631Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:38.631Z", "grpc.peer_address": "172.18.0.13:60322", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "129.2µs"}
peer0.org1.example.com | 2018-12-19 08:17:36.767 UTC [comm.grpc.server] 1 -> INFO 058 unary call completed {"grpc.start_time": "2018-12-19T08:17:36.767Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:38.753Z", "grpc.peer_address": "172.18.0.13:60324", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "192.8µs"}
peer0.org1.example.com | 2018-12-19 08:17:36.777 UTC [comm.grpc.server] 1 -> INFO 059 unary call completed {"grpc.start_time": "2018-12-19T08:17:36.774Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:17:38.774Z", "grpc.peer_address": "172.18.0.15:45252", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "222.6µs"}
peer0.org1.example.com | 2018-12-19 08:17:36.816 UTC [gossip.comm] func1 -> WARN 05a peer1.org2.example.com:7051, PKIid:54071d960ff51087a5562fde4801dfa904c634c6c3c38da0d982a0b1f62f0a27 isn't responsive: rpc error: code = Unavailable desc = transport is closing
peer0.org1.example.com | 2018-12-19 08:17:36.829 UTC [comm.grpc.server] 1 -> INFO 05c streaming call completed {"grpc.start_time": "2018-12-19T08:17:36.633Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.peer_address": "172.18.0.13:60322", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "195.2888ms"}
peer0.org1.example.com | 2018-12-19 08:17:36.831 UTC [gossip.discovery] expireDeadMembers -> WARN 05d Entering [54071d960ff51087a5562fde4801dfa904c634c6c3c38da0d982a0b1f62f0a27]
peer0.org1.example.com | 2018-12-19 08:17:36.831 UTC [gossip.discovery] expireDeadMembers -> WARN 05e Closing connection to Endpoint: peer1.org2.example.com:7051, InternalEndpoint: , PKI-ID: 54071d960ff51087a5562fde4801dfa904c634c6c3c38da0d982a0b1f62f0a27, Metadata:
peer0.org1.example.com | 2018-12-19 08:17:36.836 UTC [gossip.discovery] expireDeadMembers -> WARN 05f Exiting
peer0.org1.example.com | 2018-12-19 08:17:36.816 UTC [comm.grpc.server] 1 -> INFO 05b streaming call completed {"grpc.start_time": "2018-12-19T08:17:36.775Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:17:46.775Z", "grpc.peer_address": "172.18.0.13:60324", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "40.8868ms"}
peer0.org1.example.com | 2018-12-19 08:17:39.689 UTC [gossip.channel] reportMembershipChanges -> INFO 060 Membership view has changed. peers went online: [[peer0.org2.example.com:7051 ] [peer1.org2.example.com:7051 ]] , current view: [[peer0.org2.example.com:7051 ] [peer1.org2.example.com:7051 ] [peer1.org1.example.com:7051]]
peer0.org1.example.com | 2018-12-19 08:17:39.871 UTC [endorser] callChaincode -> INFO 061 [][2cec213a] Entry chaincode: name:"lscc"
peer0.org1.example.com | 2018-12-19 08:17:39.873 UTC [lscc] executeInstall -> INFO 062 Installed Chaincode [exp02] Version [1.0] to peer
peer0.org1.example.com | 2018-12-19 08:17:39.874 UTC [endorser] callChaincode -> INFO 063 [][2cec213a] Exit chaincode: name:"lscc" (2ms)
peer0.org1.example.com | 2018-12-19 08:17:39.874 UTC [comm.grpc.server] 1 -> INFO 064 unary call completed {"grpc.start_time": "2018-12-19T08:17:39.87Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47186", "grpc.code": "OK", "grpc.call_duration": "3.9431ms"}
peer0.org1.example.com | 2018-12-19 08:17:42.772 UTC [endorser] callChaincode -> INFO 065 [businesschannel][66a822c9] Entry chaincode: name:"lscc"
peer0.org1.example.com | 2018-12-19 08:17:42.791 UTC [chaincode.platform.golang] GenerateDockerBuild -> INFO 066 building chaincode with ldflagsOpt: '-ldflags "-linkmode external -extldflags '-static'"'
peer0.org1.example.com | 2018-12-19 08:18:28.310 UTC [endorser] callChaincode -> INFO 067 [businesschannel][66a822c9] Exit chaincode: name:"lscc" (45572ms)
peer0.org1.example.com | 2018-12-19 08:18:28.311 UTC [comm.grpc.server] 1 -> INFO 068 unary call completed {"grpc.start_time": "2018-12-19T08:17:42.77Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47202", "grpc.code": "OK", "grpc.call_duration": "45.5744211s"}
peer0.org1.example.com | 2018-12-19 08:18:30.399 UTC [gossip.privdata] StoreBlock -> INFO 069 [businesschannel] Received block [3] from buffer
peer0.org1.example.com | 2018-12-19 08:18:30.415 UTC [committer.txvalidator] Validate -> INFO 06a [businesschannel] Validated block [3] in 14ms
peer0.org1.example.com | 2018-12-19 08:18:30.417 UTC [cceventmgmt] HandleStateUpdates -> INFO 06b Channel [businesschannel]: Handling deploy or update of chaincode [exp02]
peer0.org1.example.com | 2018-12-19 08:18:30.447 UTC [kvledger] CommitWithPvtData -> INFO 06c [businesschannel] Committed block [3] with 1 transaction(s) in 29ms (state_validation=3ms block_commit=15ms state_commit=6ms)
peer0.org1.example.com | 2018-12-19 08:19:13.984 UTC [endorser] callChaincode -> INFO 06d [businesschannel][523afcf2] Entry chaincode: name:"exp02"
peer0.org1.example.com | 2018-12-19 08:19:14.009 UTC [endorser] callChaincode -> INFO 06e [businesschannel][523afcf2] Exit chaincode: name:"exp02" (25ms)
peer0.org1.example.com | 2018-12-19 08:19:14.010 UTC [comm.grpc.server] 1 -> INFO 06f unary call completed {"grpc.start_time": "2018-12-19T08:19:13.981Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47222", "grpc.code": "OK", "grpc.call_duration": "29.0968ms"}
peer0.org1.example.com | 2018-12-19 08:19:16.072 UTC [gossip.privdata] StoreBlock -> INFO 070 [businesschannel] Received block [4] from buffer
peer0.org1.example.com | 2018-12-19 08:19:16.074 UTC [committer.txvalidator] Validate -> INFO 071 [businesschannel] Validated block [4] in 1ms
peer0.org1.example.com | 2018-12-19 08:19:16.105 UTC [kvledger] CommitWithPvtData -> INFO 072 [businesschannel] Committed block [4] with 1 transaction(s) in 29ms (state_validation=0ms block_commit=20ms state_commit=5ms)
peer0.org1.example.com | 2018-12-19 08:19:16.963 UTC [endorser] callChaincode -> INFO 073 [businesschannel][1f1ccf7f] Entry chaincode: name:"exp02"
peer0.org1.example.com | 2018-12-19 08:19:16.966 UTC [endorser] callChaincode -> INFO 074 [businesschannel][1f1ccf7f] Exit chaincode: name:"exp02" (3ms)
peer0.org1.example.com | 2018-12-19 08:19:16.967 UTC [comm.grpc.server] 1 -> INFO 075 unary call completed {"grpc.start_time": "2018-12-19T08:19:16.962Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47242", "grpc.code": "OK", "grpc.call_duration": "5.0536ms"}
peer0.org1.example.com | 2018-12-19 08:19:18.825 UTC [gossip.privdata] StoreBlock -> INFO 076 [businesschannel] Received block [5] from buffer
peer0.org1.example.com | 2018-12-19 08:19:18.840 UTC [committer.txvalidator] Validate -> INFO 077 [businesschannel] Validated block [5] in 12ms
peer0.org1.example.com | 2018-12-19 08:19:18.869 UTC [kvledger] CommitWithPvtData -> INFO 078 [businesschannel] Committed block [5] with 1 transaction(s) in 26ms (state_validation=3ms block_commit=15ms state_commit=3ms)
peer0.org1.example.com | 2018-12-19 08:19:19.160 UTC [endorser] callChaincode -> INFO 079 [businesschannel][8b010a4e] Entry chaincode: name:"exp02"
peer0.org1.example.com | 2018-12-19 08:19:19.166 UTC [endorser] callChaincode -> INFO 07a [businesschannel][8b010a4e] Exit chaincode: name:"exp02" (5ms)
peer0.org1.example.com | 2018-12-19 08:19:19.168 UTC [comm.grpc.server] 1 -> INFO 07b unary call completed {"grpc.start_time": "2018-12-19T08:19:19.159Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47246", "grpc.code": "OK", "grpc.call_duration": "9.2088ms"}
peer0.org1.example.com | 2018-12-19 08:19:19.683 UTC [endorser] callChaincode -> INFO 07c [businesschannel][24608b0e] Entry chaincode: name:"lscc"
peer0.org1.example.com | 2018-12-19 08:19:19.685 UTC [endorser] callChaincode -> INFO 07d [businesschannel][24608b0e] Exit chaincode: name:"lscc" (1ms)
peer0.org1.example.com | 2018-12-19 08:19:19.685 UTC [comm.grpc.server] 1 -> INFO 07e unary call completed {"grpc.start_time": "2018-12-19T08:19:19.682Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47250", "grpc.code": "OK", "grpc.call_duration": "3.4313ms"}
peer0.org1.example.com | 2018-12-19 08:19:19.870 UTC [endorser] callChaincode -> INFO 07f [businesschannel][178de837] Entry chaincode: name:"lscc"
peer0.org1.example.com | 2018-12-19 08:19:19.872 UTC [endorser] callChaincode -> INFO 080 [businesschannel][178de837] Exit chaincode: name:"lscc" (2ms)
peer0.org1.example.com | 2018-12-19 08:19:19.873 UTC [comm.grpc.server] 1 -> INFO 081 unary call completed {"grpc.start_time": "2018-12-19T08:19:19.869Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47254", "grpc.code": "OK", "grpc.call_duration": "3.5209ms"}
peer0.org1.example.com | 2018-12-19 08:19:20.102 UTC [endorser] callChaincode -> INFO 082 [businesschannel][bdba8405] Entry chaincode: name:"lscc"
peer0.org1.example.com | 2018-12-19 08:19:20.103 UTC [endorser] callChaincode -> INFO 083 [businesschannel][bdba8405] Exit chaincode: name:"lscc" (1ms)
peer0.org1.example.com | 2018-12-19 08:19:20.104 UTC [comm.grpc.server] 1 -> INFO 084 unary call completed {"grpc.start_time": "2018-12-19T08:19:20.101Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47258", "grpc.code": "OK", "grpc.call_duration": "2.8815ms"}
peer0.org1.example.com | 2018-12-19 08:19:20.299 UTC [endorser] callChaincode -> INFO 085 [businesschannel][94cf6673] Entry chaincode: name:"lscc"
peer0.org1.example.com | 2018-12-19 08:19:20.300 UTC [endorser] callChaincode -> INFO 086 [businesschannel][94cf6673] Exit chaincode: name:"lscc" (1ms)
peer0.org1.example.com | 2018-12-19 08:19:20.302 UTC [comm.grpc.server] 1 -> INFO 087 unary call completed {"grpc.start_time": "2018-12-19T08:19:20.298Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47262", "grpc.code": "OK", "grpc.call_duration": "4.8372ms"}
peer0.org1.example.com | 2018-12-19 08:19:20.506 UTC [endorser] callChaincode -> INFO 088 [businesschannel][5590a7cc] Entry chaincode: name:"lscc"
peer0.org1.example.com | 2018-12-19 08:19:20.509 UTC [endorser] callChaincode -> INFO 089 [businesschannel][5590a7cc] Exit chaincode: name:"lscc" (3ms)
peer0.org1.example.com | 2018-12-19 08:19:20.511 UTC [comm.grpc.server] 1 -> INFO 08a unary call completed {"grpc.start_time": "2018-12-19T08:19:20.504Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47266", "grpc.code": "OK", "grpc.call_duration": "6.768ms"}
peer0.org1.example.com | 2018-12-19 08:19:21.082 UTC [endorser] callChaincode -> INFO 08b [businesschannel][06e67525] Entry chaincode: name:"qscc"
peer0.org1.example.com | 2018-12-19 08:19:21.083 UTC [endorser] callChaincode -> INFO 08c [businesschannel][06e67525] Exit chaincode: name:"qscc" (1ms)
peer0.org1.example.com | 2018-12-19 08:19:21.084 UTC [comm.grpc.server] 1 -> INFO 08d unary call completed {"grpc.start_time": "2018-12-19T08:19:21.081Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47270", "grpc.code": "OK", "grpc.call_duration": "2.9442ms"}
peer0.org1.example.com | 2018-12-19 08:19:21.296 UTC [endorser] callChaincode -> INFO 08e [businesschannel][3f5c4452] Entry chaincode: name:"qscc"
peer0.org1.example.com | 2018-12-19 08:19:21.305 UTC [endorser] callChaincode -> INFO 08f [businesschannel][3f5c4452] Exit chaincode: name:"qscc" (6ms)
peer0.org1.example.com | 2018-12-19 08:19:21.306 UTC [comm.grpc.server] 1 -> INFO 090 unary call completed {"grpc.start_time": "2018-12-19T08:19:21.295Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47274", "grpc.code": "OK", "grpc.call_duration": "10.313ms"}
peer0.org1.example.com | 2018-12-19 08:19:22.031 UTC [endorser] callChaincode -> INFO 091 [businesschannel][56ab1be7] Entry chaincode: name:"cscc"
peer0.org1.example.com | 2018-12-19 08:19:22.033 UTC [endorser] callChaincode -> INFO 092 [businesschannel][56ab1be7] Exit chaincode: name:"cscc" (2ms)
peer0.org1.example.com | 2018-12-19 08:19:22.033 UTC [comm.grpc.server] 1 -> INFO 093 unary call completed {"grpc.start_time": "2018-12-19T08:19:22.03Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47278", "grpc.code": "OK", "grpc.call_duration": "3.3015ms"}
peer0.org1.example.com | 2018-12-19 08:19:22.279 UTC [endorser] callChaincode -> INFO 094 [businesschannel][78807671] Entry chaincode: name:"cscc"
peer0.org1.example.com | 2018-12-19 08:19:22.281 UTC [endorser] callChaincode -> INFO 095 [businesschannel][78807671] Exit chaincode: name:"cscc" (1ms)
peer0.org1.example.com | 2018-12-19 08:19:22.281 UTC [comm.grpc.server] 1 -> INFO 096 unary call completed {"grpc.start_time": "2018-12-19T08:19:22.278Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47282", "grpc.code": "OK", "grpc.call_duration": "3.2475ms"}
peer0.org1.example.com | 2018-12-19 08:19:22.543 UTC [endorser] callChaincode -> INFO 097 [businesschannel][61249e3b] Entry chaincode: name:"cscc"
peer0.org1.example.com | 2018-12-19 08:19:22.544 UTC [endorser] callChaincode -> INFO 098 [businesschannel][61249e3b] Exit chaincode: name:"cscc" (1ms)
peer0.org1.example.com | 2018-12-19 08:19:22.545 UTC [comm.grpc.server] 1 -> INFO 099 unary call completed {"grpc.start_time": "2018-12-19T08:19:22.542Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47286", "grpc.code": "OK", "grpc.call_duration": "3.2292ms"}
peer0.org1.example.com | 2018-12-19 08:19:36.920 UTC [gossip.privdata] StoreBlock -> INFO 09a [businesschannel] Received block [6] from buffer
peer0.org1.example.com | 2018-12-19 08:19:36.925 UTC [cauthdsl] deduplicate -> WARN 09b De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
peer0.org1.example.com | 2018-12-19 08:19:36.928 UTC [cauthdsl] deduplicate -> WARN 09c De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
peer0.org1.example.com | 2018-12-19 08:19:36.965 UTC [comm.grpc.server] 1 -> INFO 09d unary call completed {"grpc.start_time": "2018-12-19T08:19:36.965Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:19:38.965Z", "grpc.peer_address": "172.18.0.13:60468", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "125.9µs"}
peer0.org1.example.com | 2018-12-19 08:19:36.971 UTC [comm.grpc.server] 1 -> INFO 09e streaming call completed {"grpc.start_time": "2018-12-19T08:19:36.966Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:19:46.966Z", "grpc.peer_address": "172.18.0.13:60468", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "4.6886ms"}
peer0.org1.example.com | 2018-12-19 08:19:37.008 UTC [gossip.gossip] JoinChan -> INFO 09f Joining gossip network of channel businesschannel with 3 organizations
peer0.org1.example.com | 2018-12-19 08:19:37.015 UTC [gossip.gossip] learnAnchorPeers -> INFO 0a0 Learning about the configured anchor peers of Org2MSP for channel businesschannel : [{peer0.org2.example.com 7051}]
peer0.org1.example.com | 2018-12-19 08:19:37.018 UTC [gossip.gossip] learnAnchorPeers -> INFO 0a1 No configured anchor peers of Org3MSP for channel businesschannel to learn about
peer0.org1.example.com | 2018-12-19 08:19:37.018 UTC [gossip.gossip] learnAnchorPeers -> INFO 0a2 Learning about the configured anchor peers of Org1MSP for channel businesschannel : [{peer0.org1.example.com 7051}]
peer0.org1.example.com | 2018-12-19 08:19:37.019 UTC [gossip.gossip] learnAnchorPeers -> INFO 0a3 Anchor peer with same endpoint, skipping connecting to myself
peer0.org1.example.com | 2018-12-19 08:19:37.040 UTC [comm.grpc.server] 1 -> INFO 0a4 unary call completed {"grpc.start_time": "2018-12-19T08:19:37.039Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:19:39.038Z", "grpc.peer_address": "172.18.0.12:37588", "grpc.peer_subject": "CN=peer0.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "1.6929ms"}
peer0.org1.example.com | 2018-12-19 08:19:37.045 UTC [committer.txvalidator] Validate -> INFO 0a5 [businesschannel] Validated block [6] in 124ms
peer0.org1.example.com | 2018-12-19 08:19:37.073 UTC [comm.grpc.server] 1 -> INFO 0a6 unary call completed {"grpc.start_time": "2018-12-19T08:19:37.072Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:19:39.07Z", "grpc.peer_address": "172.18.0.15:45400", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "231.1µs"}
peer0.org1.example.com | 2018-12-19 08:19:37.078 UTC [comm.grpc.server] 1 -> INFO 0a7 streaming call completed {"grpc.start_time": "2018-12-19T08:17:36.413Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.peer_address": "172.18.0.12:37434", "grpc.peer_subject": "CN=peer0.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "2m0.8047803s"}
peer0.org1.example.com | 2018-12-19 08:19:37.078 UTC [comm.grpc.server] 1 -> INFO 0a8 streaming call completed {"grpc.start_time": "2018-12-19T08:19:37.053Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:19:47.052Z", "grpc.peer_address": "172.18.0.12:37588", "grpc.peer_subject": "CN=peer0.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "25.4393ms"}
peer0.org1.example.com | 2018-12-19 08:19:37.089 UTC [kvledger] CommitWithPvtData -> INFO 0a9 [businesschannel] Committed block [6] with 1 transaction(s) in 42ms (state_validation=2ms block_commit=27ms state_commit=4ms)
peer0.org1.example.com | 2018-12-19 08:19:37.095 UTC [comm.grpc.server] 1 -> INFO 0aa streaming call completed {"grpc.start_time": "2018-12-19T08:17:36.779Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.peer_address": "172.18.0.15:45252", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "2m0.4553056s"}
peer0.org1.example.com | 2018-12-19 08:19:37.097 UTC [comm.grpc.server] 1 -> INFO 0ab streaming call completed {"grpc.start_time": "2018-12-19T08:19:37.089Z", "grpc.service": "gossip.Gossip", "grpc.method": "GossipStream", "grpc.request_deadline": "2018-12-19T08:19:47.089Z", "grpc.peer_address": "172.18.0.15:45400", "grpc.peer_subject": "CN=peer1.org1.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "7.6967ms"}
peer0.org1.example.com | 2018-12-19 08:19:37.273 UTC [comm.grpc.server] 1 -> INFO 0ac unary call completed {"grpc.start_time": "2018-12-19T08:19:37.273Z", "grpc.service": "gossip.Gossip", "grpc.method": "Ping", "grpc.request_deadline": "2018-12-19T08:19:39.273Z", "grpc.peer_address": "172.18.0.13:60480", "grpc.peer_subject": "CN=peer1.org2.example.com,L=San Francisco,ST=California,C=US", "grpc.code": "OK", "grpc.call_duration": "134µs"}
peer0.org1.example.com | 2018-12-19 08:19:54.654 UTC [endorser] callChaincode -> INFO 0ad [][d454850e] Entry chaincode: name:"cscc"
peer0.org1.example.com | 2018-12-19 08:19:54.655 UTC [endorser] callChaincode -> INFO 0ae [][d454850e] Exit chaincode: name:"cscc" (0ms)
peer0.org1.example.com | 2018-12-19 08:19:54.656 UTC [comm.grpc.server] 1 -> INFO 0af unary call completed {"grpc.start_time": "2018-12-19T08:19:54.654Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47370", "grpc.code": "OK", "grpc.call_duration": "2.2139ms"}
peer0.org1.example.com | 2018-12-19 08:19:55.740 UTC [endorser] callChaincode -> INFO 0b0 [][68f70709] Entry chaincode: name:"qscc"
peer0.org1.example.com | 2018-12-19 08:19:55.743 UTC [endorser] callChaincode -> INFO 0b1 [][68f70709] Exit chaincode: name:"qscc" (3ms)
peer0.org1.example.com | 2018-12-19 08:19:55.743 UTC [comm.grpc.server] 1 -> INFO 0b2 unary call completed {"grpc.start_time": "2018-12-19T08:19:55.738Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.18.0.5:47378", "grpc.code": "OK", "grpc.call_duration": "4.7798ms"}
zookeeper0 | ZooKeeper JMX enabled by default
zookeeper0 | Using config: /conf/zoo.cfg
zookeeper0 | 2018-12-19 08:16:54,252 [myid:] - INFO [main:QuorumPeerConfig@124] - Reading configuration from: /conf/zoo.cfg
zookeeper0 | 2018-12-19 08:16:54,695 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper2 to address: zookeeper2/172.18.0.4
zookeeper0 | 2018-12-19 08:16:54,709 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper1 to address: zookeeper1/172.18.0.2
zookeeper0 | 2018-12-19 08:16:54,756 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper0 to address: zookeeper0/172.18.0.3
zookeeper0 | 2018-12-19 08:16:54,757 [myid:] - INFO [main:QuorumPeerConfig@352] - Defaulting to majority quorums
kafka3 | [2018-12-19 08:17:02,513] INFO KafkaConfig values:
kafka3 | advertised.host.name = null
kafka3 | advertised.listeners = null
kafka3 | advertised.port = null
kafka3 | alter.config.policy.class.name = null
kafka3 | authorizer.class.name =
kafka3 | auto.create.topics.enable = true
kafka3 | auto.leader.rebalance.enable = true
kafka3 | background.threads = 10
kafka3 | broker.id = 3
zookeeper2 | ZooKeeper JMX enabled by default
zookeeper2 | Using config: /conf/zoo.cfg
zookeeper2 | 2018-12-19 08:16:54,272 [myid:] - INFO [main:QuorumPeerConfig@124] - Reading configuration from: /conf/zoo.cfg
kafka3 | broker.id.generation.enable = true
kafka3 | broker.rack = null
kafka3 | compression.type = producer
kafka3 | connections.max.idle.ms = 600000
kafka3 | controlled.shutdown.enable = true
kafka3 | controlled.shutdown.max.retries = 3
kafka3 | controlled.shutdown.retry.backoff.ms = 5000
kafka3 | controller.socket.timeout.ms = 30000
kafka3 | create.topic.policy.class.name = null
kafka3 | default.replication.factor = 3
kafka0 | [2018-12-19 08:17:02,791] INFO KafkaConfig values:
zookeeper2 | 2018-12-19 08:16:54,745 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper2 to address: zookeeper2/172.18.0.4
zookeeper2 | 2018-12-19 08:16:54,752 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper1 to address: zookeeper1/172.18.0.2
kafka2 | [2018-12-19 08:17:02,253] INFO KafkaConfig values:
kafka2 | advertised.host.name = null
kafka2 | advertised.listeners = null
kafka2 | advertised.port = null
kafka2 | alter.config.policy.class.name = null
kafka2 | authorizer.class.name =
kafka2 | auto.create.topics.enable = true
kafka2 | auto.leader.rebalance.enable = true
kafka2 | background.threads = 10
kafka2 | broker.id = 2
kafka2 | broker.id.generation.enable = true
kafka2 | broker.rack = null
kafka2 | compression.type = producer
kafka2 | connections.max.idle.ms = 600000
kafka2 | controlled.shutdown.enable = true
kafka2 | controlled.shutdown.max.retries = 3
kafka2 | controlled.shutdown.retry.backoff.ms = 5000
kafka2 | controller.socket.timeout.ms = 30000
zookeeper0 | 2018-12-19 08:16:54,828 [myid:1] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
zookeeper0 | 2018-12-19 08:16:54,828 [myid:1] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
zookeeper0 | 2018-12-19 08:16:54,866 [myid:1] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
zookeeper0 | 2018-12-19 08:16:55,172 [myid:1] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
zookeeper0 | 2018-12-19 08:16:55,208 [myid:1] - INFO [main:QuorumPeerMain@127] - Starting quorum peer
zookeeper0 | 2018-12-19 08:16:55,579 [myid:1] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
zookeeper0 | 2018-12-19 08:16:55,594 [myid:1] - INFO [main:QuorumPeer@1019] - tickTime set to 2000
zookeeper0 | 2018-12-19 08:16:55,623 [myid:1] - INFO [main:QuorumPeer@1039] - minSessionTimeout set to -1
zookeeper0 | 2018-12-19 08:16:55,623 [myid:1] - INFO [main:QuorumPeer@1050] - maxSessionTimeout set to -1
zookeeper0 | 2018-12-19 08:16:55,624 [myid:1] - INFO [main:QuorumPeer@1065] - initLimit set to 5
zookeeper2 | 2018-12-19 08:16:54,765 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper0 to address: zookeeper0/172.18.0.3
zookeeper2 | 2018-12-19 08:16:54,766 [myid:] - INFO [main:QuorumPeerConfig@352] - Defaulting to majority quorums
zookeeper2 | 2018-12-19 08:16:54,813 [myid:3] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
zookeeper2 | 2018-12-19 08:16:54,813 [myid:3] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
zookeeper2 | 2018-12-19 08:16:54,875 [myid:3] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
zookeeper2 | 2018-12-19 08:16:55,125 [myid:3] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
zookeeper2 | 2018-12-19 08:16:55,216 [myid:3] - INFO [main:QuorumPeerMain@127] - Starting quorum peer
zookeeper2 | 2018-12-19 08:16:55,563 [myid:3] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
zookeeper2 | 2018-12-19 08:16:55,604 [myid:3] - INFO [main:QuorumPeer@1019] - tickTime set to 2000
zookeeper2 | 2018-12-19 08:16:55,606 [myid:3] - INFO [main:QuorumPeer@1039] - minSessionTimeout set to -1
zookeeper2 | 2018-12-19 08:16:55,606 [myid:3] - INFO [main:QuorumPeer@1050] - maxSessionTimeout set to -1
orderer1.example.com | 2018-12-19 08:16:53.349 UTC [localconfig] completeInitialization -> INFO 001 Kafka.Version unset, setting to 0.10.2.0
orderer1.example.com | 2018-12-19 08:16:53.516 UTC [orderer.common.server] prettyPrintStruct -> INFO 002 Orderer config values:
orderer1.example.com | General.LedgerType = "file"
orderer1.example.com | General.ListenAddress = "0.0.0.0"
orderer1.example.com | General.ListenPort = 7050
orderer1.example.com | General.TLS.Enabled = true
orderer1.example.com | General.TLS.PrivateKey = "/var/hyperledger/orderer/tls/server.key"
orderer1.example.com | General.TLS.Certificate = "/var/hyperledger/orderer/tls/server.crt"
orderer1.example.com | General.TLS.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
orderer1.example.com | General.TLS.ClientAuthRequired = false
orderer1.example.com | General.TLS.ClientRootCAs = []
orderer1.example.com | General.Cluster.RootCAs = [/etc/hyperledger/fabric/tls/ca.crt]
orderer1.example.com | General.Cluster.ClientCertificate = ""
orderer1.example.com | General.Cluster.ClientPrivateKey = ""
orderer1.example.com | General.Cluster.DialTimeout = 5s
orderer1.example.com | General.Cluster.RPCTimeout = 7s
orderer1.example.com | General.Cluster.ReplicationBufferSize = 20971520
orderer1.example.com | General.Cluster.ReplicationPullTimeout = 5s
orderer1.example.com | General.Cluster.ReplicationRetryTimeout = 5s
orderer1.example.com | General.Keepalive.ServerMinInterval = 1m0s
orderer1.example.com | General.Keepalive.ServerInterval = 2h0m0s
orderer1.example.com | General.Keepalive.ServerTimeout = 20s
orderer1.example.com | General.GenesisMethod = "file"
orderer1.example.com | General.GenesisProfile = "SampleInsecureSolo"
orderer1.example.com | General.SystemChannel = "test-system-channel-name"
orderer1.example.com | General.GenesisFile = "/var/hyperledger/orderer/orderer.genesis.block"
orderer1.example.com | General.Profile.Enabled = false
orderer1.example.com | General.Profile.Address = "0.0.0.0:6060"
orderer1.example.com | General.LocalMSPDir = "/var/hyperledger/orderer/msp"
orderer1.example.com | General.LocalMSPID = "OrdererMSP"
orderer1.example.com | General.BCCSP.ProviderName = "SW"
orderer1.example.com | General.BCCSP.SwOpts.SecLevel = 256
orderer1.example.com | General.BCCSP.SwOpts.HashFamily = "SHA2"
orderer1.example.com | General.BCCSP.SwOpts.Ephemeral = false
orderer1.example.com | General.BCCSP.SwOpts.FileKeystore.KeyStorePath = "/var/hyperledger/orderer/msp/keystore"
orderer1.example.com | General.BCCSP.SwOpts.DummyKeystore =
orderer1.example.com | General.BCCSP.SwOpts.InmemKeystore =
orderer1.example.com | General.BCCSP.PluginOpts =
orderer1.example.com | General.Authentication.TimeWindow = 15m0s
orderer1.example.com | FileLedger.Location = "/var/hyperledger/production/orderer"
orderer1.example.com | FileLedger.Prefix = "hyperledger-fabric-ordererledger"
orderer1.example.com | RAMLedger.HistorySize = 1000
orderer1.example.com | Kafka.Retry.ShortInterval = 1s
orderer1.example.com | Kafka.Retry.ShortTotal = 30s
orderer1.example.com | Kafka.Retry.LongInterval = 5m0s
orderer1.example.com | Kafka.Retry.LongTotal = 12h0m0s
orderer1.example.com | Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
orderer1.example.com | Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
orderer1.example.com | Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
orderer1.example.com | Kafka.Retry.Metadata.RetryMax = 3
orderer1.example.com | Kafka.Retry.Metadata.RetryBackoff = 250ms
orderer1.example.com | Kafka.Retry.Producer.RetryMax = 3
orderer1.example.com | Kafka.Retry.Producer.RetryBackoff = 100ms
orderer1.example.com | Kafka.Retry.Consumer.RetryBackoff = 2s
orderer1.example.com | Kafka.Verbose = true
orderer1.example.com | Kafka.Version = 0.10.2.0
orderer1.example.com | Kafka.TLS.Enabled = false
orderer1.example.com | Kafka.TLS.PrivateKey = ""
orderer1.example.com | Kafka.TLS.Certificate = ""
orderer1.example.com | Kafka.TLS.RootCAs = []
orderer1.example.com | Kafka.TLS.ClientAuthRequired = false
orderer1.example.com | Kafka.TLS.ClientRootCAs = []
orderer1.example.com | Kafka.SASLPlain.Enabled = false
orderer1.example.com | Kafka.SASLPlain.User = ""
orderer1.example.com | Kafka.SASLPlain.Password = ""
orderer1.example.com | Kafka.Topic.ReplicationFactor = 3
orderer1.example.com | Debug.BroadcastTraceDir = ""
orderer1.example.com | Debug.DeliverTraceDir = ""
orderer1.example.com | Consensus = map[WALDir:/var/hyperledger/production/orderer/etcdraft/wal SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot]
orderer1.example.com | Operations.ListenAddress = "127.0.0.1:8443"
orderer1.example.com | Operations.TLS.Enabled = false
orderer1.example.com | Operations.TLS.PrivateKey = ""
orderer1.example.com | Operations.TLS.Certificate = ""
orderer1.example.com | Operations.TLS.RootCAs = []
orderer1.example.com | Operations.TLS.ClientAuthRequired = false
orderer1.example.com | Operations.TLS.ClientRootCAs = []
orderer1.example.com | Metrics.Provider = "disabled"
orderer1.example.com | Metrics.Statsd.Network = "udp"
orderer1.example.com | Metrics.Statsd.Address = "127.0.0.1:8125"
orderer1.example.com | Metrics.Statsd.WriteInterval = 30s
orderer1.example.com | Metrics.Statsd.Prefix = ""
orderer1.example.com | 2018-12-19 08:16:53.781 UTC [orderer.common.server] initializeServerConfig -> INFO 003 Starting orderer with TLS enabled
orderer1.example.com | 2018-12-19 08:16:53.801 UTC [fsblkstorage] newBlockfileMgr -> INFO 004 Getting block information from block storage
orderer1.example.com | 2018-12-19 08:16:53.910 UTC [orderer.consensus.kafka] newChain -> INFO 005 [channel: testchainid] Starting chain with last persisted offset -3 and last recorded block 0
orderer1.example.com | 2018-12-19 08:16:53.911 UTC [orderer.commmon.multichannel] Initialize -> INFO 006 Starting system channel 'testchainid' with genesis block hash 89aa6b0458f547d88023574ecfd47d10b35456026221e446d87e5da9215aee45 and orderer type kafka
orderer1.example.com | 2018-12-19 08:16:53.911 UTC [orderer.common.server] Start -> INFO 007 Starting orderer:
orderer1.example.com | Version: 1.4.0-rc1
orderer1.example.com | Commit SHA: development build
orderer1.example.com | Go version: go1.11.2
orderer1.example.com | OS/Arch: linux/amd64
orderer1.example.com | 2018-12-19 08:16:53.911 UTC [orderer.common.server] Start -> INFO 008 Beginning to serve requests
orderer1.example.com | 2018-12-19 08:16:53.916 UTC [orderer.consensus.kafka] setupTopicForChannel -> INFO 009 [channel: testchainid] Setting up the topic for this channel...
orderer1.example.com | 2018-12-19 08:17:10.795 UTC [orderer.consensus.kafka] setupProducerForChannel -> INFO 00a [channel: testchainid] Setting up the producer for this channel...
orderer1.example.com | 2018-12-19 08:17:11.031 UTC [orderer.consensus.kafka] startThread -> INFO 00b [channel: testchainid] Producer set up successfully
orderer1.example.com | 2018-12-19 08:17:11.032 UTC [orderer.consensus.kafka] sendConnectMessage -> INFO 00c [channel: testchainid] About to post the CONNECT message...
orderer1.example.com | 2018-12-19 08:17:13.608 UTC [orderer.consensus.kafka] startThread -> INFO 00d [channel: testchainid] CONNECT message posted successfully
orderer1.example.com | 2018-12-19 08:17:13.608 UTC [orderer.consensus.kafka] setupParentConsumerForChannel -> INFO 00e [channel: testchainid] Setting up the parent consumer for this channel...
orderer1.example.com | 2018-12-19 08:17:13.627 UTC [orderer.consensus.kafka] startThread -> INFO 00f [channel: testchainid] Parent consumer set up successfully
orderer1.example.com | 2018-12-19 08:17:13.628 UTC [orderer.consensus.kafka] setupChannelConsumerForChannel -> INFO 010 [channel: testchainid] Setting up the channel consumer for this channel (start offset: -2)...
orderer1.example.com | 2018-12-19 08:17:13.659 UTC [orderer.consensus.kafka] startThread -> INFO 011 [channel: testchainid] Channel consumer set up successfully
orderer1.example.com | 2018-12-19 08:17:13.659 UTC [orderer.consensus.kafka] startThread -> INFO 012 [channel: testchainid] Start phase completed successfully
orderer1.example.com | 2018-12-19 08:17:26.939 UTC [fsblkstorage] newBlockfileMgr -> INFO 013 Getting block information from block storage
orderer1.example.com | 2018-12-19 08:17:26.952 UTC [orderer.consensus.kafka] newChain -> INFO 014 [channel: businesschannel] Starting chain with last persisted offset -3 and last recorded block 0
orderer1.example.com | 2018-12-19 08:17:26.952 UTC [orderer.commmon.multichannel] newChain -> INFO 015 Created and starting new chain businesschannel
orderer1.example.com | 2018-12-19 08:17:26.954 UTC [orderer.consensus.kafka] setupTopicForChannel -> INFO 016 [channel: businesschannel] Setting up the topic for this channel...
orderer1.example.com | 2018-12-19 08:17:27.327 UTC [orderer.consensus.kafka] setupProducerForChannel -> INFO 017 [channel: businesschannel] Setting up the producer for this channel...
orderer1.example.com | 2018-12-19 08:17:27.344 UTC [orderer.consensus.kafka] startThread -> INFO 018 [channel: businesschannel] Producer set up successfully
orderer1.example.com | 2018-12-19 08:17:27.344 UTC [orderer.consensus.kafka] sendConnectMessage -> INFO 019 [channel: businesschannel] About to post the CONNECT message...
orderer1.example.com | 2018-12-19 08:17:28.624 UTC [orderer.consensus.kafka] startThread -> INFO 01a [channel: businesschannel] CONNECT message posted successfully
orderer1.example.com | 2018-12-19 08:17:28.625 UTC [orderer.consensus.kafka] setupParentConsumerForChannel -> INFO 01b [channel: businesschannel] Setting up the parent consumer for this channel...
orderer1.example.com | 2018-12-19 08:17:28.682 UTC [orderer.consensus.kafka] startThread -> INFO 01c [channel: businesschannel] Parent consumer set up successfully
orderer1.example.com | 2018-12-19 08:17:28.683 UTC [orderer.consensus.kafka] setupChannelConsumerForChannel -> INFO 01d [channel: businesschannel] Setting up the channel consumer for this channel (start offset: -2)...
orderer1.example.com | 2018-12-19 08:17:28.791 UTC [orderer.consensus.kafka] startThread -> INFO 01e [channel: businesschannel] Channel consumer set up successfully
orderer1.example.com | 2018-12-19 08:17:28.791 UTC [orderer.consensus.kafka] startThread -> INFO 01f [channel: businesschannel] Start phase completed successfully
orderer1.example.com | 2018-12-19 08:17:41.310 UTC [comm.grpc.server] 1 -> INFO 020 streaming call completed {"grpc.start_time": "2018-12-19T08:17:36.296Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.18.0.12:59294", "grpc.peer_subject": "CN=peer0.org2.example.com,L=San Francisco,ST=California,C=US", "error": "context finished before block retrieved: context canceled", "grpc.code": "Unknown", "grpc.call_duration": "5.0139304s"}
orderer1.example.com | 2018-12-19 08:19:36.855 UTC [cauthdsl] deduplicate -> WARN 021 De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
orderer1.example.com | 2018-12-19 08:19:36.856 UTC [cauthdsl] deduplicate -> WARN 022 De-duplicating identity [Org1MSP0270edfed53a78d7d3c66dc25737f57f956e48ef69dca5ecc91c26679dd4eff3] at index 2 in signature set
kafka0 | advertised.host.name = null
kafka0 | advertised.listeners = null
kafka0 | advertised.port = null
kafka0 | alter.config.policy.class.name = null
kafka0 | authorizer.class.name =
kafka0 | auto.create.topics.enable = true
kafka0 | auto.leader.rebalance.enable = true
kafka0 | background.threads = 10
kafka0 | broker.id = 0
kafka0 | broker.id.generation.enable = true
kafka0 | broker.rack = null
kafka0 | compression.type = producer
kafka0 | connections.max.idle.ms = 600000
kafka0 | controlled.shutdown.enable = true
kafka0 | controlled.shutdown.max.retries = 3
kafka0 | controlled.shutdown.retry.backoff.ms = 5000
kafka0 | controller.socket.timeout.ms = 30000
kafka0 | create.topic.policy.class.name = null
kafka0 | default.replication.factor = 3
kafka0 | delete.records.purgatory.purge.interval.requests = 1
kafka0 | delete.topic.enable = true
kafka0 | fetch.purgatory.purge.interval.requests = 1000
kafka0 | group.initial.rebalance.delay.ms = 0
kafka0 | group.max.session.timeout.ms = 300000
kafka0 | group.min.session.timeout.ms = 6000
kafka0 | host.name =
kafka0 | inter.broker.listener.name = null
kafka0 | inter.broker.protocol.version = 1.0-IV0
kafka0 | leader.imbalance.check.interval.seconds = 300
kafka0 | leader.imbalance.per.broker.percentage = 10
zookeeper2 | 2018-12-19 08:16:55,606 [myid:3] - INFO [main:QuorumPeer@1065] - initLimit set to 5
zookeeper2 | 2018-12-19 08:16:55,756 [myid:3] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
zookeeper2 | 2018-12-19 08:16:55,919 [myid:3] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
zookeeper2 | 2018-12-19 08:16:56,002 [myid:3] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: zookeeper2/172.18.0.4:3888
zookeeper0 | 2018-12-19 08:16:55,825 [myid:1] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
zookeeper0 | 2018-12-19 08:16:55,862 [myid:1] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
zookeeper0 | 2018-12-19 08:16:55,969 [myid:1] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: zookeeper0/172.18.0.3:3888
zookeeper0 | 2018-12-19 08:16:56,274 [myid:1] - INFO [zookeeper0/172.18.0.3:3888:QuorumCnxManager$Listener@541] - Received connection request /172.18.0.2:48338
zookeeper0 | 2018-12-19 08:16:56,438 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper2 | 2018-12-19 08:16:56,250 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:QuorumPeer@774] - LOOKING
zookeeper2 | 2018-12-19 08:16:56,277 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:FastLeaderElection@818] - New election. My id = 3, proposed zxid=0x0
zookeeper2 | 2018-12-19 08:16:56,413 [myid:3] - INFO [zookeeper2/172.18.0.4:3888:QuorumCnxManager$Listener@541] - Received connection request /172.18.0.2:58774
zookeeper2 | 2018-12-19 08:16:56,602 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:FastLeaderElection@852] - Notification time out: 400
zookeeper2 | 2018-12-19 08:16:56,657 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper2 | 2018-12-19 08:16:56,697 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper2 | 2018-12-19 08:16:56,731 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper2 | 2018-12-19 08:16:56,745 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper2 | 2018-12-19 08:16:56,780 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
kafka2 | create.topic.policy.class.name = null
kafka2 | default.replication.factor = 3
kafka2 | delete.records.purgatory.purge.interval.requests = 1
kafka2 | delete.topic.enable = true
kafka2 | fetch.purgatory.purge.interval.requests = 1000
kafka2 | group.initial.rebalance.delay.ms = 0
kafka2 | group.max.session.timeout.ms = 300000
kafka2 | group.min.session.timeout.ms = 6000
kafka2 | host.name =
kafka2 | inter.broker.listener.name = null
kafka2 | inter.broker.protocol.version = 1.0-IV0
kafka2 | leader.imbalance.check.interval.seconds = 300
kafka2 | leader.imbalance.per.broker.percentage = 10
kafka2 | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
kafka2 | listeners = null
kafka2 | log.cleaner.backoff.ms = 15000
kafka2 | log.cleaner.dedupe.buffer.size = 134217728
kafka2 | log.cleaner.delete.retention.ms = 86400000
kafka2 | log.cleaner.enable = true
kafka2 | log.cleaner.io.buffer.load.factor = 0.9
kafka2 | log.cleaner.io.buffer.size = 524288
kafka2 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
zookeeper1 | ZooKeeper JMX enabled by default
zookeeper1 | Using config: /conf/zoo.cfg
zookeeper1 | 2018-12-19 08:16:54,068 [myid:] - INFO [main:QuorumPeerConfig@124] - Reading configuration from: /conf/zoo.cfg
zookeeper1 | 2018-12-19 08:16:54,531 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper2 to address: zookeeper2/172.18.0.4
zookeeper1 | 2018-12-19 08:16:54,549 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper1 to address: zookeeper1/172.18.0.2
zookeeper1 | 2018-12-19 08:16:54,603 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper0 to address: zookeeper0/172.18.0.3
zookeeper1 | 2018-12-19 08:16:54,603 [myid:] - INFO [main:QuorumPeerConfig@352] - Defaulting to majority quorums
zookeeper1 | 2018-12-19 08:16:54,668 [myid:2] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
zookeeper0 | 2018-12-19 08:16:56,497 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:QuorumPeer@774] - LOOKING
zookeeper0 | 2018-12-19 08:16:56,522 [myid:1] - INFO [zookeeper0/172.18.0.3:3888:QuorumCnxManager$Listener@541] - Received connection request /172.18.0.4:59440
zookeeper0 | 2018-12-19 08:16:56,536 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:FastLeaderElection@818] - New election. My id = 1, proposed zxid=0x0
zookeeper0 | 2018-12-19 08:16:56,580 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper0 | 2018-12-19 08:16:56,598 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
kafka3 | delete.records.purgatory.purge.interval.requests = 1
kafka3 | delete.topic.enable = true
kafka3 | fetch.purgatory.purge.interval.requests = 1000
kafka3 | group.initial.rebalance.delay.ms = 0
kafka3 | group.max.session.timeout.ms = 300000
kafka3 | group.min.session.timeout.ms = 6000
kafka3 | host.name =
kafka3 | inter.broker.listener.name = null
kafka3 | inter.broker.protocol.version = 1.0-IV0
kafka3 | leader.imbalance.check.interval.seconds = 300
kafka3 | leader.imbalance.per.broker.percentage = 10
kafka3 | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
kafka3 | listeners = null
kafka3 | log.cleaner.backoff.ms = 15000
kafka3 | log.cleaner.dedupe.buffer.size = 134217728
kafka3 | log.cleaner.delete.retention.ms = 86400000
kafka3 | log.cleaner.enable = true
kafka3 | log.cleaner.io.buffer.load.factor = 0.9
kafka3 | log.cleaner.io.buffer.size = 524288
kafka3 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka3 | log.cleaner.min.cleanable.ratio = 0.5
kafka3 | log.cleaner.min.compaction.lag.ms = 0
kafka3 | log.cleaner.threads = 1
kafka1 | [2018-12-19 08:17:02,721] INFO KafkaConfig values:
kafka1 | advertised.host.name = null
kafka1 | advertised.listeners = null
kafka1 | advertised.port = null
kafka1 | alter.config.policy.class.name = null
kafka1 | authorizer.class.name =
kafka1 | auto.create.topics.enable = true
kafka1 | auto.leader.rebalance.enable = true
kafka1 | background.threads = 10
kafka1 | broker.id = 1
kafka1 | broker.id.generation.enable = true
kafka1 | broker.rack = null
kafka1 | compression.type = producer
kafka1 | connections.max.idle.ms = 600000
kafka1 | controlled.shutdown.enable = true
kafka1 | controlled.shutdown.max.retries = 3
kafka1 | controlled.shutdown.retry.backoff.ms = 5000
kafka1 | controller.socket.timeout.ms = 30000
kafka1 | create.topic.policy.class.name = null
kafka1 | default.replication.factor = 3
kafka1 | delete.records.purgatory.purge.interval.requests = 1
kafka1 | delete.topic.enable = true
kafka2 | log.cleaner.min.cleanable.ratio = 0.5
kafka2 | log.cleaner.min.compaction.lag.ms = 0
kafka2 | log.cleaner.threads = 1
kafka2 | log.cleanup.policy = [delete]
kafka2 | log.dir = /tmp/kafka-logs
kafka2 | log.dirs = /tmp/kafka-logs
kafka2 | log.flush.interval.messages = 9223372036854775807
kafka2 | log.flush.interval.ms = null
kafka2 | log.flush.offset.checkpoint.interval.ms = 60000
kafka2 | log.flush.scheduler.interval.ms = 9223372036854775807
kafka2 | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka2 | log.index.interval.bytes = 4096
kafka2 | log.index.size.max.bytes = 10485760
kafka2 | log.message.format.version = 1.0-IV0
kafka2 | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka2 | log.message.timestamp.type = CreateTime
kafka2 | log.preallocate = false
kafka2 | log.retention.bytes = -1
kafka2 | log.retention.check.interval.ms = 300000
kafka2 | log.retention.hours = 168
kafka2 | log.retention.minutes = null
kafka2 | log.retention.ms = -1
kafka2 | log.roll.hours = 168
kafka2 | log.roll.jitter.hours = 0
kafka2 | log.roll.jitter.ms = null
kafka2 | log.roll.ms = null
kafka2 | log.segment.bytes = 1073741824
kafka2 | log.segment.delete.delay.ms = 60000
kafka2 | max.connections.per.ip = 2147483647
kafka2 | max.connections.per.ip.overrides =
kafka2 | message.max.bytes = 1048576
kafka2 | metric.reporters = []
kafka2 | metrics.num.samples = 2
kafka2 | metrics.recording.level = INFO
kafka2 | metrics.sample.window.ms = 30000
kafka2 | min.insync.replicas = 2
kafka2 | num.io.threads = 8
kafka2 | num.network.threads = 3
kafka2 | num.partitions = 1
kafka2 | num.recovery.threads.per.data.dir = 1
kafka2 | num.replica.fetchers = 1
kafka2 | offset.metadata.max.bytes = 4096
kafka2 | offsets.commit.required.acks = -1
kafka2 | offsets.commit.timeout.ms = 5000
kafka2 | offsets.load.buffer.size = 5242880
kafka2 | offsets.retention.check.interval.ms = 600000
kafka2 | offsets.retention.minutes = 1440
kafka2 | offsets.topic.compression.codec = 0
kafka2 | offsets.topic.num.partitions = 50
kafka2 | offsets.topic.replication.factor = 1
kafka2 | offsets.topic.segment.bytes = 104857600
kafka2 | port = 9092
zookeeper0 | 2018-12-19 08:16:56,621 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper0 | 2018-12-19 08:16:56,638 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper0 | 2018-12-19 08:16:56,650 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
kafka0 | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
kafka0 | listeners = null
kafka0 | log.cleaner.backoff.ms = 15000
kafka0 | log.cleaner.dedupe.buffer.size = 134217728
kafka0 | log.cleaner.delete.retention.ms = 86400000
kafka0 | log.cleaner.enable = true
kafka0 | log.cleaner.io.buffer.load.factor = 0.9
kafka0 | log.cleaner.io.buffer.size = 524288
kafka0 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka0 | log.cleaner.min.cleanable.ratio = 0.5
kafka0 | log.cleaner.min.compaction.lag.ms = 0
kafka0 | log.cleaner.threads = 1
kafka0 | log.cleanup.policy = [delete]
kafka0 | log.dir = /tmp/kafka-logs
kafka0 | log.dirs = /tmp/kafka-logs
kafka0 | log.flush.interval.messages = 9223372036854775807
kafka0 | log.flush.interval.ms = null
kafka0 | log.flush.offset.checkpoint.interval.ms = 60000
kafka0 | log.flush.scheduler.interval.ms = 9223372036854775807
zookeeper2 | 2018-12-19 08:16:56,784 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper2 | 2018-12-19 08:16:56,986 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:QuorumPeer@856] - LEADING
zookeeper2 | 2018-12-19 08:16:57,000 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Leader@59] - TCP NoDelay set to: true
zookeeper2 | 2018-12-19 08:16:57,176 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
zookeeper2 | 2018-12-19 08:16:57,189 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:host.name=zookeeper2
zookeeper2 | 2018-12-19 08:16:57,193 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:java.version=1.8.0_181
zookeeper2 | 2018-12-19 08:16:57,199 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:java.vendor=Oracle Corporation
kafka3 | log.cleanup.policy = [delete]
kafka3 | log.dir = /tmp/kafka-logs
kafka3 | log.dirs = /tmp/kafka-logs
kafka3 | log.flush.interval.messages = 9223372036854775807
kafka3 | log.flush.interval.ms = null
kafka3 | log.flush.offset.checkpoint.interval.ms = 60000
kafka3 | log.flush.scheduler.interval.ms = 9223372036854775807
kafka3 | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka3 | log.index.interval.bytes = 4096
kafka3 | log.index.size.max.bytes = 10485760
kafka3 | log.message.format.version = 1.0-IV0
kafka3 | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka3 | log.message.timestamp.type = CreateTime
kafka3 | log.preallocate = false
kafka3 | log.retention.bytes = -1
kafka3 | log.retention.check.interval.ms = 300000
kafka3 | log.retention.hours = 168
kafka3 | log.retention.minutes = null
kafka3 | log.retention.ms = -1
kafka3 | log.roll.hours = 168
kafka3 | log.roll.jitter.hours = 0
kafka3 | log.roll.jitter.ms = null
kafka3 | log.roll.ms = null
kafka3 | log.segment.bytes = 1073741824
kafka2 | principal.builder.class = null
kafka2 | producer.purgatory.purge.interval.requests = 1000
kafka2 | queued.max.request.bytes = -1
kafka2 | queued.max.requests = 500
kafka2 | quota.consumer.default = 9223372036854775807
kafka2 | quota.producer.default = 9223372036854775807
kafka2 | quota.window.num = 11
kafka2 | quota.window.size.seconds = 1
kafka2 | replica.fetch.backoff.ms = 1000
kafka2 | replica.fetch.max.bytes = 1048576
kafka2 | replica.fetch.min.bytes = 1
kafka2 | replica.fetch.response.max.bytes = 10485760
kafka2 | replica.fetch.wait.max.ms = 500
kafka2 | replica.high.watermark.checkpoint.interval.ms = 5000
kafka2 | replica.lag.time.max.ms = 10000
kafka2 | replica.socket.receive.buffer.bytes = 65536
kafka2 | replica.socket.timeout.ms = 30000
kafka2 | replication.quota.window.num = 11
kafka2 | replication.quota.window.size.seconds = 1
kafka2 | request.timeout.ms = 30000
kafka2 | reserved.broker.max.id = 1000
kafka2 | sasl.enabled.mechanisms = [GSSAPI]
kafka2 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka2 | sasl.kerberos.min.time.before.relogin = 60000
zookeeper1 | 2018-12-19 08:16:54,668 [myid:2] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
zookeeper1 | 2018-12-19 08:16:54,724 [myid:2] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
zookeeper1 | 2018-12-19 08:16:54,958 [myid:2] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
zookeeper1 | 2018-12-19 08:16:55,041 [myid:2] - INFO [main:QuorumPeerMain@127] - Starting quorum peer
zookeeper1 | 2018-12-19 08:16:55,351 [myid:2] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
zookeeper1 | 2018-12-19 08:16:55,428 [myid:2] - INFO [main:QuorumPeer@1019] - tickTime set to 2000
zookeeper1 | 2018-12-19 08:16:55,465 [myid:2] - INFO [main:QuorumPeer@1039] - minSessionTimeout set to -1
zookeeper1 | 2018-12-19 08:16:55,465 [myid:2] - INFO [main:QuorumPeer@1050] - maxSessionTimeout set to -1
zookeeper1 | 2018-12-19 08:16:55,465 [myid:2] - INFO [main:QuorumPeer@1065] - initLimit set to 5
kafka0 | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka0 | log.index.interval.bytes = 4096
kafka0 | log.index.size.max.bytes = 10485760
kafka0 | log.message.format.version = 1.0-IV0
kafka0 | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka0 | log.message.timestamp.type = CreateTime
kafka0 | log.preallocate = false
kafka0 | log.retention.bytes = -1
kafka0 | log.retention.check.interval.ms = 300000
kafka0 | log.retention.hours = 168
kafka0 | log.retention.minutes = null
kafka0 | log.retention.ms = -1
kafka0 | log.roll.hours = 168
kafka0 | log.roll.jitter.hours = 0
kafka0 | log.roll.jitter.ms = null
kafka0 | log.roll.ms = null
kafka0 | log.segment.bytes = 1073741824
kafka0 | log.segment.delete.delay.ms = 60000
kafka0 | max.connections.per.ip = 2147483647
kafka0 | max.connections.per.ip.overrides =
kafka0 | message.max.bytes = 1048576
kafka0 | metric.reporters = []
kafka0 | metrics.num.samples = 2
kafka0 | metrics.recording.level = INFO
kafka0 | metrics.sample.window.ms = 30000
kafka0 | min.insync.replicas = 2
kafka0 | num.io.threads = 8
kafka0 | num.network.threads = 3
kafka0 | num.partitions = 1
kafka0 | num.recovery.threads.per.data.dir = 1
kafka0 | num.replica.fetchers = 1
kafka3 | log.segment.delete.delay.ms = 60000
kafka3 | max.connections.per.ip = 2147483647
kafka3 | max.connections.per.ip.overrides =
kafka3 | message.max.bytes = 1048576
kafka3 | metric.reporters = []
kafka3 | metrics.num.samples = 2
kafka3 | metrics.recording.level = INFO
kafka3 | metrics.sample.window.ms = 30000
kafka3 | min.insync.replicas = 2
kafka3 | num.io.threads = 8
kafka3 | num.network.threads = 3
kafka3 | num.partitions = 1
kafka3 | num.recovery.threads.per.data.dir = 1
kafka3 | num.replica.fetchers = 1
kafka3 | offset.metadata.max.bytes = 4096
kafka3 | offsets.commit.required.acks = -1
kafka3 | offsets.commit.timeout.ms = 5000
kafka3 | offsets.load.buffer.size = 5242880
kafka3 | offsets.retention.check.interval.ms = 600000
kafka3 | offsets.retention.minutes = 1440
kafka3 | offsets.topic.compression.codec = 0
kafka3 | offsets.topic.num.partitions = 50
kafka3 | offsets.topic.replication.factor = 1
kafka3 | offsets.topic.segment.bytes = 104857600
zookeeper2 | 2018-12-19 08:16:57,203 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
zookeeper2 | 2018-12-19 08:16:57,214 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:java.class.path=/zookeeper-3.4.9/bin/../build/classes:/zookeeper-3.4.9/bin/../build/lib/*.jar:/zookeeper-3.4.9/bin/../lib/slf4j-log4j12-1.6.1.jar:/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6.1.jar:/zookeeper-3.4.9/bin/../lib/netty-3.10.5.Final.jar:/zookeeper-3.4.9/bin/../lib/log4j-1.2.16.jar:/zookeeper-3.4.9/bin/../lib/jline-0.9.94.jar:/zookeeper-3.4.9/bin/../zookeeper-3.4.9.jar:/zookeeper-3.4.9/bin/../src/java/lib/*.jar:/conf:
zookeeper2 | 2018-12-19 08:16:57,219 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
zookeeper2 | 2018-12-19 08:16:57,392 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:java.io.tmpdir=/tmp
zookeeper1 | 2018-12-19 08:16:55,664 [myid:2] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
zookeeper1 | 2018-12-19 08:16:55,709 [myid:2] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
zookeeper1 | 2018-12-19 08:16:55,894 [myid:2] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: zookeeper1/172.18.0.2:3888
zookeeper1 | 2018-12-19 08:16:56,135 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:QuorumPeer@774] - LOOKING
zookeeper1 | 2018-12-19 08:16:56,207 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:FastLeaderElection@818] - New election. My id = 2, proposed zxid=0x0
zookeeper1 | 2018-12-19 08:16:56,389 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
kafka3 | port = 9092
kafka3 | principal.builder.class = null
kafka3 | producer.purgatory.purge.interval.requests = 1000
kafka3 | queued.max.request.bytes = -1
kafka3 | queued.max.requests = 500
kafka3 | quota.consumer.default = 9223372036854775807
kafka3 | quota.producer.default = 9223372036854775807
kafka3 | quota.window.num = 11
kafka3 | quota.window.size.seconds = 1
kafka3 | replica.fetch.backoff.ms = 1000
kafka3 | replica.fetch.max.bytes = 1048576
kafka3 | replica.fetch.min.bytes = 1
kafka3 | replica.fetch.response.max.bytes = 10485760
kafka3 | replica.fetch.wait.max.ms = 500
kafka3 | replica.high.watermark.checkpoint.interval.ms = 5000
kafka3 | replica.lag.time.max.ms = 10000
kafka3 | replica.socket.receive.buffer.bytes = 65536
kafka3 | replica.socket.timeout.ms = 30000
kafka3 | replication.quota.window.num = 11
kafka3 | replication.quota.window.size.seconds = 1
kafka3 | request.timeout.ms = 30000
zookeeper1 | 2018-12-19 08:16:56,414 [myid:2] - INFO [WorkerSender[myid=2]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (3, 2)
zookeeper1 | 2018-12-19 08:16:56,579 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper1 | 2018-12-19 08:16:56,596 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper1 | 2018-12-19 08:16:56,605 [myid:2] - INFO [zookeeper1/172.18.0.2:3888:QuorumCnxManager$Listener@541] - Received connection request /172.18.0.4:55700
zookeeper1 | 2018-12-19 08:16:56,632 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper1 | 2018-12-19 08:16:56,634 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper0 | 2018-12-19 08:16:56,863 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:QuorumPeer@844] - FOLLOWING
zookeeper0 | 2018-12-19 08:16:56,893 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Learner@86] - TCP NoDelay set to: true
zookeeper0 | 2018-12-19 08:16:57,052 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
zookeeper0 | 2018-12-19 08:16:57,066 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:host.name=zookeeper0
zookeeper0 | 2018-12-19 08:16:57,069 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:java.version=1.8.0_181
kafka0 | offset.metadata.max.bytes = 4096
kafka0 | offsets.commit.required.acks = -1
kafka0 | offsets.commit.timeout.ms = 5000
kafka0 | offsets.load.buffer.size = 5242880
kafka0 | offsets.retention.check.interval.ms = 600000
kafka0 | offsets.retention.minutes = 1440
kafka0 | offsets.topic.compression.codec = 0
kafka0 | offsets.topic.num.partitions = 50
kafka0 | offsets.topic.replication.factor = 1
kafka0 | offsets.topic.segment.bytes = 104857600
kafka0 | port = 9092
kafka0 | principal.builder.class = null
kafka0 | producer.purgatory.purge.interval.requests = 1000
kafka0 | queued.max.request.bytes = -1
kafka0 | queued.max.requests = 500
kafka0 | quota.consumer.default = 9223372036854775807
kafka0 | quota.producer.default = 9223372036854775807
kafka0 | quota.window.num = 11
kafka0 | quota.window.size.seconds = 1
kafka0 | replica.fetch.backoff.ms = 1000
kafka0 | replica.fetch.max.bytes = 1048576
kafka0 | replica.fetch.min.bytes = 1
kafka0 | replica.fetch.response.max.bytes = 10485760
kafka0 | replica.fetch.wait.max.ms = 500
kafka0 | replica.high.watermark.checkpoint.interval.ms = 5000
kafka0 | replica.lag.time.max.ms = 10000
kafka3 | reserved.broker.max.id = 1000
kafka3 | sasl.enabled.mechanisms = [GSSAPI]
kafka3 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka3 | sasl.kerberos.min.time.before.relogin = 60000
kafka3 | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka3 | sasl.kerberos.service.name = null
kafka3 | sasl.kerberos.ticket.renew.jitter = 0.05
kafka3 | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka3 | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka3 | security.inter.broker.protocol = PLAINTEXT
kafka3 | socket.receive.buffer.bytes = 102400
kafka3 | socket.request.max.bytes = 104857600
kafka3 | socket.send.buffer.bytes = 102400
kafka3 | ssl.cipher.suites = null
kafka3 | ssl.client.auth = none
kafka3 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka3 | ssl.endpoint.identification.algorithm = null
kafka3 | ssl.key.password = null
kafka3 | ssl.keymanager.algorithm = SunX509
kafka3 | ssl.keystore.location = null
kafka1 | fetch.purgatory.purge.interval.requests = 1000
kafka1 | group.initial.rebalance.delay.ms = 0
kafka1 | group.max.session.timeout.ms = 300000
kafka1 | group.min.session.timeout.ms = 6000
kafka1 | host.name =
kafka1 | inter.broker.listener.name = null
kafka1 | inter.broker.protocol.version = 1.0-IV0
kafka1 | leader.imbalance.check.interval.seconds = 300
kafka1 | leader.imbalance.per.broker.percentage = 10
kafka1 | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
kafka1 | listeners = null
kafka1 | log.cleaner.backoff.ms = 15000
kafka1 | log.cleaner.dedupe.buffer.size = 134217728
kafka1 | log.cleaner.delete.retention.ms = 86400000
kafka1 | log.cleaner.enable = true
kafka1 | log.cleaner.io.buffer.load.factor = 0.9
kafka1 | log.cleaner.io.buffer.size = 524288
kafka1 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka1 | log.cleaner.min.cleanable.ratio = 0.5
kafka1 | log.cleaner.min.compaction.lag.ms = 0
kafka1 | log.cleaner.threads = 1
kafka1 | log.cleanup.policy = [delete]
kafka1 | log.dir = /tmp/kafka-logs
kafka1 | log.dirs = /tmp/kafka-logs
kafka1 | log.flush.interval.messages = 9223372036854775807
kafka1 | log.flush.interval.ms = null
zookeeper0 | 2018-12-19 08:16:57,070 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:java.vendor=Oracle Corporation
zookeeper0 | 2018-12-19 08:16:57,070 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
zookeeper0 | 2018-12-19 08:16:57,071 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:java.class.path=/zookeeper-3.4.9/bin/../build/classes:/zookeeper-3.4.9/bin/../build/lib/*.jar:/zookeeper-3.4.9/bin/../lib/slf4j-log4j12-1.6.1.jar:/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6.1.jar:/zookeeper-3.4.9/bin/../lib/netty-3.10.5.Final.jar:/zookeeper-3.4.9/bin/../lib/log4j-1.2.16.jar:/zookeeper-3.4.9/bin/../lib/jline-0.9.94.jar:/zookeeper-3.4.9/bin/../zookeeper-3.4.9.jar:/zookeeper-3.4.9/bin/../src/java/lib/*.jar:/conf:
kafka0 | replica.socket.receive.buffer.bytes = 65536
kafka0 | replica.socket.timeout.ms = 30000
kafka0 | replication.quota.window.num = 11
kafka0 | replication.quota.window.size.seconds = 1
kafka0 | request.timeout.ms = 30000
kafka0 | reserved.broker.max.id = 1000
kafka0 | sasl.enabled.mechanisms = [GSSAPI]
kafka0 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka0 | sasl.kerberos.min.time.before.relogin = 60000
kafka0 | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka0 | sasl.kerberos.service.name = null
kafka0 | sasl.kerberos.ticket.renew.jitter = 0.05
kafka0 | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka0 | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka0 | security.inter.broker.protocol = PLAINTEXT
kafka0 | socket.receive.buffer.bytes = 102400
kafka0 | socket.request.max.bytes = 104857600
kafka0 | socket.send.buffer.bytes = 102400
kafka0 | ssl.cipher.suites = null
kafka0 | ssl.client.auth = none
kafka0 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka0 | ssl.endpoint.identification.algorithm = null
kafka0 | ssl.key.password = null
kafka0 | ssl.keymanager.algorithm = SunX509
kafka0 | ssl.keystore.location = null
kafka0 | ssl.keystore.password = null
kafka0 | ssl.keystore.type = JKS
kafka0 | ssl.protocol = TLS
kafka0 | ssl.provider = null
kafka0 | ssl.secure.random.implementation = null
kafka0 | ssl.trustmanager.algorithm = PKIX
kafka0 | ssl.truststore.location = null
kafka0 | ssl.truststore.password = null
kafka0 | ssl.truststore.type = JKS
kafka0 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka0 | transaction.max.timeout.ms = 900000
kafka0 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka0 | transaction.state.log.load.buffer.size = 5242880
kafka0 | transaction.state.log.min.isr = 1
kafka0 | transaction.state.log.num.partitions = 50
kafka0 | transaction.state.log.replication.factor = 1
kafka0 | transaction.state.log.segment.bytes = 104857600
kafka0 | transactional.id.expiration.ms = 604800000
kafka0 | unclean.leader.election.enable = false
kafka0 | zookeeper.connect = zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
kafka0 | zookeeper.connection.timeout.ms = 6000
kafka0 | zookeeper.session.timeout.ms = 6000
kafka0 | zookeeper.set.acl = false
kafka0 | zookeeper.sync.time.ms = 2000
kafka0 | (kafka.server.KafkaConfig)
kafka3 | ssl.keystore.password = null
kafka3 | ssl.keystore.type = JKS
kafka3 | ssl.protocol = TLS
kafka3 | ssl.provider = null
kafka3 | ssl.secure.random.implementation = null
kafka3 | ssl.trustmanager.algorithm = PKIX
kafka3 | ssl.truststore.location = null
kafka3 | ssl.truststore.password = null
kafka3 | ssl.truststore.type = JKS
kafka3 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka3 | transaction.max.timeout.ms = 900000
kafka3 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka3 | transaction.state.log.load.buffer.size = 5242880
kafka3 | transaction.state.log.min.isr = 1
kafka3 | transaction.state.log.num.partitions = 50
kafka3 | transaction.state.log.replication.factor = 1
kafka3 | transaction.state.log.segment.bytes = 104857600
kafka3 | transactional.id.expiration.ms = 604800000
kafka3 | unclean.leader.election.enable = false
kafka3 | zookeeper.connect = zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
kafka3 | zookeeper.connection.timeout.ms = 6000
kafka3 | zookeeper.session.timeout.ms = 6000
kafka3 | zookeeper.set.acl = false
kafka3 | zookeeper.sync.time.ms = 2000
kafka3 | (kafka.server.KafkaConfig)
kafka3 | [2018-12-19 08:17:02,921] INFO starting (kafka.server.KafkaServer)
zookeeper2 | 2018-12-19 08:16:57,394 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:java.compiler=<NA>
zookeeper2 | 2018-12-19 08:16:57,404 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:os.name=Linux
zookeeper2 | 2018-12-19 08:16:57,410 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:os.arch=amd64
zookeeper2 | 2018-12-19 08:16:57,413 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:os.version=4.9.93-linuxkit-aufs
zookeeper2 | 2018-12-19 08:16:57,416 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:user.name=zookeeper
zookeeper2 | 2018-12-19 08:16:57,419 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:user.home=/home/zookeeper
zookeeper2 | 2018-12-19 08:16:57,422 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment@100] - Server environment:user.dir=/zookeeper-3.4.9
kafka2 | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka2 | sasl.kerberos.service.name = null
kafka2 | sasl.kerberos.ticket.renew.jitter = 0.05
kafka2 | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka2 | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka2 | security.inter.broker.protocol = PLAINTEXT
kafka2 | socket.receive.buffer.bytes = 102400
kafka2 | socket.request.max.bytes = 104857600
kafka2 | socket.send.buffer.bytes = 102400
kafka2 | ssl.cipher.suites = null
kafka2 | ssl.client.auth = none
kafka2 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka2 | ssl.endpoint.identification.algorithm = null
kafka2 | ssl.key.password = null
kafka2 | ssl.keymanager.algorithm = SunX509
kafka2 | ssl.keystore.location = null
kafka2 | ssl.keystore.password = null
kafka2 | ssl.keystore.type = JKS
kafka2 | ssl.protocol = TLS
kafka2 | ssl.provider = null
kafka2 | ssl.secure.random.implementation = null
kafka2 | ssl.trustmanager.algorithm = PKIX
kafka2 | ssl.truststore.location = null
kafka2 | ssl.truststore.password = null
kafka2 | ssl.truststore.type = JKS
zookeeper1 | 2018-12-19 08:16:56,659 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
zookeeper1 | 2018-12-19 08:16:56,861 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:QuorumPeer@844] - FOLLOWING
zookeeper1 | 2018-12-19 08:16:56,927 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Learner@86] - TCP NoDelay set to: true
zookeeper1 | 2018-12-19 08:16:57,147 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
zookeeper1 | 2018-12-19 08:16:57,147 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:host.name=zookeeper1
zookeeper1 | 2018-12-19 08:16:57,147 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:java.version=1.8.0_181
kafka0 | [2018-12-19 08:17:03,355] INFO starting (kafka.server.KafkaServer)
kafka0 | [2018-12-19 08:17:03,382] INFO Connecting to zookeeper on zookeeper0:2181,zookeeper1:2181,zookeeper2:2181 (kafka.server.KafkaServer)
kafka0 | [2018-12-19 08:17:03,629] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
kafka0 | [2018-12-19 08:17:03,720] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:03,722] INFO Client environment:host.name=kafka0 (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:03,732] INFO Client environment:java.version=1.8.0_181 (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:03,732] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
kafka2 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka2 | transaction.max.timeout.ms = 900000
kafka2 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka2 | transaction.state.log.load.buffer.size = 5242880
kafka2 | transaction.state.log.min.isr = 1
kafka2 | transaction.state.log.num.partitions = 50
kafka2 | transaction.state.log.replication.factor = 1
kafka2 | transaction.state.log.segment.bytes = 104857600
kafka2 | transactional.id.expiration.ms = 604800000
kafka2 | unclean.leader.election.enable = false
kafka2 | zookeeper.connect = zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
kafka2 | zookeeper.connection.timeout.ms = 6000
kafka2 | zookeeper.session.timeout.ms = 6000
kafka2 | zookeeper.set.acl = false
kafka2 | zookeeper.sync.time.ms = 2000
kafka2 | (kafka.server.KafkaConfig)
kafka2 | [2018-12-19 08:17:02,624] INFO starting (kafka.server.KafkaServer)
kafka1 | log.flush.offset.checkpoint.interval.ms = 60000
kafka1 | log.flush.scheduler.interval.ms = 9223372036854775807
kafka1 | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka1 | log.index.interval.bytes = 4096
kafka1 | log.index.size.max.bytes = 10485760
kafka1 | log.message.format.version = 1.0-IV0
kafka1 | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka1 | log.message.timestamp.type = CreateTime
kafka1 | log.preallocate = false
kafka1 | log.retention.bytes = -1
kafka1 | log.retention.check.interval.ms = 300000
kafka1 | log.retention.hours = 168
kafka1 | log.retention.minutes = null
kafka1 | log.retention.ms = -1
kafka1 | log.roll.hours = 168
kafka1 | log.roll.jitter.hours = 0
kafka1 | log.roll.jitter.ms = null
kafka1 | log.roll.ms = null
kafka1 | log.segment.bytes = 1073741824
kafka1 | log.segment.delete.delay.ms = 60000
kafka1 | max.connections.per.ip = 2147483647
kafka1 | max.connections.per.ip.overrides =
kafka1 | message.max.bytes = 1048576
kafka1 | metric.reporters = []
kafka1 | metrics.num.samples = 2
kafka0 | [2018-12-19 08:17:03,732] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
zookeeper1 | 2018-12-19 08:16:57,156 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:java.vendor=Oracle Corporation
zookeeper1 | 2018-12-19 08:16:57,164 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
kafka2 | [2018-12-19 08:17:02,640] INFO Connecting to zookeeper on zookeeper0:2181,zookeeper1:2181,zookeeper2:2181 (kafka.server.KafkaServer)
kafka2 | [2018-12-19 08:17:02,703] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
kafka2 | [2018-12-19 08:17:02,725] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:02,727] INFO Client environment:host.name=kafka2 (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:02,729] INFO Client environment:java.version=1.8.0_181 (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:02,730] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
zookeeper0 | 2018-12-19 08:16:57,076 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
zookeeper0 | 2018-12-19 08:16:57,150 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:java.io.tmpdir=/tmp
zookeeper0 | 2018-12-19 08:16:57,151 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:java.compiler=<NA>
zookeeper0 | 2018-12-19 08:16:57,154 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:os.name=Linux
kafka3 | [2018-12-19 08:17:02,931] INFO Connecting to zookeeper on zookeeper0:2181,zookeeper1:2181,zookeeper2:2181 (kafka.server.KafkaServer)
kafka3 | [2018-12-19 08:17:03,069] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
kafka3 | [2018-12-19 08:17:03,092] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,092] INFO Client environment:host.name=kafka3 (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,093] INFO Client environment:java.version=1.8.0_181 (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,093] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,094] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
zookeeper0 | 2018-12-19 08:16:57,157 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:os.arch=amd64
zookeeper0 | 2018-12-19 08:16:57,158 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:os.version=4.9.93-linuxkit-aufs
zookeeper0 | 2018-12-19 08:16:57,196 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:user.name=zookeeper
zookeeper0 | 2018-12-19 08:16:57,196 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:user.home=/home/zookeeper
zookeeper0 | 2018-12-19 08:16:57,205 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Environment@100] - Server environment:user.dir=/zookeeper-3.4.9
zookeeper0 | 2018-12-19 08:16:57,265 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:ZooKeeperServer@173] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /datalog/version-2 snapdir /data/version-2
zookeeper0 | 2018-12-19 08:16:57,272 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Follower@61] - FOLLOWING - LEADER ELECTION TOOK - 735
zookeeper0 | 2018-12-19 08:16:57,351 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper2 to address: zookeeper2/172.18.0.4
kafka1 | metrics.recording.level = INFO
kafka1 | metrics.sample.window.ms = 30000
kafka1 | min.insync.replicas = 2
kafka1 | num.io.threads = 8
kafka1 | num.network.threads = 3
kafka1 | num.partitions = 1
kafka1 | num.recovery.threads.per.data.dir = 1
kafka1 | num.replica.fetchers = 1
kafka1 | offset.metadata.max.bytes = 4096
kafka1 | offsets.commit.required.acks = -1
kafka1 | offsets.commit.timeout.ms = 5000
kafka1 | offsets.load.buffer.size = 5242880
kafka1 | offsets.retention.check.interval.ms = 600000
kafka1 | offsets.retention.minutes = 1440
kafka1 | offsets.topic.compression.codec = 0
kafka1 | offsets.topic.num.partitions = 50
kafka1 | offsets.topic.replication.factor = 1
kafka1 | offsets.topic.segment.bytes = 104857600
kafka1 | port = 9092
kafka1 | principal.builder.class = null
kafka1 | producer.purgatory.purge.interval.requests = 1000
kafka1 | queued.max.request.bytes = -1
kafka1 | queued.max.requests = 500
kafka1 | quota.consumer.default = 9223372036854775807
kafka1 | quota.producer.default = 9223372036854775807
kafka1 | quota.window.num = 11
kafka1 | quota.window.size.seconds = 1
kafka1 | replica.fetch.backoff.ms = 1000
kafka1 | replica.fetch.max.bytes = 1048576
kafka1 | replica.fetch.min.bytes = 1
kafka1 | replica.fetch.response.max.bytes = 10485760
kafka1 | replica.fetch.wait.max.ms = 500
zookeeper0 | 2018-12-19 08:16:57,477 [myid:1] - WARN [QuorumPeer[myid=1]/0.0.0.0:2181:Learner@236] - Unexpected exception, tries=0, connecting to zookeeper2/172.18.0.4:2888
zookeeper0 | java.net.ConnectException: Connection refused (Connection refused)
zookeeper0 | at java.net.PlainSocketImpl.socketConnect(Native Method)
zookeeper0 | at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
zookeeper0 | at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
zookeeper0 | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
zookeeper0 | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
zookeeper0 | at java.net.Socket.connect(Socket.java:589)
zookeeper0 | at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:228)
zookeeper0 | at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:69)
zookeeper0 | at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:846)
kafka1 | replica.high.watermark.checkpoint.interval.ms = 5000
kafka1 | replica.lag.time.max.ms = 10000
kafka1 | replica.socket.receive.buffer.bytes = 65536
kafka1 | replica.socket.timeout.ms = 30000
kafka1 | replication.quota.window.num = 11
kafka1 | replication.quota.window.size.seconds = 1
kafka1 | request.timeout.ms = 30000
kafka1 | reserved.broker.max.id = 1000
kafka1 | sasl.enabled.mechanisms = [GSSAPI]
kafka1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka1 | sasl.kerberos.min.time.before.relogin = 60000
kafka1 | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka1 | sasl.kerberos.service.name = null
kafka1 | sasl.kerberos.ticket.renew.jitter = 0.05
kafka1 | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka1 | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka1 | security.inter.broker.protocol = PLAINTEXT
kafka1 | socket.receive.buffer.bytes = 102400
kafka1 | socket.request.max.bytes = 104857600
kafka1 | socket.send.buffer.bytes = 102400
kafka1 | ssl.cipher.suites = null
kafka1 | ssl.client.auth = none
kafka1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka1 | ssl.endpoint.identification.algorithm = null
kafka1 | ssl.key.password = null
kafka1 | ssl.keymanager.algorithm = SunX509
kafka1 | ssl.keystore.location = null
kafka1 | ssl.keystore.password = null
kafka1 | ssl.keystore.type = JKS
kafka1 | ssl.protocol = TLS
kafka1 | ssl.provider = null
kafka1 | ssl.secure.random.implementation = null
kafka1 | ssl.trustmanager.algorithm = PKIX
kafka1 | ssl.truststore.location = null
kafka1 | ssl.truststore.password = null
kafka1 | ssl.truststore.type = JKS
kafka1 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka1 | transaction.max.timeout.ms = 900000
kafka1 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka1 | transaction.state.log.load.buffer.size = 5242880
kafka1 | transaction.state.log.min.isr = 1
zookeeper2 | 2018-12-19 08:16:57,491 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:ZooKeeperServer@173] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /datalog/version-2 snapdir /data/version-2
zookeeper2 | 2018-12-19 08:16:57,506 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Leader@361] - LEADING - LEADER ELECTION TOOK - 1229
kafka2 | [2018-12-19 08:17:02,730] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
zookeeper0 | 2018-12-19 08:16:58,596 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Learner@326] - Getting a diff from the leader 0x0
zookeeper0 | 2018-12-19 08:16:58,617 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:FileTxnSnapLog@240] - Snapshotting: 0x0 to /data/version-2/snapshot.0
zookeeper0 | 2018-12-19 08:17:02,900 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.18.0.8:55138
zookeeper0 | 2018-12-19 08:17:02,946 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /172.18.0.8:55138
zookeeper0 | 2018-12-19 08:17:02,992 [myid:1] - WARN [QuorumPeer[myid=1]/0.0.0.0:2181:Follower@116] - Got zxid 0x100000001 expected 0x1
kafka1 | transaction.state.log.num.partitions = 50
kafka1 | transaction.state.log.replication.factor = 1
kafka1 | transaction.state.log.segment.bytes = 104857600
kafka1 | transactional.id.expiration.ms = 604800000
kafka1 | unclean.leader.election.enable = false
kafka1 | zookeeper.connect = zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
kafka1 | zookeeper.connection.timeout.ms = 6000
kafka1 | zookeeper.session.timeout.ms = 6000
kafka1 | zookeeper.set.acl = false
kafka1 | zookeeper.sync.time.ms = 2000
kafka1 | (kafka.server.KafkaConfig)
zookeeper1 | 2018-12-19 08:16:57,164 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:java.class.path=/zookeeper-3.4.9/bin/../build/classes:/zookeeper-3.4.9/bin/../build/lib/*.jar:/zookeeper-3.4.9/bin/../lib/slf4j-log4j12-1.6.1.jar:/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6.1.jar:/zookeeper-3.4.9/bin/../lib/netty-3.10.5.Final.jar:/zookeeper-3.4.9/bin/../lib/log4j-1.2.16.jar:/zookeeper-3.4.9/bin/../lib/jline-0.9.94.jar:/zookeeper-3.4.9/bin/../zookeeper-3.4.9.jar:/zookeeper-3.4.9/bin/../src/java/lib/*.jar:/conf:
zookeeper1 | 2018-12-19 08:16:57,165 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
zookeeper1 | 2018-12-19 08:16:57,165 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:java.io.tmpdir=/tmp
zookeeper2 | 2018-12-19 08:16:58,525 [myid:3] - INFO [LearnerHandler-/172.18.0.2:34804:LearnerHandler@329] - Follower sid: 2 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@23515657
zookeeper2 | 2018-12-19 08:16:58,548 [myid:3] - INFO [LearnerHandler-/172.18.0.2:34804:LearnerHandler@384] - Synchronizing with Follower sid: 2 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0
kafka1 | [2018-12-19 08:17:03,535] INFO starting (kafka.server.KafkaServer)
kafka1 | [2018-12-19 08:17:03,554] INFO Connecting to zookeeper on zookeeper0:2181,zookeeper1:2181,zookeeper2:2181 (kafka.server.KafkaServer)
zookeeper1 | 2018-12-19 08:16:57,165 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:java.compiler=<NA>
zookeeper1 | 2018-12-19 08:16:57,166 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:os.name=Linux
zookeeper1 | 2018-12-19 08:16:57,167 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:os.arch=amd64
zookeeper0 | 2018-12-19 08:17:03,010 [myid:1] - INFO [SyncThread:1:FileTxnLog@203] - Creating new log file: log.100000001
zookeeper0 | 2018-12-19 08:17:03,114 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@673] - Established session 0x167c58a0ef90000 with negotiated timeout 6000 for client /172.18.0.8:55138
zookeeper1 | 2018-12-19 08:16:57,173 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:os.version=4.9.93-linuxkit-aufs
zookeeper1 | 2018-12-19 08:16:57,173 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:user.name=zookeeper
zookeeper1 | 2018-12-19 08:16:57,191 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:user.home=/home/zookeeper
zookeeper1 | 2018-12-19 08:16:57,191 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Environment@100] - Server environment:user.dir=/zookeeper-3.4.9
zookeeper1 | 2018-12-19 08:16:57,252 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:ZooKeeperServer@173] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /datalog/version-2 snapdir /data/version-2
zookeeper1 | 2018-12-19 08:16:57,270 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Follower@61] - FOLLOWING - LEADER ELECTION TOOK - 1063
zookeeper1 | 2018-12-19 08:16:57,344 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper2 to address: zookeeper2/172.18.0.4
zookeeper1 | 2018-12-19 08:16:57,393 [myid:2] - WARN [QuorumPeer[myid=2]/0.0.0.0:2181:Learner@236] - Unexpected exception, tries=0, connecting to zookeeper2/172.18.0.4:2888
zookeeper1 | java.net.ConnectException: Connection refused (Connection refused)
kafka0 | [2018-12-19 08:17:03,733] INFO Client environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/connect-api-1.0.0.jar:/opt/kafka/bin/../libs/connect-file-1.0.0.jar:/opt/kafka/bin/../libs/connect-json-1.0.0.jar:/opt/kafka/bin/../libs/connect-runtime-1.0.0.jar:/opt/kafka/bin/../libs/connect-transforms-1.0.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b32.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.1.jar:/opt/kafka/bin/../libs/jackson-core-2.9.1.jar:/opt/kafka/bin/../libs/jackson-databind-2.9.1.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.9.1.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.1.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.1.jar:/opt/kafka/bin/../libs/javassist-3.20.0-GA.jar:/opt/kafka/bin/../libs/javassist-3.21.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b32.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.25.1.jar:/opt/kafka/bin/../libs/jersey-common-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.25.1.jar:/opt/kafka/bin/../libs/jersey-guava-2.25.1.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.25.1.jar:/opt/kafka/bin/../libs/jersey-server-2.25.1.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-http-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-io-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-security-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-server-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-util-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-1.0.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-1.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-1.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-1.0.0.jar:/opt/kafka/bin/../libs/kafka-tools-1.0.0.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.4.jar:/opt/kafka/bin/../libs/maven-artifact-3.5.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.0.24.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.7.3.jar:/opt/kafka/bin/../libs/scala-library-2.11.11.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.4.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:03,734] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:03,734] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
zookeeper2 | 2018-12-19 08:16:58,548 [myid:3] - INFO [LearnerHandler-/172.18.0.2:34804:LearnerHandler@393] - leader and follower are in sync, zxid=0x0
zookeeper2 | 2018-12-19 08:16:58,550 [myid:3] - INFO [LearnerHandler-/172.18.0.2:34804:LearnerHandler@458] - Sending DIFF
kafka1 | [2018-12-19 08:17:03,672] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
kafka1 | [2018-12-19 08:17:03,754] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,754] INFO Client environment:host.name=kafka1 (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,756] INFO Client environment:java.version=1.8.0_181 (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,756] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,756] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
zookeeper1 | at java.net.PlainSocketImpl.socketConnect(Native Method)
zookeeper1 | at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
kafka0 | [2018-12-19 08:17:03,734] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:03,735] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:03,735] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:03,736] INFO Client environment:os.version=4.9.93-linuxkit-aufs (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:03,736] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:03,736] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:03,737] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:03,740] INFO Initiating client connection, connectString=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@6572421 (org.apache.zookeeper.ZooKeeper)
zookeeper2 | 2018-12-19 08:16:58,577 [myid:3] - INFO [LearnerHandler-/172.18.0.3:43546:LearnerHandler@329] - Follower sid: 1 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@23468299
zookeeper2 | 2018-12-19 08:16:58,589 [myid:3] - INFO [LearnerHandler-/172.18.0.3:43546:LearnerHandler@384] - Synchronizing with Follower sid: 1 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0
zookeeper2 | 2018-12-19 08:16:58,592 [myid:3] - INFO [LearnerHandler-/172.18.0.3:43546:LearnerHandler@393] - leader and follower are in sync, zxid=0x0
zookeeper2 | 2018-12-19 08:16:58,593 [myid:3] - INFO [LearnerHandler-/172.18.0.3:43546:LearnerHandler@458] - Sending DIFF
zookeeper2 | 2018-12-19 08:16:58,629 [myid:3] - INFO [LearnerHandler-/172.18.0.2:34804:LearnerHandler@518] - Received NEWLEADER-ACK message from 2
zookeeper2 | 2018-12-19 08:16:58,630 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Leader@952] - Have quorum of supporters, sids: [ 2,3 ]; starting up and setting last processed zxid: 0x100000000
kafka0 | [2018-12-19 08:17:03,834] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
kafka0 | [2018-12-19 08:17:03,927] INFO Opening socket connection to server zookeeper2.hlf_net/172.18.0.4:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka0 | [2018-12-19 08:17:04,035] INFO Socket connection established to zookeeper2.hlf_net/172.18.0.4:2181, initiating session (org.apache.zookeeper.ClientCnxn)
kafka0 | [2018-12-19 08:17:04,138] INFO Session establishment complete on server zookeeper2.hlf_net/172.18.0.4:2181, sessionid = 0x367c58a0f0b0000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka0 | [2018-12-19 08:17:04,143] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
kafka0 | [2018-12-19 08:17:05,563] INFO Cluster ID = 9fpp_mD2ROKEcsrLwmpYQw (kafka.server.KafkaServer)
kafka2 | [2018-12-19 08:17:02,731] INFO Client environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/connect-api-1.0.0.jar:/opt/kafka/bin/../libs/connect-file-1.0.0.jar:/opt/kafka/bin/../libs/connect-json-1.0.0.jar:/opt/kafka/bin/../libs/connect-runtime-1.0.0.jar:/opt/kafka/bin/../libs/connect-transforms-1.0.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b32.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.1.jar:/opt/kafka/bin/../libs/jackson-core-2.9.1.jar:/opt/kafka/bin/../libs/jackson-databind-2.9.1.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.9.1.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.1.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.1.jar:/opt/kafka/bin/../libs/javassist-3.20.0-GA.jar:/opt/kafka/bin/../libs/javassist-3.21.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b32.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.25.1.jar:/opt/kafka/bin/../libs/jersey-common-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.25.1.jar:/opt/kafka/bin/../libs/jersey-guava-2.25.1.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.25.1.jar:/opt/kafka/bin/../libs/jersey-server-2.25.1.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-http-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-io-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-security-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-server-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-util-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-1.0.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-1.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-1.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-1.0.0.jar:/opt/kafka/bin/../libs/kafka-tools-1.0.0.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.4.jar:/opt/kafka/bin/../libs/maven-artifact-3.5.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.0.24.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.7.3.jar:/opt/kafka/bin/../libs/scala-library-2.11.11.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.4.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:02,733] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:02,735] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:02,735] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka0 | [2018-12-19 08:17:05,609] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka0 | [2018-12-19 08:17:05,876] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka0 | [2018-12-19 08:17:05,889] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka0 | [2018-12-19 08:17:05,899] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka0 | [2018-12-19 08:17:06,325] INFO Log directory '/tmp/kafka-logs' not found, creating it. (kafka.log.LogManager)
kafka0 | [2018-12-19 08:17:06,442] INFO Loading logs. (kafka.log.LogManager)
kafka0 | [2018-12-19 08:17:06,546] INFO Logs loading complete in 94 ms. (kafka.log.LogManager)
kafka0 | [2018-12-19 08:17:06,747] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
zookeeper1 | at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
zookeeper1 | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
zookeeper1 | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
zookeeper1 | at java.net.Socket.connect(Socket.java:589)
zookeeper2 | 2018-12-19 08:16:58,660 [myid:3] - INFO [LearnerHandler-/172.18.0.3:43546:LearnerHandler@518] - Received NEWLEADER-ACK message from 1
zookeeper2 | 2018-12-19 08:17:02,976 [myid:3] - INFO [SyncThread:3:FileTxnLog@203] - Creating new log file: log.100000001
zookeeper2 | 2018-12-19 08:17:03,465 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x3 zxid:0x100000004 txntype:-1 reqpath:n/a Error Path:/consumers Error:KeeperErrorCode = NodeExists for /consumers
zookeeper2 | 2018-12-19 08:17:03,509 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x167c58a0ef90000 type:create cxid:0x5 zxid:0x100000005 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
kafka0 | [2018-12-19 08:17:06,757] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka0 | [2018-12-19 08:17:08,676] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka0 | [2018-12-19 08:17:08,742] INFO [SocketServer brokerId=0] Started 1 acceptor threads (kafka.network.SocketServer)
kafka0 | [2018-12-19 08:17:08,881] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka0 | [2018-12-19 08:17:08,887] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
zookeeper1 | at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:228)
zookeeper1 | at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:69)
zookeeper1 | at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:846)
zookeeper1 | 2018-12-19 08:16:58,561 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:Learner@326] - Getting a diff from the leader 0x0
kafka0 | [2018-12-19 08:17:08,896] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka0 | [2018-12-19 08:17:09,073] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka0 | [2018-12-19 08:17:09,551] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka0 | [2018-12-19 08:17:09,646] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka0 | [2018-12-19 08:17:09,653] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka0 | [2018-12-19 08:17:09,715] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka2 | [2018-12-19 08:17:02,740] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:02,741] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:02,742] INFO Client environment:os.version=4.9.93-linuxkit-aufs (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:02,743] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:02,745] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:02,747] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,094] INFO Client environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/connect-api-1.0.0.jar:/opt/kafka/bin/../libs/connect-file-1.0.0.jar:/opt/kafka/bin/../libs/connect-json-1.0.0.jar:/opt/kafka/bin/../libs/connect-runtime-1.0.0.jar:/opt/kafka/bin/../libs/connect-transforms-1.0.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b32.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.1.jar:/opt/kafka/bin/../libs/jackson-core-2.9.1.jar:/opt/kafka/bin/../libs/jackson-databind-2.9.1.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.9.1.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.1.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.1.jar:/opt/kafka/bin/../libs/javassist-3.20.0-GA.jar:/opt/kafka/bin/../libs/javassist-3.21.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b32.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.25.1.jar:/opt/kafka/bin/../libs/jersey-common-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.25.1.jar:/opt/kafka/bin/../libs/jersey-guava-2.25.1.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.25.1.jar:/opt/kafka/bin/../libs/jersey-server-2.25.1.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-http-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-io-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-security-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-server-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-util-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-1.0.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-1.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-1.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-1.0.0.jar:/opt/kafka/bin/../libs/kafka-tools-1.0.0.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.4.jar:/opt/kafka/bin/../libs/maven-artifact-3.5.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.0.24.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.7.3.jar:/opt/kafka/bin/../libs/scala-library-2.11.11.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.4.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,095] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,095] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,096] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,097] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,097] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,097] INFO Client environment:os.version=4.9.93-linuxkit-aufs (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,097] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,098] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,098] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
zookeeper2 | 2018-12-19 08:17:03,609 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x5 zxid:0x100000006 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
zookeeper2 | 2018-12-19 08:17:03,652 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0xb zxid:0x10000000a txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config
zookeeper2 | 2018-12-19 08:17:03,663 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x167c58a0ef90000 type:create cxid:0x6 zxid:0x10000000b txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
zookeeper2 | 2018-12-19 08:17:03,685 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x167c58a0ef90000 type:create cxid:0x7 zxid:0x10000000d txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
kafka3 | [2018-12-19 08:17:03,104] INFO Initiating client connection, connectString=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@6572421 (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:03,175] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
kafka3 | [2018-12-19 08:17:03,194] INFO Opening socket connection to server zookeeper1.hlf_net/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka3 | [2018-12-19 08:17:03,241] INFO Socket connection established to zookeeper1.hlf_net/172.18.0.2:2181, initiating session (org.apache.zookeeper.ClientCnxn)
zookeeper1 | 2018-12-19 08:16:58,586 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:FileTxnSnapLog@240] - Snapshotting: 0x0 to /data/version-2/snapshot.0
zookeeper1 | 2018-12-19 08:17:02,978 [myid:2] - WARN [QuorumPeer[myid=2]/0.0.0.0:2181:Follower@116] - Got zxid 0x100000001 expected 0x1
zookeeper1 | 2018-12-19 08:17:02,981 [myid:2] - INFO [SyncThread:2:FileTxnLog@203] - Creating new log file: log.100000001
zookeeper1 | 2018-12-19 08:17:03,228 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.18.0.6:34182
zookeeper1 | 2018-12-19 08:17:03,258 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /172.18.0.6:34182
kafka0 | [2018-12-19 08:17:09,730] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka0 | [2018-12-19 08:17:09,761] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 12 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka0 | [2018-12-19 08:17:09,804] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:2000,blockEndProducerId:2999) by writing to Zk with path version 3 (kafka.coordinator.transaction.ProducerIdManager)
kafka0 | [2018-12-19 08:17:09,917] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka0 | [2018-12-19 08:17:09,964] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
zookeeper2 | 2018-12-19 08:17:03,709 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x167c58a0ef90000 type:create cxid:0xa zxid:0x10000000f txntype:-1 reqpath:n/a Error Path:/config/changes Error:KeeperErrorCode = NodeExists for /config/changes
zookeeper2 | 2018-12-19 08:17:03,753 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x167c58a0ef90000 type:create cxid:0x10 zxid:0x100000012 txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin
zookeeper2 | 2018-12-19 08:17:03,769 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0xf zxid:0x100000013 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
zookeeper1 | 2018-12-19 08:17:03,297 [myid:2] - INFO [CommitProcessor:2:ZooKeeperServer@673] - Established session 0x267c58a0ed20000 with negotiated timeout 6000 for client /172.18.0.6:34182
kafka2 | [2018-12-19 08:17:02,753] INFO Initiating client connection, connectString=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@6572421 (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:02,846] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
kafka2 | [2018-12-19 08:17:02,852] INFO Opening socket connection to server zookeeper0.hlf_net/172.18.0.3:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka2 | [2018-12-19 08:17:02,911] INFO Socket connection established to zookeeper0.hlf_net/172.18.0.3:2181, initiating session (org.apache.zookeeper.ClientCnxn)
kafka2 | [2018-12-19 08:17:03,110] INFO Session establishment complete on server zookeeper0.hlf_net/172.18.0.3:2181, sessionid = 0x167c58a0ef90000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka2 | [2018-12-19 08:17:03,130] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
kafka2 | [2018-12-19 08:17:05,023] INFO Cluster ID = 9fpp_mD2ROKEcsrLwmpYQw (kafka.server.KafkaServer)
kafka0 | [2018-12-19 08:17:09,965] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka0 | [2018-12-19 08:17:10,300] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
kafka0 | [2018-12-19 08:17:10,366] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka0 | [2018-12-19 08:17:10,370] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
kafka0 | [2018-12-19 08:17:10,381] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka0 | [2018-12-19 08:17:10,412] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka3 | [2018-12-19 08:17:03,288] INFO Session establishment complete on server zookeeper1.hlf_net/172.18.0.2:2181, sessionid = 0x267c58a0ed20000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka3 | [2018-12-19 08:17:03,296] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
kafka3 | [2018-12-19 08:17:04,659] INFO Cluster ID = 9fpp_mD2ROKEcsrLwmpYQw (kafka.server.KafkaServer)
kafka3 | [2018-12-19 08:17:04,712] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka3 | [2018-12-19 08:17:04,975] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka3 | [2018-12-19 08:17:04,991] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
zookeeper2 | 2018-12-19 08:17:03,828 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x12 zxid:0x100000016 txntype:-1 reqpath:n/a Error Path:/admin/delete_topics Error:KeeperErrorCode = NodeExists for /admin/delete_topics
kafka2 | [2018-12-19 08:17:05,107] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka2 | [2018-12-19 08:17:05,593] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka2 | [2018-12-19 08:17:05,621] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka2 | [2018-12-19 08:17:05,641] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka3 | [2018-12-19 08:17:05,042] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka3 | [2018-12-19 08:17:05,344] INFO Log directory '/tmp/kafka-logs' not found, creating it. (kafka.log.LogManager)
kafka3 | [2018-12-19 08:17:05,507] INFO Loading logs. (kafka.log.LogManager)
kafka3 | [2018-12-19 08:17:05,663] INFO Logs loading complete in 114 ms. (kafka.log.LogManager)
kafka3 | [2018-12-19 08:17:06,085] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka3 | [2018-12-19 08:17:06,101] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka3 | [2018-12-19 08:17:07,795] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka2 | [2018-12-19 08:17:05,979] INFO Log directory '/tmp/kafka-logs' not found, creating it. (kafka.log.LogManager)
kafka2 | [2018-12-19 08:17:06,080] INFO Loading logs. (kafka.log.LogManager)
kafka2 | [2018-12-19 08:17:06,165] INFO Logs loading complete in 77 ms. (kafka.log.LogManager)
kafka2 | [2018-12-19 08:17:06,727] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka2 | [2018-12-19 08:17:06,743] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka2 | [2018-12-19 08:17:08,493] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka2 | [2018-12-19 08:17:08,554] INFO [SocketServer brokerId=2] Started 1 acceptor threads (kafka.network.SocketServer)
kafka2 | [2018-12-19 08:17:08,692] INFO [ExpirationReaper-2-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka2 | [2018-12-19 08:17:08,733] INFO [ExpirationReaper-2-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka2 | [2018-12-19 08:17:08,737] INFO [ExpirationReaper-2-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka2 | [2018-12-19 08:17:08,882] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka2 | [2018-12-19 08:17:09,170] INFO [ExpirationReaper-2-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka2 | [2018-12-19 08:17:09,223] INFO [ExpirationReaper-2-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka2 | [2018-12-19 08:17:09,270] INFO [ExpirationReaper-2-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka2 | [2018-12-19 08:17:09,314] INFO [GroupCoordinator 2]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka3 | [2018-12-19 08:17:07,855] INFO [SocketServer brokerId=3] Started 1 acceptor threads (kafka.network.SocketServer)
kafka3 | [2018-12-19 08:17:08,384] INFO [ExpirationReaper-3-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka3 | [2018-12-19 08:17:08,401] INFO [ExpirationReaper-3-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka3 | [2018-12-19 08:17:08,403] INFO [ExpirationReaper-3-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka3 | [2018-12-19 08:17:08,613] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka3 | [2018-12-19 08:17:08,912] INFO [ExpirationReaper-3-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka3 | [2018-12-19 08:17:08,992] INFO [ExpirationReaper-3-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka3 | [2018-12-19 08:17:09,006] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
zookeeper2 | 2018-12-19 08:17:03,851 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x14 zxid:0x100000018 txntype:-1 reqpath:n/a Error Path:/brokers/seqid Error:KeeperErrorCode = NodeExists for /brokers/seqid
zookeeper2 | 2018-12-19 08:17:03,869 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x16 zxid:0x10000001a txntype:-1 reqpath:n/a Error Path:/isr_change_notification Error:KeeperErrorCode = NodeExists for /isr_change_notification
zookeeper2 | 2018-12-19 08:17:03,904 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x18 zxid:0x10000001c txntype:-1 reqpath:n/a Error Path:/latest_producer_id_block Error:KeeperErrorCode = NodeExists for /latest_producer_id_block
zookeeper2 | 2018-12-19 08:17:03,928 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x1a zxid:0x10000001e txntype:-1 reqpath:n/a Error Path:/log_dir_event_notification Error:KeeperErrorCode = NodeExists for /log_dir_event_notification
zookeeper2 | 2018-12-19 08:17:04,023 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.18.0.7:60388
zookeeper2 | 2018-12-19 08:17:04,089 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /172.18.0.7:60388
zookeeper2 | 2018-12-19 08:17:04,117 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.18.0.9:39560
zookeeper2 | 2018-12-19 08:17:04,129 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer@673] - Established session 0x367c58a0f0b0000 with negotiated timeout 6000 for client /172.18.0.7:60388
kafka0 | [2018-12-19 08:17:10,419] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
kafka0 | [2018-12-19 08:17:10,440] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
kafka0 | [2018-12-19 08:17:11,629] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions testchainid-0 (kafka.server.ReplicaFetcherManager)
kafka0 | [2018-12-19 08:17:12,019] INFO Loading producer state from offset 0 for partition testchainid-0 with message format version 2 (kafka.log.Log)
kafka1 | [2018-12-19 08:17:03,761] INFO Client environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/connect-api-1.0.0.jar:/opt/kafka/bin/../libs/connect-file-1.0.0.jar:/opt/kafka/bin/../libs/connect-json-1.0.0.jar:/opt/kafka/bin/../libs/connect-runtime-1.0.0.jar:/opt/kafka/bin/../libs/connect-transforms-1.0.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b32.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.1.jar:/opt/kafka/bin/../libs/jackson-core-2.9.1.jar:/opt/kafka/bin/../libs/jackson-databind-2.9.1.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.9.1.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.1.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.1.jar:/opt/kafka/bin/../libs/javassist-3.20.0-GA.jar:/opt/kafka/bin/../libs/javassist-3.21.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b32.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.25.1.jar:/opt/kafka/bin/../libs/jersey-common-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.25.1.jar:/opt/kafka/bin/../libs/jersey-guava-2.25.1.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.25.1.jar:/opt/kafka/bin/../libs/jersey-server-2.25.1.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-http-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-io-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-security-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-server-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-util-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-1.0.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-1.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-1.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-1.0.0.jar:/opt/kafka/bin/../libs/kafka-tools-1.0.0.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.4.jar:/opt/kafka/bin/../libs/maven-artifact-3.5.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.0.24.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.7.3.jar:/opt/kafka/bin/../libs/scala-library-2.11.11.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.4.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,764] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,764] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,765] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,771] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,771] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,771] INFO Client environment:os.version=4.9.93-linuxkit-aufs (org.apache.zookeeper.ZooKeeper)
kafka3 | [2018-12-19 08:17:09,006] INFO [ExpirationReaper-3-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka3 | [2018-12-19 08:17:09,050] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka3 | [2018-12-19 08:17:09,211] INFO [GroupCoordinator 3]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka3 | [2018-12-19 08:17:09,348] INFO [GroupCoordinator 3]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka3 | [2018-12-19 08:17:09,384] INFO [GroupMetadataManager brokerId=3] Removed 0 expired offsets in 107 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka3 | [2018-12-19 08:17:09,674] INFO [ProducerId Manager 3]: Acquired new producerId block (brokerId:3,blockStartProducerId:1000,blockEndProducerId:1999) by writing to Zk with path version 2 (kafka.coordinator.transaction.ProducerIdManager)
kafka1 | [2018-12-19 08:17:03,772] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,773] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,776] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka2 | [2018-12-19 08:17:09,317] INFO [GroupCoordinator 2]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka2 | [2018-12-19 08:17:09,342] INFO [GroupMetadataManager brokerId=2] Removed 0 expired offsets in 15 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka2 | [2018-12-19 08:17:09,401] INFO [ProducerId Manager 2]: Acquired new producerId block (brokerId:2,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
zookeeper2 | 2018-12-19 08:17:04,165 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /172.18.0.9:39560
zookeeper2 | 2018-12-19 08:17:04,189 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer@673] - Established session 0x367c58a0f0b0001 with negotiated timeout 6000 for client /172.18.0.9:39560
zookeeper2 | 2018-12-19 08:17:04,638 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x1c zxid:0x100000021 txntype:-1 reqpath:n/a Error Path:/cluster Error:KeeperErrorCode = NoNode for /cluster
zookeeper2 | 2018-12-19 08:17:04,804 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x167c58a0ef90000 type:create cxid:0x1c zxid:0x100000024 txntype:-1 reqpath:n/a Error Path:/cluster/id Error:KeeperErrorCode = NodeExists for /cluster/id
kafka0 | [2018-12-19 08:17:12,066] INFO Completed load of log testchainid-0 with 1 log segments, log start offset 0 and log end offset 0 in 206 ms (kafka.log.Log)
kafka1 | [2018-12-19 08:17:03,787] INFO Initiating client connection, connectString=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@6572421 (org.apache.zookeeper.ZooKeeper)
kafka1 | [2018-12-19 08:17:03,973] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
kafka1 | [2018-12-19 08:17:04,011] INFO Opening socket connection to server zookeeper2.hlf_net/172.18.0.4:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka1 | [2018-12-19 08:17:04,113] INFO Socket connection established to zookeeper2.hlf_net/172.18.0.4:2181, initiating session (org.apache.zookeeper.ClientCnxn)
kafka1 | [2018-12-19 08:17:04,206] INFO Session establishment complete on server zookeeper2.hlf_net/172.18.0.4:2181, sessionid = 0x367c58a0f0b0001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka3 | [2018-12-19 08:17:09,878] INFO [TransactionCoordinator id=3] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka3 | [2018-12-19 08:17:09,886] INFO [TransactionCoordinator id=3] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka2 | [2018-12-19 08:17:09,513] INFO [TransactionCoordinator id=2] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka2 | [2018-12-19 08:17:09,519] INFO [TransactionCoordinator id=2] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka2 | [2018-12-19 08:17:09,535] INFO [Transaction Marker Channel Manager 2]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka2 | [2018-12-19 08:17:09,920] INFO Creating /brokers/ids/2 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
kafka2 | [2018-12-19 08:17:09,980] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka2 | [2018-12-19 08:17:09,988] INFO Registered broker 2 at path /brokers/ids/2 with addresses: EndPoint(kafka2,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
kafka2 | [2018-12-19 08:17:09,995] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka2 | [2018-12-19 08:17:10,043] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka2 | [2018-12-19 08:17:10,045] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
kafka2 | [2018-12-19 08:17:10,072] INFO [KafkaServer id=2] started (kafka.server.KafkaServer)
kafka2 | [2018-12-19 08:17:27,386] INFO Replica loaded for partition businesschannel-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka2 | [2018-12-19 08:17:27,399] INFO Replica loaded for partition businesschannel-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka2 | [2018-12-19 08:17:27,603] INFO Loading producer state from offset 0 for partition businesschannel-0 with message format version 2 (kafka.log.Log)
zookeeper2 | 2018-12-19 08:17:05,211 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0001 type:create cxid:0xe zxid:0x100000025 txntype:-1 reqpath:n/a Error Path:/cluster/id Error:KeeperErrorCode = NodeExists for /cluster/id
kafka1 | [2018-12-19 08:17:04,217] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
kafka1 | [2018-12-19 08:17:05,325] INFO Cluster ID = 9fpp_mD2ROKEcsrLwmpYQw (kafka.server.KafkaServer)
kafka1 | [2018-12-19 08:17:05,395] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka1 | [2018-12-19 08:17:05,705] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka1 | [2018-12-19 08:17:05,744] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka1 | [2018-12-19 08:17:05,785] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka1 | [2018-12-19 08:17:06,203] INFO Log directory '/tmp/kafka-logs' not found, creating it. (kafka.log.LogManager)
kafka2 | [2018-12-19 08:17:27,642] INFO Completed load of log businesschannel-0 with 1 log segments, log start offset 0 and log end offset 0 in 123 ms (kafka.log.Log)
zookeeper2 | 2018-12-19 08:17:05,362 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0000 type:create cxid:0xe zxid:0x100000026 txntype:-1 reqpath:n/a Error Path:/cluster/id Error:KeeperErrorCode = NodeExists for /cluster/id
kafka0 | [2018-12-19 08:17:12,097] INFO Created log for partition [testchainid,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1048576, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> -1, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
kafka0 | [2018-12-19 08:17:12,114] INFO [Partition testchainid-0 broker=0] No checkpointed highwatermark is found for partition testchainid-0 (kafka.cluster.Partition)
kafka0 | [2018-12-19 08:17:12,127] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka0 | [2018-12-19 08:17:12,143] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka0 | [2018-12-19 08:17:12,143] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka0 | [2018-12-19 08:17:12,183] INFO [Partition testchainid-0 broker=0] testchainid-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka0 | [2018-12-19 08:17:13,449] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: testchainid-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
kafka0 | [2018-12-19 08:17:27,159] INFO Topic creation {"version":1,"partitions":{"0":[1,0,2]}} (kafka.admin.AdminUtils$)
kafka2 | [2018-12-19 08:17:27,661] INFO Created log for partition [businesschannel,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1048576, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> -1, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
kafka2 | [2018-12-19 08:17:27,674] INFO [Partition businesschannel-0 broker=2] No checkpointed highwatermark is found for partition businesschannel-0 (kafka.cluster.Partition)
kafka2 | [2018-12-19 08:17:27,676] INFO Replica loaded for partition businesschannel-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka2 | [2018-12-19 08:17:27,714] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions businesschannel-0 (kafka.server.ReplicaFetcherManager)
kafka2 | [2018-12-19 08:17:27,862] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
kafka2 | [2018-12-19 08:17:27,912] INFO [ReplicaFetcherManager on broker 2] Added fetcher for partitions List([businesschannel-0, initOffset 0 to broker BrokerEndPoint(1,kafka1,9092)] ) (kafka.server.ReplicaFetcherManager)
kafka3 | [2018-12-19 08:17:10,007] INFO [Transaction Marker Channel Manager 3]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka3 | [2018-12-19 08:17:10,676] INFO Creating /brokers/ids/3 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
kafka3 | [2018-12-19 08:17:10,749] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka1 | [2018-12-19 08:17:06,310] INFO Loading logs. (kafka.log.LogManager)
kafka1 | [2018-12-19 08:17:06,367] INFO Logs loading complete in 52 ms. (kafka.log.LogManager)
kafka1 | [2018-12-19 08:17:06,823] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka1 | [2018-12-19 08:17:06,828] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka1 | [2018-12-19 08:17:08,957] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
zookeeper2 | 2018-12-19 08:17:09,130 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:setData cxid:0x27 zxid:0x100000028 txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch
zookeeper2 | 2018-12-19 08:17:09,936 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x167c58a0ef90000 type:create cxid:0x2e zxid:0x10000002d txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
kafka0 | [2018-12-19 08:17:27,159] INFO Topic creation {"version":1,"partitions":{"0":[3,0,1]}} (kafka.admin.AdminUtils$)
kafka0 | [2018-12-19 08:17:27,220] INFO [KafkaApi-0] Auto creation of topic businesschannel with 1 partitions and replication factor 3 is successful (kafka.server.KafkaApis)
kafka0 | [2018-12-19 08:17:27,635] INFO Replica loaded for partition businesschannel-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka0 | [2018-12-19 08:17:27,699] INFO Loading producer state from offset 0 for partition businesschannel-0 with message format version 2 (kafka.log.Log)
kafka0 | [2018-12-19 08:17:27,709] INFO Completed load of log businesschannel-0 with 1 log segments, log start offset 0 and log end offset 0 in 19 ms (kafka.log.Log)
kafka2 | [2018-12-19 08:17:28,150] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Based on follower's leader epoch, leader replied with an unknown offset in businesschannel-0. High watermark 0 will be used for truncation. (kafka.server.ReplicaFetcherThread)
kafka2 | [2018-12-19 08:17:28,165] INFO Truncating businesschannel-0 to 0 has no effect as the largest offset in the log is -1. (kafka.log.Log)
kafka2 | [2018-12-19 08:17:28,563] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: businesschannel-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
kafka1 | [2018-12-19 08:17:09,200] INFO [SocketServer brokerId=1] Started 1 acceptor threads (kafka.network.SocketServer)
kafka1 | [2018-12-19 08:17:09,294] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka1 | [2018-12-19 08:17:09,327] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka1 | [2018-12-19 08:17:09,332] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka1 | [2018-12-19 08:17:09,567] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka1 | [2018-12-19 08:17:09,900] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
zookeeper2 | 2018-12-19 08:17:09,948 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x167c58a0ef90000 type:create cxid:0x2f zxid:0x10000002e txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
zookeeper2 | 2018-12-19 08:17:10,315 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:delete cxid:0x48 zxid:0x100000031 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
kafka0 | [2018-12-19 08:17:27,724] INFO Created log for partition [businesschannel,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1048576, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> -1, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
kafka0 | [2018-12-19 08:17:27,731] INFO [Partition businesschannel-0 broker=0] No checkpointed highwatermark is found for partition businesschannel-0 (kafka.cluster.Partition)
kafka0 | [2018-12-19 08:17:27,756] INFO Replica loaded for partition businesschannel-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka0 | [2018-12-19 08:17:27,758] INFO Replica loaded for partition businesschannel-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka0 | [2018-12-19 08:17:27,779] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions businesschannel-0 (kafka.server.ReplicaFetcherManager)
kafka0 | [2018-12-19 08:17:28,114] INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
kafka0 | [2018-12-19 08:17:28,149] INFO [ReplicaFetcherManager on broker 0] Added fetcher for partitions List([businesschannel-0, initOffset 0 to broker BrokerEndPoint(1,kafka1,9092)] ) (kafka.server.ReplicaFetcherManager)
kafka0 | [2018-12-19 08:17:28,269] WARN [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Based on follower's leader epoch, leader replied with an unknown offset in businesschannel-0. High watermark 0 will be used for truncation. (kafka.server.ReplicaFetcherThread)
kafka1 | [2018-12-19 08:17:09,944] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka1 | [2018-12-19 08:17:09,989] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka1 | [2018-12-19 08:17:10,060] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2018-12-19 08:17:10,074] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2018-12-19 08:17:10,101] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 20 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
zookeeper2 | 2018-12-19 08:17:10,329 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0000 type:create cxid:0x20 zxid:0x100000032 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
zookeeper2 | 2018-12-19 08:17:10,343 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0000 type:create cxid:0x21 zxid:0x100000033 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
zookeeper2 | 2018-12-19 08:17:10,574 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0001 type:create cxid:0x20 zxid:0x100000035 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
kafka0 | [2018-12-19 08:17:28,306] INFO Truncating businesschannel-0 to 0 has no effect as the largest offset in the log is -1. (kafka.log.Log)
kafka0 | [2018-12-19 08:17:28,523] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: businesschannel-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
kafka3 | [2018-12-19 08:17:10,753] INFO Registered broker 3 at path /brokers/ids/3 with addresses: EndPoint(kafka3,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
kafka3 | [2018-12-19 08:17:10,817] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka3 | [2018-12-19 08:17:11,111] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka3 | [2018-12-19 08:17:11,113] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
kafka3 | [2018-12-19 08:17:11,124] INFO [KafkaServer id=3] started (kafka.server.KafkaServer)
kafka3 | [2018-12-19 08:17:11,740] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka1 | [2018-12-19 08:17:10,152] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:3000,blockEndProducerId:3999) by writing to Zk with path version 4 (kafka.coordinator.transaction.ProducerIdManager)
kafka1 | [2018-12-19 08:17:10,235] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka1 | [2018-12-19 08:17:10,254] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka1 | [2018-12-19 08:17:10,256] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka1 | [2018-12-19 08:17:10,568] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
kafka1 | [2018-12-19 08:17:10,614] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka1 | [2018-12-19 08:17:10,625] INFO Registered broker 1 at path /brokers/ids/1 with addresses: EndPoint(kafka1,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
zookeeper2 | 2018-12-19 08:17:10,585 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0001 type:create cxid:0x21 zxid:0x100000036 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
zookeeper2 | 2018-12-19 08:17:10,714 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x56 zxid:0x100000038 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
zookeeper2 | 2018-12-19 08:17:10,730 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x57 zxid:0x100000039 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
zookeeper2 | 2018-12-19 08:17:11,317 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0001 type:setData cxid:0x29 zxid:0x10000003b txntype:-1 reqpath:n/a Error Path:/config/topics/testchainid Error:KeeperErrorCode = NoNode for /config/topics/testchainid
kafka3 | [2018-12-19 08:17:11,915] INFO Loading producer state from offset 0 for partition testchainid-0 with message format version 2 (kafka.log.Log)
kafka3 | [2018-12-19 08:17:11,970] INFO Completed load of log testchainid-0 with 1 log segments, log start offset 0 and log end offset 0 in 134 ms (kafka.log.Log)
kafka1 | [2018-12-19 08:17:10,637] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka1 | [2018-12-19 08:17:10,682] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka1 | [2018-12-19 08:17:10,687] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
kafka1 | [2018-12-19 08:17:10,767] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka1 | [2018-12-19 08:17:11,356] INFO Topic creation {"version":1,"partitions":{"0":[0,3,1]}} (kafka.admin.AdminUtils$)
kafka1 | [2018-12-19 08:17:11,375] INFO [KafkaApi-1] Auto creation of topic testchainid with 1 partitions and replication factor 3 is successful (kafka.server.KafkaApis)
kafka1 | [2018-12-19 08:17:11,597] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka1 | [2018-12-19 08:17:11,604] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
zookeeper2 | 2018-12-19 08:17:11,331 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0001 type:create cxid:0x2a zxid:0x10000003c txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
zookeeper2 | 2018-12-19 08:17:11,453 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x6f zxid:0x10000003f txntype:-1 reqpath:n/a Error Path:/brokers/topics/testchainid/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/testchainid/partitions/0
zookeeper2 | 2018-12-19 08:17:11,463 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x70 zxid:0x100000040 txntype:-1 reqpath:n/a Error Path:/brokers/topics/testchainid/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/testchainid/partitions
kafka3 | [2018-12-19 08:17:12,045] INFO Created log for partition [testchainid,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1048576, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> -1, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
kafka3 | [2018-12-19 08:17:12,050] INFO [Partition testchainid-0 broker=3] No checkpointed highwatermark is found for partition testchainid-0 (kafka.cluster.Partition)
kafka3 | [2018-12-19 08:17:12,051] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka3 | [2018-12-19 08:17:12,051] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka3 | [2018-12-19 08:17:12,056] INFO [ReplicaFetcherManager on broker 3] Removed fetcher for partitions testchainid-0 (kafka.server.ReplicaFetcherManager)
kafka3 | [2018-12-19 08:17:12,504] INFO [ReplicaFetcher replicaId=3, leaderId=0, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
kafka3 | [2018-12-19 08:17:12,518] INFO [ReplicaFetcherManager on broker 3] Added fetcher for partitions List([testchainid-0, initOffset 0 to broker BrokerEndPoint(0,kafka0,9092)] ) (kafka.server.ReplicaFetcherManager)
zookeeper2 | 2018-12-19 08:17:27,072 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0000 type:setData cxid:0x30 zxid:0x100000044 txntype:-1 reqpath:n/a Error Path:/config/topics/businesschannel Error:KeeperErrorCode = NoNode for /config/topics/businesschannel
kafka1 | [2018-12-19 08:17:11,938] INFO Loading producer state from offset 0 for partition testchainid-0 with message format version 2 (kafka.log.Log)
kafka1 | [2018-12-19 08:17:11,967] INFO Completed load of log testchainid-0 with 1 log segments, log start offset 0 and log end offset 0 in 139 ms (kafka.log.Log)
zookeeper2 | 2018-12-19 08:17:27,079 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0000 type:setData cxid:0x31 zxid:0x100000045 txntype:-1 reqpath:n/a Error Path:/config/topics/businesschannel Error:KeeperErrorCode = NoNode for /config/topics/businesschannel
zookeeper2 | 2018-12-19 08:17:27,096 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0000 type:create cxid:0x32 zxid:0x100000046 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
zookeeper2 | 2018-12-19 08:17:27,106 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0000 type:create cxid:0x33 zxid:0x100000047 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
kafka1 | [2018-12-19 08:17:12,021] INFO Created log for partition [testchainid,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1048576, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> -1, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
kafka3 | [2018-12-19 08:17:12,650] INFO [ReplicaFetcher replicaId=3, leaderId=0, fetcherId=0] Based on follower's leader epoch, leader replied with an offset 0 >= the follower's log end offset 0 in testchainid-0. No truncation needed. (kafka.server.ReplicaFetcherThread)
kafka3 | [2018-12-19 08:17:12,675] INFO Truncating testchainid-0 to 0 has no effect as the largest offset in the log is -1. (kafka.log.Log)
kafka3 | [2018-12-19 08:17:13,547] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: testchainid-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
zookeeper2 | 2018-12-19 08:17:27,119 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0000 type:create cxid:0x35 zxid:0x100000049 txntype:-1 reqpath:n/a Error Path:/config/topics/businesschannel Error:KeeperErrorCode = NodeExists for /config/topics/businesschannel
zookeeper2 | 2018-12-19 08:17:27,176 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x367c58a0f0b0000 type:create cxid:0x38 zxid:0x10000004c txntype:-1 reqpath:n/a Error Path:/brokers/topics/businesschannel Error:KeeperErrorCode = NodeExists for /brokers/topics/businesschannel
zookeeper2 | 2018-12-19 08:17:27,208 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x7c zxid:0x10000004d txntype:-1 reqpath:n/a Error Path:/brokers/topics/businesschannel/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/businesschannel/partitions/0
zookeeper2 | 2018-12-19 08:17:27,222 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x267c58a0ed20000 type:create cxid:0x7d zxid:0x10000004e txntype:-1 reqpath:n/a Error Path:/brokers/topics/businesschannel/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/businesschannel/partitions
kafka1 | [2018-12-19 08:17:12,035] INFO [Partition testchainid-0 broker=1] No checkpointed highwatermark is found for partition testchainid-0 (kafka.cluster.Partition)
kafka1 | [2018-12-19 08:17:12,036] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka1 | [2018-12-19 08:17:12,062] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions testchainid-0 (kafka.server.ReplicaFetcherManager)
kafka1 | [2018-12-19 08:17:12,319] INFO [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
kafka1 | [2018-12-19 08:17:12,358] INFO [ReplicaFetcherManager on broker 1] Added fetcher for partitions List([testchainid-0, initOffset 0 to broker BrokerEndPoint(0,kafka0,9092)] ) (kafka.server.ReplicaFetcherManager)
kafka1 | [2018-12-19 08:17:12,595] INFO [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Based on follower's leader epoch, leader replied with an offset 0 >= the follower's log end offset 0 in testchainid-0. No truncation needed. (kafka.server.ReplicaFetcherThread)
kafka1 | [2018-12-19 08:17:12,608] INFO Truncating testchainid-0 to 0 has no effect as the largest offset in the log is -1. (kafka.log.Log)
kafka1 | [2018-12-19 08:17:13,534] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: testchainid-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
kafka1 | [2018-12-19 08:17:27,278] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions businesschannel-0 (kafka.server.ReplicaFetcherManager)
kafka1 | [2018-12-19 08:17:27,326] INFO Loading producer state from offset 0 for partition businesschannel-0 with message format version 2 (kafka.log.Log)
kafka1 | [2018-12-19 08:17:27,380] INFO Completed load of log businesschannel-0 with 1 log segments, log start offset 0 and log end offset 0 in 55 ms (kafka.log.Log)
kafka1 | [2018-12-19 08:17:27,407] INFO Created log for partition [businesschannel,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1048576, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> -1, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
kafka1 | [2018-12-19 08:17:27,434] INFO [Partition businesschannel-0 broker=1] No checkpointed highwatermark is found for partition businesschannel-0 (kafka.cluster.Partition)
kafka1 | [2018-12-19 08:17:27,437] INFO Replica loaded for partition businesschannel-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka1 | [2018-12-19 08:17:27,439] INFO Replica loaded for partition businesschannel-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka1 | [2018-12-19 08:17:27,441] INFO Replica loaded for partition businesschannel-0 with initial high watermark 0 (kafka.cluster.Replica)
kafka1 | [2018-12-19 08:17:27,454] INFO [Partition businesschannel-0 broker=1] businesschannel-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
kafka1 | [2018-12-19 08:17:27,891] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: businesschannel-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)