# fabric_资产管理系统 (Fabric Asset Management System)

**Repository Path**: lcl1024/fabric_asset_management_system

## Basic Information

- **Project Name**: fabric_资产管理系统 (Fabric Asset Management System)
- **Description**: No description available
- **Primary Language**: Go
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 2
- **Forks**: 1
- **Created**: 2020-03-25
- **Last Updated**: 2021-09-03

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

## Hyperledger

# Installing the tools

*There are two ways to install them.*

## Installing the certificate and block generation tools

1. cryptogen (certificate generation tool)
   1. Enter the `fabric/common/tools/cryptogen` directory.
   2. Run `go install --tags=nopkcs11`.
   3. The binary is generated under `$GOPATH/bin`.
2. configtxgen (block generation tool)
   1. Enter the `fabric/common/tools/configtxgen` directory.
   2. Run `go install --tags=nopkcs11`.
   3. The binary is generated under `$GOPATH/bin`.

## Building Fabric from source

1. Get the Fabric source: `git clone https://github.com/hyperledger/fabric.git` (cloning sometimes fails because of network problems; in that case you can use the Gitee mirror, `git clone https://gitee.com/mirrors/hyperledger-fabric.git`, but remember to **_rename the cloned folder to fabric_** afterwards).
2. Install the dependency:
   * `go get github.com/golang/protobuf/protoc-gen-go`
3. In the fabric directory, run `make release` and `make docker` (on macOS, find the first `GO_LDFLAGS` string in the Makefile and append `-s` to the end of that line).
4. Install the compiled binaries:
   * Linux: under `fabric/release/linux-amd64/bin`
   * macOS: under `fabric/release/darwin-amd64/bin`
   1. Copy the generated binaries to the system directory (`/usr/local/bin`).
   2. Make each binary executable with `sudo chmod -R 775`.

# Generating the configuration files

## Certificate configuration file

1. Write the crypto-config.yaml file:

   ```yaml
   OrdererOrgs:
     - Name: Orderer
       Domain: asset.com
       Specs:
         - Hostname: orderer

   PeerOrgs:
     - Name: gy1_org1
       Domain: org1.asset.com
       Template:
         Count: 3
       Users:
         Count: 4
     - Name: gy1_org2
       Domain: org2.asset.com
       Template:
         Count: 3
       Users:
         Count: 4
     - Name: gy1_org3
       Domain: org3.asset.com
       Template:
         Count: 3
       Users:
         Count: 4
   ```

2. Generate the certificate files:
   `cryptogen generate --config=crypto-config.yaml --output ./crypto-config`

## Generating the genesis block

1. Write the configtx.yaml file:

   ```yaml
   Organizations:
     - &OrdererOrg
       Name: OrdererOrg
       ID: OrdererMSP
       MSPDir: ./crypto-config/ordererOrganizations/asset.com/msp

     - &sy_org1
       Name: SyOrg1MSP
       ID: SyOrg1MSP
       MSPDir: ./crypto-config/peerOrganizations/org1.asset.com/msp
       AnchorPeers:
         - Host: peer0.org1.asset.com
           Port: 7051

     - &sy_org2
       Name: SyOrg2MSP
       ID: SyOrg2MSP
       MSPDir: ./crypto-config/peerOrganizations/org2.asset.com/msp
       AnchorPeers:
         - Host: peer0.org2.asset.com
           Port: 7051

     - &sy_org3
       Name: SyOrg3MSP
       ID: SyOrg3MSP
       MSPDir: ./crypto-config/peerOrganizations/org3.asset.com/msp
       AnchorPeers:
         - Host: peer0.org3.asset.com
           Port: 7051

   Orderer: &OrdererDefaults
     OrdererType: solo
     Addresses:
       - orderer.asset.com:7050
     BatchTimeout: 2s
     BatchSize:
       MaxMessageCount: 10
       AbsoluteMaxBytes: 98 MB
       PreferredMaxBytes: 512 KB
     Kafka:
       Brokers:
         - 127.0.0.1:9092
     Organizations:

   Application: &ApplicationDefaults
     Organizations:

   Profiles:
     TestOrgsOrdererGenesis:
       Orderer:
         <<: *OrdererDefaults
         Organizations:
           - *OrdererOrg
       Consortiums:
         SampleConsortium:
           Organizations:
             - *sy_org1
             - *sy_org2
             - *sy_org3
     TestOrgsChannel:
       Consortium: SampleConsortium
       Application:
         <<: *ApplicationDefaults
         Organizations:
           - *sy_org1
           - *sy_org2
           - *sy_org3
   ```

2. Generate the system genesis block:
   `configtxgen -profile TestOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block`
3. Generate the channel (ledger) genesis transaction:
   `configtxgen -profile TestOrgsChannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel`
4. Generate the anchor peer update files (optional; one per organization, and they must be regenerated for each additional channel; see the script sketch after this list):
   `configtxgen -profile TestOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/SyOrg1MSPanchors.tx -channelID mychannel -asOrg SyOrg1MSP`
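Taken together, the generation steps above can be wrapped in one script. The following is a minimal sketch, assuming `crypto-config.yaml` and `configtx.yaml` sit in the current directory and the `cryptogen`/`configtxgen` binaries from the installation section are on the `PATH`:

```bash
#!/usr/bin/env bash
# Regenerate all crypto material and channel artifacts from scratch.
set -e

# configtxgen locates configtx.yaml through FABRIC_CFG_PATH.
export FABRIC_CFG_PATH=$PWD

rm -rf ./crypto-config ./channel-artifacts
mkdir -p ./channel-artifacts

# 1. Certificates for every organization defined in crypto-config.yaml.
cryptogen generate --config=crypto-config.yaml --output ./crypto-config

# 2. System (orderer) genesis block.
configtxgen -profile TestOrgsOrdererGenesis \
    -outputBlock ./channel-artifacts/genesis.block

# 3. Channel creation transaction.
configtxgen -profile TestOrgsChannel \
    -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel

# 4. Anchor peer updates, one per organization (optional).
for org in SyOrg1MSP SyOrg2MSP SyOrg3MSP; do
    configtxgen -profile TestOrgsChannel \
        -outputAnchorPeersUpdate "./channel-artifacts/${org}anchors.tx" \
        -channelID mychannel -asOrg "$org"
done
```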
## Writing the docker-compose files

1. Single machine, multiple nodes
   1. orderer node service
   2. peer node service
      * couchdb service
      * ca service
   3. cli service
2. Solo, multiple machines
   1. orderer node service
   2. peer node service
      * couchdb service
      * ca service
      * cli service
3. Kafka, multiple machines
   1. ZooKeeper service

      ```yaml
      # Copyright IBM Corp. All Rights Reserved.
      #
      # SPDX-License-Identifier: Apache-2.0
      #
      # ZooKeeper's basic workflow:
      # 1. Elect a leader.
      # 2. Synchronize data.
      # 3. There are many leader-election algorithms, but the criteria the
      #    election must satisfy are the same.
      # 4. The leader must hold the highest transaction ID, similar to root
      #    privileges.
      # 5. A majority of the machines in the cluster must respond to and
      #    follow the elected leader.
      #
      version: '2'

      services:
        zookeeper1:
          container_name: zookeeper1
          hostname: zookeeper1
          image: hyperledger/fabric-zookeeper
          restart: always
          environment:
            # ==================================================================
            # Reference: https://zookeeper.apache.org/doc/r3.4.9/zookeeperAdmin.html#sc_configuration
            # ==================================================================
            #
            # myid
            # The ID must be unique within the ensemble and should have a value
            # between 1 and 255.
            - ZOO_MY_ID=1
            #
            # server.x=[hostname]:nnnnn[:nnnnn]
            # The list of servers that make up the ZK ensemble. The list that is
            # used by the clients must match the list of ZooKeeper servers that
            # each ZK server has. There are two port numbers `nnnnn`. The first
            # is what followers use to connect to the leader, while the second
            # is for leader election.
            - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
          ports:
            - "2181:2181"
            - "2888:2888"
            - "3888:3888"
          extra_hosts:
            - "zookeeper1:172.31.159.137"
            - "zookeeper2:172.31.159.135"
            - "zookeeper3:172.31.159.136"
            - "kafka1:172.31.159.133"
            - "kafka2:172.31.159.132"
            - "kafka3:172.31.159.134"
            - "kafka4:172.31.159.131"
      ```

   2. Kafka configuration

      ```yaml
      # Copyright IBM Corp. All Rights Reserved.
      #
      # SPDX-License-Identifier: Apache-2.0
      #
      # Let K and Z be the number of nodes in the Kafka cluster and the
      # ZooKeeper ensemble respectively:
      #
      # 1) K should be set to at least 4. (As explained in step 4 below, this
      #    is the minimum number of nodes needed for crash fault tolerance:
      #    with 4 brokers you can tolerate one broker crashing; after one
      #    broker stops serving, the channel can still be read and written,
      #    and new channels can still be created.)
      # 2) Z can be 3, 5, or 7. It has to be an odd number to avoid
      #    split-brain scenarios, and larger than 1 to avoid single points of
      #    failure. More than 7 ZooKeeper servers is considered overkill.
      #
      version: '2'

      services:
        kafka1:
          container_name: kafka1
          hostname: kafka1
          image: hyperledger/fabric-kafka
          restart: always
          environment:
            # ==================================================================
            # Reference: https://kafka.apache.org/documentation/#configuration
            # ==================================================================
            #
            # broker.id
            - KAFKA_BROKER_ID=1
            #
            # min.insync.replicas
            # Let the value of this setting be M. Data is considered committed
            # when it is written to at least M replicas (which are then
            # considered in-sync and belong to the in-sync replica set, or
            # ISR). In any other case, the write operation returns an error.
            # Then:
            # 1. If up to N-M replicas -- out of the N (see
            #    default.replication.factor below) that the channel data is
            #    written to -- become unavailable, operations proceed normally.
            # 2. If more replicas become unavailable, Kafka cannot maintain an
            #    ISR set of M, so it stops accepting writes. Reads work without
            #    issues. The channel becomes writeable again when M replicas
            #    get in-sync.
      ```
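With the compose files written, bringing the network up and wiring the channel together follows the standard Fabric CLI flow. Below is a minimal sketch, not taken from this repository: the compose file name `docker-compose-kafka.yaml` and the CLI container name `cli` are assumptions, the paths assume `./channel-artifacts` is mounted into the container's working directory, and the orderer address comes from the configtx.yaml above.

```bash
# Start the containers (the compose file name is an assumption; use yours).
docker-compose -f docker-compose-kafka.yaml up -d

# Enter the CLI container (assumed to be named "cli").
docker exec -it cli bash

# Create the channel from the transaction generated earlier,
# join the current peer to it, and (optionally) push one
# organization's anchor peer update.
peer channel create -o orderer.asset.com:7050 -c mychannel \
    -f ./channel-artifacts/mychannel.tx
peer channel join -b mychannel.block
peer channel update -o orderer.asset.com:7050 -c mychannel \
    -f ./channel-artifacts/SyOrg1MSPanchors.tx
```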