Access a local development Hyperledger Composer REST server on the internet

Developers who want to get a fast start in developing a blockchain solution can quickly begin by using Hyperledger Composer on a local development system. After a business network is created and deployed locally, it’s also possible to deploy a REST server that exposes the business network for easy access by a front-end application.

But what happens when the target front-end application is for a mobile system or is running on a cloud runtime environment, such as a Cloud Foundry application or a Docker container, and the app needs to access the local blockchain business network? Or in general, you might be looking for a way to establish a connection to a network service that is running on a host that has outbound internet access, but it doesn’t support inbound access. The network service might be behind a firewall or on a dynamic IP address.

You can solve these problems by creating an internet-based proxy that accepts network connections and then forwards them to the service of interest. In this tutorial, you learn how to create this proxy for a Hyperledger Composer REST server by using the IBM Secure Gateway service.

Learning objectives

Complete this tutorial to understand how to create a REST server for a Blockchain business network and how to make it available on the internet. The tutorial shows how to configure a simple business network using Hyperledger Composer running on a local virtual machine. Then, you use the IBM Secure Gateway service to provide an internet-reachable network service that proxies connections to the REST server on the virtual machine.

Prerequisites

To complete this tutorial, you need:

  • A workstation that can run Vagrant with a hypervisor such as VirtualBox
  • An IBM Cloud account (a paid account or a Trial promo code is required for the Secure Gateway service)

This tutorial does not cover the development of blockchain business networks using Hyperledger Composer. For more information about developing those blockchain business networks, see the Hyperledger Composer tutorials.

Estimated time

The steps in this tutorial take about 30-45 minutes to complete. Add time to create the desired business network, if you are creating one from scratch.

Steps

Complete the following steps to create a local virtual machine (VM) that is capable of serving a Composer business network as a REST API endpoint. First, you use Vagrant to configure a VM with Docker support. After the VM is configured, continue by following the Hyperledger Composer set-up steps for a local environment at Installing the development environment. Finally, after you have the Composer REST server running locally, configure a Secure Gateway instance to expose the API on the IBM Cloud.

Configure a VM with Docker support

  1. Create a directory for the project:

    mkdir composer

  2. Copy the contents of the Vagrantfile into the directory.

  3. Start the Vagrant image from the directory (this might take a little while):

    vagrant up

  4. After the VM is up, log in to start configuring Hyperledger Fabric:

    vagrant ssh
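The tutorial refers to a Vagrantfile without reproducing it. The following is a minimal sketch of what the steps assume; the box choice, memory size, and provisioner are assumptions, while the forwarded ports 8080 and 3000 match the mappings mentioned later for Playground and the REST server.

```ruby
# Minimal Vagrantfile sketch (assumed, not the exact file from the tutorial).
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"                            # Composer 0.20 prereqs target Ubuntu 16.04
  config.vm.network "forwarded_port", guest: 8080, host: 8080  # composer-playground
  config.vm.network "forwarded_port", guest: 3000, host: 3000  # composer-rest-server
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "4096"                                         # Fabric + Composer need a few GB
  end
  config.vm.provision "docker"                                 # built-in Docker provisioner
end
```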

Set up Hyperledger Composer

  1. Follow the prerequisite setup steps for a local Hyperledger Composer environment for Ubuntu at Installing prerequisites. Complete these steps as an ordinary user, not as root, on the VM. Log out from vagrant with exit and reconnect with vagrant ssh when prompted.

    curl -O https://hyperledger.github.io/composer/latest/prereqs-ubuntu.sh
    chmod u+x prereqs-ubuntu.sh
    ./prereqs-ubuntu.sh

  2. After you finish installing the prerequisites, set up the Hyperledger Composer local development environment as described at Installing the development environment, starting with the CLI tools.

    npm install -g composer-cli@0.20
    npm install -g composer-rest-server@0.20
    npm install -g generator-hyperledger-composer@0.20
    npm install -g yo

  3. Install Composer Playground.

    npm install -g composer-playground@0.20

  4. Optional: Follow the steps to set up an IDE in Step 3 of Installing the development environment.

  5. Complete Step 4 from the set-up instructions to install the Hyperledger Fabric Docker images.

    mkdir ~/fabric-dev-servers && cd ~/fabric-dev-servers
    curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.tar.gz
    tar -xvf fabric-dev-servers.tar.gz
    cd ~/fabric-dev-servers
    export FABRIC_VERSION=hlfv12
    ./downloadFabric.sh

  6. Proceed with the steps under “Controlling your dev environment” to start the development fabric and create the PeerAdmin card:

    cd ~/fabric-dev-servers
    export FABRIC_VERSION=hlfv12
    ./startFabric.sh
    ./createPeerAdminCard.sh

  7. Start the web app for Composer (“Playground”). Note: Starting the web app does not automatically open a browser session as described in the documentation, because the command is running inside the VM instead of on the workstation.

    composer-playground

    After the service starts, open http://localhost:8080/ in a browser tab (this local port is mapped by the Vagrantfile configuration to the VM).

  8. Develop a business network and test it in the Composer Playground as usual. If you’ve never used Composer Playground, the Playground Tutorial is a good place to start.

  9. After you have completed testing the intended business network, deploy the Composer REST server, providing the card for the network owner (admin@marbles-network in this example). See Step 5 from Developer tutorial for creating a Hyperledger Composer solution for explanations on the responses to the input prompts. The Secure Gateway connectivity steps in this tutorial were tested with the following options.

     composer-rest-server
     ? Enter the name of the business network card to use: admin@marbles-network
     ? Specify if you want namespaces in the generated REST API: always use namespaces
     ? Specify if you want to use an API key to secure the REST API: No
     ? Specify if you want to enable authentication for the REST API using Passport: No
     ? Specify if you want to enable the explorer test interface: Yes
     ? Specify a key if you want to enable dynamic logging:
     ? Specify if you want to enable event publication over WebSockets: Yes
     ? Specify if you want to enable TLS security for the REST API: No

    To restart the REST server using the same options, issue the following command:

     composer-rest-server -c admin@marbles-network -n always -u true -w true
     Discovering types from business network definition ...
     Discovering the Returning Transactions..
     Discovered types from business network definition
     Generating schemas for all types in business network definition ...
     Generated schemas for all types in business network definition
     Adding schemas for all types to Loopback ...
     Added schemas for all types to Loopback
     Web server listening at: http://localhost:3000
     Browse your REST API at http://localhost:3000/explorer

    Keep the REST server running in the terminal. When finished with the REST API server, you can use Ctrl-C in the terminal to terminate the server.

  10. Test the REST API server by opening http://localhost:3000/explorer in a browser.
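The browser check in the last step can also be automated. The following is a hedged sketch of a readiness probe against the REST server; the /api/system/ping route is the ping endpoint generated by composer-rest-server, and the helper name and retry count are illustrative.

```shell
# Poll the Composer REST server until it answers, or give up after N tries.
check_rest_server() {
  local base_url="${1:-http://localhost:3000}"
  local tries="${2:-5}"
  local i
  for i in $(seq 1 "$tries"); do
    if curl -s -f "${base_url}/api/system/ping" >/dev/null; then
      echo "REST server is up at ${base_url}"
      return 0
    fi
    sleep 1
  done
  echo "REST server did not respond at ${base_url}" >&2
  return 1
}
```

Run it as `check_rest_server http://localhost:3000` inside the VM once the server is started.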

Configure a Secure Gateway instance to expose the API on the cloud

  1. Open the IBM Cloud catalog entry for Secure Gateway to create a Secure Gateway instance in your IBM Cloud account. You need either a paid account or a Trial promo code. The Essentials service plan is sufficient for implementing traffic forwarding for a development Hyperledger Fabric network, with a capacity of 500 MB/month of data transfer. Verify that this plan is selected, and click on Create.

  2. Click Add Gateway in the Secure Gateway Service Details panel. Enter a name in the panel, for example: “Blockchain”. Keep the other gateway default settings of Require security token to connect clients and Token expiration before 90 days. Click the Add Gateway button to create the gateway.

  3. Click the Connect Client button on the Secure Gateway Service Details panel to begin setting up the client that runs on the VM and connects to the Secure Gateway service.

  4. Choose Docker as the option to connect the client, and copy the provided docker run command with the Gateway ID and security token.

  5. Open a new local terminal window, change directory to the folder with the Vagrantfile, and then connect to the VM using vagrant ssh. Paste the copied docker run command into this terminal to start the Secure Gateway client, which leaves a CLI running in the terminal. Do not close this terminal. After the container starts, you see messages like the following example, indicating a successful connection:

     [2018-10-20 18:34:01.451] [INFO] (Client ID 1) No password provided. The UI will not require a password for access
     [2018-10-20 18:34:01.462] [WARN] (Client ID 1) UI Server started. The UI is not currently password protected
     [2018-10-20 18:34:01.463] [INFO] (Client ID 1) Visit localhost:9003/dashboard to view the UI.
     [2018-10-20 18:34:01.760] [INFO] (Client ID 11) Setting log level to INFO
     [2018-10-20 18:34:02.153] [INFO] (Client ID 11) The Secure Gateway tunnel is connected
     [2018-10-20 18:34:02.304] [INFO] (Client ID HxzoYUW6z74_PZ9) Your Client ID is HxzoYUW6z74_PZ9
     HxzoYUW6z74_PZ9>

    After the client has started, close the web UI panel to display the Secure Gateway service details.

  6. In another terminal on the vagrant VM, use the ip address show command to find the IP address of the VM. Many interfaces are listed; select the one that begins with enp or eth. In the examples that follow, the VM IP address is 10.0.2.15.

  7. Return to the terminal for the Secure Gateway client docker container, and create an ACL entry that allows traffic to the Composer REST API server running on port 3000.

    acl allow 10.0.2.15:3000 1

  8. Define a basic HTTP connection through the Secure Gateway service to the Composer REST API server. For more advanced security settings, refer to the Secure Gateway documentation. Click on the Destinations tab in the Secure Gateway service details. Next, click on the “+” icon to open the Add Destination wizard. Select the Guided Setup option.

  9. For the “Where is your resource located?” item, select On-premises and then click on Next.

  10. For “What is the host and port of your destination?”, enter the VM IP address from step 6 as the hostname and 3000 as the port. Then click on Next.

  11. For the connection protocol, select HTTP and then click on Next.

  12. For the destination authentication, select None and then click on Next.

  13. Skip entry of the IP address and ports in the “… make your destination private, add IP table rules below” step, and click on Next.

  14. Enter a name like Composer REST server for the name of the destination and click on Add Destination.

  15. Click on the gear icon on the tile of the destination that was just created to display the details. Copy the Cloud Host : Port value, which looks something like cap-sg-prd-2.integration.ibmcloud.com:17870. This host and port is the cloud endpoint that can be accessed; traffic is forwarded by the Secure Gateway service to the running Composer REST server.

  16. Append /explorer after the host and port and open this URL in a web browser. For the example, the final URL is: http://cap-sg-prd-2.integration.ibmcloud.com:17870/explorer/ .
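The interface lookup from step 6 can be scripted. The following is a small sketch; the helper name is illustrative, and the enp/eth prefix filter mirrors the step's advice.

```shell
# Print the IPv4 address of the first enp*/eth* interface reported by
# `ip -4 -o addr show` (one-line-per-address output; field 2 is the
# interface name, field 4 is the address with prefix length).
vm_ip() {
  ip -4 -o addr show | awk '$2 ~ /^(enp|eth)/ {split($4, a, "/"); print a[1]; exit}'
}
```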

Summary

At this point, you should be able to access the Composer REST server and perform actions in the deployed business network, using the host name and port from the Secure Gateway destination. This server is reachable from any system with internet access; it is best suited to development and testing, not production use.
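For example, the business network can be queried through the cloud endpoint with plain HTTP. This is a sketch assuming the sample marbles-network; the generated route name (org.hyperledger_composer.marbles.Marble) is an assumption, so substitute the routes shown on your own explorer page.

```shell
# Fetch all Marble assets from the REST server behind the given endpoint.
list_marbles() {
  local endpoint="$1"   # e.g. http://cap-sg-prd-2.integration.ibmcloud.com:17870
  curl -s "${endpoint}/api/org.hyperledger_composer.marbles.Marble"
}
```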

You can develop the application locally on the host (instead of within the vagrant VM) without going out to the cloud endpoint. The Vagrantfile maps the local port 3000 to the Composer REST server. This mapping allows you to use the http://localhost:3000/ endpoint when developing your application locally. When deploying to the cloud (as a Cloud Foundry application, or Docker container), switch the endpoint to the cloud URL (for example http://cap-sg-prd-2.integration.ibmcloud.com:17870/).

Hyperledger Composer can generate a basic Angular interface to the business network. This step is described in Writing Web Applications.

To see how to deploy this Angular application to Cloud Foundry using DevOps, check out the Continuously deploy your Angular application tutorial. There are two changes to the tutorial for the generated Angular application. First, use the full project contents by leaving the Build Archive Directory empty in the Delivery Pipeline Build stage. Second, the application reads the REST API server endpoint from the environment; set this in the Delivery Pipeline Deploy stage by adding an environment property of REST_SERVER_URL with the cloud URL as its value.
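The endpoint-from-environment pattern can be sketched as a tiny shell helper. The variable name REST_SERVER_URL matches the Deploy stage property above, and the fallback matches the local port mapping.

```shell
# Resolve the REST endpoint: use REST_SERVER_URL when set (cloud deploys),
# otherwise fall back to the local development mapping.
rest_server_url() {
  echo "${REST_SERVER_URL:-http://localhost:3000}"
}
```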

Related:

Be Ready for IoT with Dell Technologies IoT Connected Bundles

The age of IoT is upon us…

Given this new era, an important question to ask ourselves and our customers is “are you ready?”

According to IDC estimates, there will be over 80 billion IoT connected devices by 2025, generating over 162 zettabytes of data. Growth in IoT is exploding, and across the globe companies are eager to implement IoT solutions.

Dell Technologies understands the power of IoT and has been hard at work to help customers of all sizes implement and scale their IoT initiatives. Proof of our dedication to IoT is our recently launched IoT Connected Bundles. Exclusively for channel partners, these use-case-specific IoT Connected Bundles enable customers to deploy IoT solutions that are purpose-built, approachable, interoperable, and market-validated.

What is an IoT Connected Bundle?

Building IoT solutions is complicated: they typically mix components like sensors, gateways, partner software applications, and network connections to create an end-to-end solution.

With IoT Connected Bundles, most of the guesswork is removed.

The IoT Connected Bundles are turnkey solutions. In a single box, the bundles provide all necessary components for successful IoT deployment targeted to a specific use case.

What use cases do the IoT Connected Bundles target?

For the first seven IoT Connected Bundles, we’ve partnered with domain experts in:

How can I order the IoT Connected Bundles?

The first four bundles were launched at VMworld this past August and include the following partners: ELM Fieldsight, Modius, V5 Systems, and ActionPoint.

These bundles are currently available through TechData.

How can I learn more?

According to Joyce Mullen, there is a “goldmine” in IoT for partners who choose to invest in understanding the environment and outcomes.

With IoT Connected Bundles, partners build IoT competency, granting access to the much-anticipated IoT gold rush.

Come hear more about the IoT Connected Bundles on our December 4th, 2018 webcast hosted by Chris Wolff, Dell Technologies Head of Global OEM & IoT Partnerships. Click here to register.


unisphere for powermax: no ping after deployment

hi,

I have recently deployed the SUSE OVF template for Unisphere for PowerMax 9.0.0.6 and I have a connection issue.

I can’t log in via browser to the server.

I connect to the server via console using the cseadmin account; pinging the default gateway gives me: Destination Host Unreachable.

The IP was not taken, and the default gateway is configured properly.

Also, when pinging from a server within the same subnet, I get the same error.

Any thoughts on what the problem is?


ScaleIO – Unable to upgrade SIO using Gateway “Could not connect to…”

Article Number: 488740 Article Version: 2 Article Type: Break Fix



ScaleIO 2.0.1,ScaleIO 2.0.0,ScaleIO 1.32.6,ScaleIO 1.32.5

Trying to upgrade ScaleIO from 1.32.x to 2.0.0.x using the Gateway/Installation Manager fails with:

Command failed: Could not connect to <IPs>. Ensure that the relevant service is running and that the server can communicate with the node.

The step that failed was at “switch to single mode”.

User-added image

The Gateway/Installation Manager MUST be able to connect to ALL IPs in the configuration. If any IP is not reachable (i.e., a Management IP), the upgrade will fail.

In this situation, the customer was not able to communicate with the Management IPs.

Make sure that the Gateway/Installation Manager is able to communicate with ALL the configured IPs.
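A quick pre-upgrade reachability sweep can be scripted on the gateway host. This is a sketch: the helper name is illustrative, and the ping flags are the common Linux ones.

```shell
# Ping each node IP once; report OK/FAIL per IP and return non-zero if any
# node is unreachable.
check_nodes() {
  local failed=0 ip
  for ip in "$@"; do
    if ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
      echo "OK   $ip"
    else
      echo "FAIL $ip"
      failed=1
    fi
  done
  return $failed
}
```

Run it as `check_nodes 10.0.0.1 10.0.0.2 ...` with the IPs from your own configuration before starting the upgrade.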


ScaleIO: How to prevent certain ScaleIO nodes from being upgraded by the Installation Manager

Article Number: 493735 Article Version: 3 Article Type: Break Fix



ScaleIO 2.0.1.1,ScaleIO 2.0.1,ScaleIO 2.0.0.1,ScaleIO 2.0.0.2,ScaleIO 2.0.0.3

Description

In certain cases, a customer wants to upgrade only part of the ScaleIO system (i.e., due to split back-end and front-end management networks); by default, the IM fetches the list of all ScaleIO nodes and tries to upgrade them all, which might fail if there is (deliberately) no communication between the IM and some of these nodes.

In such a case, we can instruct the IM to ignore certain cluster elements during the upgrade (for example, in a 2-layer configuration we might want to ignore all SDCs).

Steps

As described in the ScaleIO Installation Guide, change the /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/gatewayUser.properties file in the following way:

To exclude some nodes while upgrading with the Installation Manager, list the IP addresses to ignore on the im.ip.ignore.list line. 
Example: im.ip.ignore.list=10.0.0.1,10.0.0.2,... 

Please note that only full IP addresses are supported; there is no option to use wildcards or subnets.

After the change, restart the gateway service.
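Appending the property can be scripted. This is an illustrative sketch only: the helper simply appends a line, so remove any existing im.ip.ignore.list line from the file first.

```shell
# Append an im.ip.ignore.list entry (comma-joined IPs) to a properties file.
set_ignore_list() {
  local file="$1"; shift
  local ips
  ips=$(IFS=,; printf '%s' "$*")   # join remaining args with commas
  printf 'im.ip.ignore.list=%s\n' "$ips" >> "$file"
}
```

For example: `set_ignore_list /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/gatewayUser.properties 10.136.243.115 10.136.243.116`, then restart the gateway service.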

EXAMPLE:

Before the change:

User-added image

Added the following to the ignore list and restarted the GW:

im.ip.ignore.list=10.136.243.115,10.136.243.116 

After the change:

User-added image


Three Steps to Cloud-Enabling YOUR Edge



The edge computing frontier is a bit like the wild West—technology providers trying to stake their claim of the $70B+ edge computing opportunity (by 2022/2023 according to ACG Research), customers trying to figure out how to move into this new territory, and the industry still needing to do considerable work to specify edge nomenclatures, standards, and architecture blueprints. As a result, the notion of the “edge” is still very much a matter of perspective: an IoT sensor gateway on an airplane helping predict engine failure(s); a low-power Bluetooth beacon in a mall (proximity-triggered couponing); a cell site … READ MORE





How to Manually Change the Default Gateway From the Management Network to the Data Network on the NetScaler Instance of a CloudBridge 4000/5000 Appliance

When a CloudBridge 4000/5000 appliance is provisioned for the first time, the default gateway of the NetScaler instance is configured by default on the “management” subnet.

The management network (interface 0/1: XenServer, SVM, CloudBridge instance, NetScaler instance) and the data/traffic network (data/traffic interfaces: CloudBridge apA/apB/apC ports) should be segregated on two different networks.

For more information refer to Citrix Documentation – Deployment Worksheet of CloudBridge 4000/5000 appliance.

This article describes the steps to manually change the default gateway of the NetScaler instance of a CloudBridge 4000/5000 appliance from the management network to the data network using the command line interface (CLI).


Re: setting up ROUTE for new interface on VNX 5200

Thanks, Rainer, for picking this up for us. We recently installed a new eNAS, but when we created the interfaces used for our CIFS filers, we found that eNAS would automatically create a route for each one, just like kevlee mentioned in the topic.

For example, we create:

interface with ip 192.168.100.43/24 – vlan43, it will automatically create a route to 192.168.100.0/24 via 192.168.100.43.

interface with ip 192.168.101.43/24 – vlan143, it will automatically create a route to 192.168.101.0/24 via 192.168.101.43

interface with ip 192.168.102.43/24 – vlan 243, it will automatically create a route to 192.168.102.0/24 via 192.168.102.43

our DNS/AD is 192.168.150.50 – vlan 50

The physical link between eNAS and the Ethernet switch is running with 802.1Q. For each VLAN, its gateway is on the switch side.

I suppose we should tell eNAS the gateway IP of each VLAN, but we can’t, as the automatically created route is already there (192.168.10x.0/24 via 192.168.10x.43).

Per your advice, we defined a default route per Data Mover. In that case, the eNAS knows how to forward the traffic out, but each packet has a VLAN ID tagged on it, and the switch/firewall drops the packet because of the VLAN ID.

In theory, host x (vlan x) need to talk to host y (vlan y),

the traffic flow is: host x -> gateway-vlan x -> gateway-vlan y -> host y

If we define 192.168.100.1 as the gateway (0.0.0.0/0 via 192.168.100.1), we have no issues with communication from 192.168.100.43, but what about 192.168.101.43 and 192.168.102.43? Those packets will be dropped because of the VLAN ID.

Either our deployment/understanding is wrong, or we should be able to define a gateway for each interface, such as 192.168.100.1 for vlan 43, 192.168.101.1 for vlan 143, and 192.168.102.1 for vlan 243.

I know each physical DM has a default/global CIFS server for antivirus, etc. I think the default route should be for that default/global CIFS server. But what about the other CIFS servers on VDMs? Do we have a way to define a gateway/default route for each of them?

Thanks,

John


Re: eNAS static route issue

Thanks, Rainer. It turned out that we should NOT use the ‘Run_Ping_Test’ from eNAS to ping an external IP from an interface. It seems eNAS doesn’t rely on it when it’s working; that was a wrong verification method we were told to use.

If we try, the issue is still there: the eNAS puts the traffic on the default route, and the packet is dropped due to a VLAN ID mismatch.

My understanding,

eNAS contains a CIFS server, and the interface IP, such as 192.168.10.50, is linked to that CIFS server. The CIFS server doesn’t have its own gateway set (for example, 192.168.10.1). Like a Windows server, it only has its IP set, with no gateway defined. A gateway is used to forward traffic to another subnet. eNAS does have the gateway connected; it’s on the other side of the 802.1Q link between eNAS and the switch.

It does make sense to define a default route for the local interface/subnet. The traffic flow would be:

192.168.10.50 -> 192.168.10.1 -> 192.168.20.1 (DC’s gateway) -> 192.168.20.50 (DC)





I still don’t know, when it needs to start communicating with the outside (such as DNS or the authentication/DC), what source address is used. I believe it’s not 192.168.10.50. It’s like there is a proxy there for all interfaces that forwards the traffic to the default route. It’s like on the 802.1Q trunk we have VLANs 10, 20, 30, 40, and 50, each VLAN’s gateway resides on the switch side, but we only use vlan 30 (192.168.30.1) as the default route for everything.

I also don’t understand your statement that ‘traffic to all IPs for that subnet 192.168.10.0 - 192.168.10.255 will be sent out through the local interface 192.168.10.50’.

192.168.10.50 is a host IP for the CIFS server; it’s supposed to receive nothing but traffic destined to 192.168.10.50. What’s the meaning of the route 192.168.10.0/24 via 192.168.10.50?




ProxySG Split Gateway


Hi all,

I want to ask: can ProxySG be deployed to use 2 gateways?

I set 2 interfaces:
interface 0:0 -> 192.168.x.x
interface 1:0 -> 172.16.x.x

I have set a gateway for each (192 and 172) in the same routing domain.

The goal is to use the 172 gateway for video traffic, and the 192 gateway for everything else.

Is there any documentation or use case for achieving this goal?

Regards.
