WANdisco Integrates DConE into Permissioned Blockchain Frameworks

New technology offers flawless execution within permissioned blockchain digital ledger technologies such as Hyperledger Fabric and R3 Corda

SAN RAMON, Calif., June 29, 2021 /PRNewswire/ — WANdisco (LSE: WAND), the LiveData company, announced today the availability of its Distributed Coordination Engine (DConE) to provide highly secure and reliable transactionality for permissioned blockchains (i.e., digital ledgers) within the financial sector and other business-critical applications and use cases.

This new use of DConE, the only commercialized version of the Paxos algorithm, will offer a guaranteed fault-tolerant alternative to the Raft algorithm, commonly used in digital ledger technologies such as Hyperledger Fabric and R3 Corda. Serving as the basis of WANdisco’s solutions, DConE has been previously used to provide:

  • Multi-site replication of Subversion and Git repositories

  • The migration and replication of business-critical data to cloud service providers such as Microsoft Azure and AWS

  • Keeping data in Hadoop-compatible file systems, Apache Hive, and Databricks consistent across on-premises and multi-cloud environments

“After our success in coordinating software collaboration, and LiveData migration and replication to the cloud, we saw an opportunity to provide DConE as a solution for enterprises requiring high availability of their distributed ledgers,” said Dr. Yeturu Aahlad, Co-founder and Chief Scientist, WANdisco. “DConE is based on the Paxos algorithm, which is the gold standard of consensus algorithms. The new application is a natural extension of this unique and powerful technology, and gives blockchain application customers the peace of mind that their systems are guaranteed to never break.”

Dr. Aahlad commercialized the Paxos algorithm to make the internet more efficient, trusted and reliable for business-critical use cases. Aahlad and WANdisco’s other co-founder, CEO David Richards, realized the cloud was going to play a huge part in advanced analytics. DConE was created to be the differentiated technology underpinning WANdisco LiveData Cloud Services, the only automated cloud migration platform on the market recommended by Microsoft and AWS for Hadoop-to-Cloud migration.

WANdisco’s patented consensus technology works by requiring the storage points across its network to approve any transaction before it is entered. Multiple transactions can be processed simultaneously, enabling a more scalable distributed ledger with shorter transaction lag times than typical blockchain frameworks.
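As a rough illustration of the quorum-style approval described above (a simplified sketch, not WANdisco's actual DConE implementation; the node and function names are hypothetical), a transaction is accepted only once a majority of the participating nodes agree to it:

    # Simplified illustration only, not WANdisco's DConE code.
    # A proposed transaction is accepted once a strict majority of nodes approve it,
    # so agreement survives the failure of a minority of nodes.

    def has_quorum(approvals: int, total_nodes: int) -> bool:
        return approvals > total_nodes // 2

    def propose(transaction, nodes) -> bool:
        # Each node votes on the proposed transaction; node.approve() is a
        # hypothetical stand-in for the real acceptance protocol.
        approvals = sum(1 for node in nodes if node.approve(transaction))
        return has_quorum(approvals, len(nodes))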

“We are very excited to offer DConE as a new consensus option for permissioned blockchains and digital ledger technologies,” said Richards. “This new application will continue WANdisco’s mission to deliver technology that empowers customers to benefit from the freedom of data, regardless of location and when it’s needed. We look forward to developing partnerships to further make this technology available in other frameworks.”

About WANdisco

WANdisco is the LiveData company. WANdisco solutions enable enterprises to create an environment where data is always available, accurate and protected, creating a strong backbone for their IT infrastructure and a bedrock for running consistent, accurate machine learning applications. With zero downtime and zero data loss, WANdisco LiveData Platform keeps geographically dispersed data at any scale consistent between on-premises and cloud environments allowing businesses to operate seamlessly in a hybrid or multi-cloud environment. For more information on WANdisco, visit www.wandisco.com.

Media Contact

Josh Turner

Silicon Valley Communications

turner@siliconvpr.com

+1 (917) 231-0550

View original content: https://www.prnewswire.com/news-releases/wandisco-integrates-dcone-into-permissioned-blockchain-frameworks-301321880.html

SOURCE WANdisco


Running a smart contract on the blockchain network

Summary

Considering that Hyperledger Fabric has several ordering service implementations (including Solo, Kafka and Raft), you, the developer, must understand the advantages and disadvantages of each implementation before choosing a design. As the new standard for production blockchain networks, the Raft consensus algorithm provides a crash fault-tolerant ordering service implementation that is easier to configure and manage than Kafka. Most importantly, Raft allows different organizations to contribute nodes to the ordering service, allowing for a more decentralized network architecture.

Description

Given that there are many possible ways that nodes in a blockchain network can reach agreement (or consensus), and that agreement is essential for a distributed computing network, it is not surprising that the debate about the most efficient consensus algorithm is often heated, even religious. At the heart of consensus algorithms is a hard, well-known distributed computing problem: how can a group of computers agree on a result (reach consensus) when we know that individual computers are unreliable? Although there is no perfect way to solve this problem, the de facto standard since 1989 has been the Paxos algorithm. The problem with Paxos is that it can take doctoral students years to master. Even the most brilliant engineers who try to implement the algorithm in practice find it difficult to understand, and thus difficult to implement correctly.

That’s where Raft comes in. Designed as an easier-to-understand alternative to Paxos, Raft is now used in some of the most successful software projects, such as Docker. Since the launch of Hyperledger Fabric, consensus has been designed to be pluggable, so you, the developer, can choose which type of consensus your ordering nodes will use. Raft allows for a much easier configuration than Kafka, a more decentralized approach, since several organizations can contribute nodes to the ordering service, and greater crash fault tolerance than Solo, which has only a single ordering node. This code pattern helps you understand how to build and deploy a smart contract on a Hyperledger Fabric network running Raft, and lets you test the network's fault tolerance by stopping and starting some of the ordering nodes.
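To make the crash fault tolerance comparison concrete: a Raft ordering service needs a majority of ordering nodes up to make progress, so a cluster of n orderers tolerates (n - 1) // 2 crashes, while a single Solo node tolerates none. The following is a minimal sketch of that arithmetic (illustrative only, not part of the code pattern itself):

    # Illustrative sketch: quorum sizing for a Raft ordering service.
    # A cluster of n orderers needs a majority (n // 2 + 1) of nodes to stay
    # available, so it tolerates (n - 1) // 2 crashed nodes.

    def raft_crash_tolerance(n_orderers: int) -> int:
        return (n_orderers - 1) // 2

    for n in (1, 3, 5, 7):
        print(f"{n} orderer(s): quorum = {n // 2 + 1}, tolerated crashes = {raft_crash_tolerance(n)}")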

When you complete this code pattern, you will understand:

  • How the Raft algorithm works

  • How to build and run a Raft ordering service with several organizations in Hyperledger Fabric

  • How to submit transactions and run a blockchain network that uses a Raft ordering service

  • How to test the fault tolerance of the ordering service by taking down (stopping) one of the ordering nodes (see the sketch below)
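One simple way to exercise that last point is to stop a single ordering node's container, submit a transaction, and confirm it still commits before bringing the node back. The sketch below assumes a Docker-based network; the container names are hypothetical defaults borrowed from common Fabric samples, not necessarily those used by this code pattern:

    # Hypothetical sketch: exercise Raft fault tolerance by stopping one orderer.
    # Container names are assumptions; adjust them to your own network.

    import subprocess
    import time

    ORDERERS = ["orderer.example.com", "orderer2.example.com", "orderer3.example.com"]

    def docker(*args: str) -> None:
        subprocess.run(["docker", *args], check=True)

    # With 3 orderers, the remaining 2 still form a majority, so transactions
    # submitted while one node is down should still be ordered and committed.
    docker("stop", ORDERERS[0])
    time.sleep(5)
    # ... submit a test transaction here with your client application or the peer CLI ...
    docker("start", ORDERERS[0])  # the stopped node rejoins and catches up from the leader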




How to Enable Session Reliability on NetScaler in High Availability

This article describes how to enable session reliability on NetScaler in high availability.

Background

When a high availability failover occurs, ICA sessions are disconnected. To avoid ICA session disconnection on high availability failover, you can configure Session Reliability.
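On supported builds this is typically done from the NetScaler command line by setting the ICA parameter for session reliability on HA failover (for example, set ica parameter -EnableSRonHAFailover YES to enable, and NO to disable). The exact parameter name is an assumption here rather than something stated in this article, so verify it against the documentation for your firmware version before applying it.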

Points to Note

  • NetScaler appliances should be running software version 11.1 build 49.16 or later.
  • You should not enable or disable Session Reliability mode while the NetScaler appliances have active connections.
  • Enabling or disabling the feature while connections are still active causes HDX Insight to stop parsing those sessions after a failover occurs, resulting in a loss of information about those sessions.


Problem symptoms occur when all nodes in a XenMobile Server cluster are not located near to each other

XenMobile Server supports ‘active/passive’ failover between multiple sites.

It is not typically possible to configure ‘active/active’ for anything other than the access layer, comprising Citrix ADC, Public DNS records and GSLB.

A typical setup for Disaster Recovery will use two XenMobile Server clusters, one active and the other passive.

Supported SQL infrastructures include Basic AOAG (Always On Availability Groups) and Clustered SQL for High Availability.

In a Disaster Recovery scenario (either for testing or practical reasons), it is usually necessary to block or prevent device connections into the Primary datacentre or site as one of the first steps of performing the failover. This is to prevent any further changes from being made to the configuration database for XenMobile Server.

When performing a failover to a Disaster Recovery datacentre or site, synchronise the configuration database from the Primary site to the DR site one last time, after first blocking or preventing device connections into the Primary site. Afterwards, allow connections into the DR site, where changes can then be made to the newly synchronised database.

Whilst the access layer (front end) of supported Disaster Recovery infrastructures can be configured as ‘active/active’, the different clusters of XenMobile Server nodes (back end) are intended to be ‘active/passive’.


Implementing Failover between (SG 600-35) and (S200-30)

I need a solution

As per the KB below, it is not recommended to implement failover between two different models:

https://support.symantec.com/us/en/article.tech240…

However, we are planning a HW refresh, replacing the existing proxy cluster (600-35, ver. 6.7.2.2) with a new cluster (S200-30, ver. 6.7.4.3).

To achieve zero downtime, we will remove the cables from the old standby proxy and connect them to the new standby, which has the same configuration; we will then do the same with the active unit, but not on the same night.

Regardless of the load point mentioned in the link above, are there any compatibility issues, concerns or recommendations regarding implementing HA between these two proxies (with different model/version)?



SOFS (Scale-Out File Server) does not work with 1901 or 1902

Unable to mount a share on a Scale-Out File Server cluster resource (Cluster Shared Volumes).

The errors below are seen in maservice.log.

2019-02-07 03:42:04,148 ERROR [80] MountPointService: Encountered error creating mount point /mnt/maserviceshare/mnt00000004 with exception Uni.Core.Handlers.Exceptions.GlobalizedErrorException`1[Uni.Core.Contract.Results.FileCategory]: MessageId=FileShareServicePermissionDenied, DefaultTitle=, CategoryData={[FileCategory { Message = “mount error(13): Permission denied; Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)” }]}

at Uni.Appliance.Services.FileShareServices.MountPointService.MountUncToPath (System.String uncPath, Uni.Appliance.Services.FileShareServices.Interfaces.FileShareType fsType, Uni.Appliance.Services.FileShareServices.Interfaces.FileShareCredentials fsCreds, System.Boolean cacheEnabled, System.Nullable`1[T] timeout, System.Int64 mntNumber, System.String mntPoint) [0x000af] in d:buildsR4ZION-WSW-JOB1sourceUni.ApplianceServicesFileShareServicesMountPointService.cs:281

2019-02-07 03:42:04,148 INFO [80] TestRemoteFileShar: TEST( \citrix-layers.CustPrivInfo.dirlayers$, Cifs, CustPrivInfoservice, * ) FAILED MessageId=FileShareServicePermissionDenied, DefaultTitle=, CategoryData={[FileCategory { Message = “mount error(13): Permission denied; Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)” }]}

2019-02-07 03:42:04,148 INFO [80] HandlerHelper: Finished Command TestRemoteFileShareCommand->TestRemoteFileShareResult

2019-02-07 03:42:04,148 ERROR [80] HandlerHelper: ‘Application Error while processing ‘Command’ ‘TestRemoteFileShareCommand”: ‘DefaultTitle=””, MessageID=”FileShareServicePermissionDenied”, {CategoryData={[FileCategory { Message = “mount error(13): Permission denied; Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)” }]}’


Configure ASG Failover of transparent/explicit proxies

I need a solution

Hi All,

  Attached is our current proxy design; we are going to configure a failover unit. Currently we do the failover on our firewalls, and the proxy is connected directly to the firewalls.

  The question and challenge is: we would like to use the current LAN IP 10.1.0.1 as a VIP, making the active unit 10.1.0.3 and the standby 10.1.0.4; likewise, change the WAN IP 10.1.0.2 to a VIP, making the active unit 10.1.0.5 and the standby 10.1.0.6. Explicit users will still use the 10.1.0.1 IP, both active and standby should be able to use the same certificates, and we do not really want automatic failover from the proxy; we want manual failover, because it is configured inline between the active and standby firewalls.

What configuration is required for that? The guides below are not accurate and contain no technical steps.

https://support.symantec.com/en_US/article.TECH242151.embed.html

https://www.symantec.com/connect/sites/default/files/Technical%20Brief%20Implementing%20Failover%20Services.pdf

  The guide says to configure a VIP for failover (in our case the units are directly connected); we can configure 192.168.0.1 as the VIP with 192.168.0.2 as active and 192.168.0.3 as standby, or we can use the management IPs.

Config for Primary

1. remove LAN interface IP – 10.1.0.1

2. create VIP – 10.1.0.1

3. Assign LAN interface IP – 10.1.0.3

4. remove WAN interface IP – 10.1.0.2

5. create VIP – 10.1.0.2

6. Assign WAN interface IP – 10.1.0.5

7. Failover New

– enabled

   – use existing  – 10.1.0.1 

– multicast 224.0.0.1

– master – check

Standby

. create VIP – 10.1.0.1

. Assign LAN interface IP – 10.1.0.4

. create VIP – 10.1.0.2

. Assign WAN interface IP – 10.1.0.6

. Failover New

– enabled

   – use existing  – 10.1.0.1 

– multicast 224.0.0.1

– master – check

Apply ?

 Is that correct? Also, how do we perform a manual failover?



Types of Configuration Synchronized in ASG Failover

I need a solution

Hello Guys, 

Can anyone help me understand what configuration is actually synchronized between the members of a failover group?

In other words, what configuration can I set once on the master member (configuration sync), and what configuration do I have to set on both members, whether before or after creating the failover group?

Thanks in advance for your replies.

