Russia’s Largest Bank Completes $15 Million Debt Purchase Via Hyperledger Blockchain

Using the Hyperledger blockchain, Russia’s Sberbank has successfully bought around $15 million worth of accounts receivable from the Singapore-based commodity trading firm Trafigura. According to a spokesperson for the largest bank in Russia, the purchase was completed using Hyperledger Fabric’s private data collections feature, which allows certain information to remain private even on a network shared with other members.

Furthermore, the Sberbank-built platform on which the transaction was carried out uses smart contracts written in the Scala general-purpose programming language. The software also relies on the Aurelia framework and runs on SberCloud, a cloud service developed and deployed by Sberbank itself. The company boasts that its platform completes a full block of transactions in one second.

The transaction was concluded a few days ago at the Eastern Economic Forum in Vladivostok, Russia’s Pacific port city. Speaking on the transaction, Sberbank’s first deputy chairman, Alexander Vedyakhin, said that the system significantly reduced the time required to complete the deal by making the exchange of documents far more seamless. Vedyakhin said:

“Our blockchain pilot project records every step of the transaction: request for purchase of receivables, application processing and its approval with the bank, issuing the bank’s offer, confirmation of terms by Trafigura, and settlement of the transaction.”

Following the success of this transaction, both Trafigura and Sberbank are making plans to find more ways in which blockchain technology can significantly improve financial transactions and processes worldwide.

Image Credits: Pixabay

Related:

XD 7.X XACT_ABORT support

When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back.

When this option is enabled on a SQL instance hosting a XenDesktop 7.x database, unexpected errors might occur at brokering time, or when attempting to reconnect to an active session.
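
As a quick check, the following T-SQL sketch shows whether XACT_ABORT is in effect for the current session and what the instance-wide default is. The 16384 bit of @@OPTIONS corresponds to XACT_ABORT; the query itself is illustrative and not part of the original article.

-- Check whether XACT_ABORT is ON for the current session
-- (bit 16384 of @@OPTIONS corresponds to SET XACT_ABORT).
IF (@@OPTIONS & 16384) = 16384
    PRINT 'XACT_ABORT is ON for this session';
ELSE
    PRINT 'XACT_ABORT is OFF for this session';

-- Check the instance-wide default: the 'user options' server
-- configuration applies its bit mask to every new connection.
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'user options';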

Related:

Provisioning Services: PVS Servers May Stop Responding Or Target Devices May Freeze During Startup Due To Large Size Of MS SQL Transaction Logs

Back up the XenApp/XenDesktop Site and PVS databases and the transaction log file to trigger transaction log auto-truncation.

The transaction log should be backed up on a regular basis to avoid repeated auto-growth operations and a full transaction log file.

Reference: https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/back-up-a-transaction-log-sql-server?view=sql-server-2017
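
A minimal sketch of such a log backup is shown below; the database name and backup path are placeholders for your environment, not values from this article.

-- Back up the transaction log; a successful log backup marks the
-- inactive portion of the log as reusable (truncation).
BACKUP LOG [CitrixSiteDB]
TO DISK = N'D:\SQLBackups\CitrixSiteDB_log.trn';

Schedule this regularly, for example through a SQL Server Agent job, so the log never fills its disk.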

ADDITIONAL INFORMATION

Ideally, the transaction log will be truncated automatically after the following events:

  • Under the simple recovery model, unless some factor is delaying log truncation, an automatic checkpoint truncates the unused section of the transaction log. Under simple recovery there is little chance of the transaction log growing; it happens only in specific situations, such as a long-running transaction or a transaction that creates many changes.
  • By contrast, under the full and bulk-logged recovery models, once a log backup chain has been established, automatic checkpoints do not cause log truncation. Under the full or bulk-logged recovery model, if a checkpoint has occurred since the previous backup, truncation occurs after a log backup (unless it is a copy-only log backup). There is no automated process of transaction log truncation; transaction log backups must be made regularly to mark unused space as available for overwriting. The bulk-logged recovery model reduces transaction log space usage by using minimal logging for most bulk operations. (A sketch for checking the recovery model follows this list.)
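
The following hedged sketch shows how to check a database’s current recovery model and what, if anything, is preventing log truncation; the database name is a placeholder.

-- Check the recovery model and the reason (if any) that log
-- truncation is currently being delayed.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'CitrixSiteDB';

A log_reuse_wait_desc of LOG_BACKUP under the full recovery model means a log backup is required before the log can be truncated.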

Note that the transaction log file size may not decrease even if the transaction log has been truncated automatically.

Log truncation deletes inactive virtual log files from the logical transaction log of a SQL Server database, freeing space in the logical log for reuse by the physical transaction log. Truncation is essential to keep the log from filling: if a transaction log were never truncated, it would eventually fill all the disk space allocated to its physical log files.
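
If the physical file itself must be reduced after truncation, DBCC SHRINKFILE can be used. This is a hedged sketch with placeholder names and sizes; note that routinely shrinking the log is generally discouraged, because the file will simply grow again.

USE [CitrixSiteDB];
-- Find the logical name and current size of each database file.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;
-- Shrink the log file to a 1024 MB target.
DBCC SHRINKFILE (N'CitrixSiteDB_log', 1024);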

It is also recommended to keep the transaction log file on a separate drive from the database data files, as placing both data and log files on the same drive can result in poor database performance.

Related:

XenDesktop 7.6 Studio Failed to Connect to Database, Error Id: XDDS:9B560459

Step 1: Check the event log of the SQL Server, because a large transaction log may cause this error message. If the transaction log is full, try to shrink it:

  • For a single-database environment, use the method in https://support.citrix.com/article/CTX126916 to shrink the transaction log.
  • For a mirrored or AlwaysOn database environment, use one of the following methods:

  1. Use BACKUP LOG [databasename] TO DISK = 'nul'. For reference, see http://www.cnblogs.com/TeyGao/p/3519954.html or http://realit1.blogspot.com/2016/02/shrinking-database-log-files-in.html. This method may take a long time depending on the size of the log, so please be patient.
  2. Break the mirror configuration to shrink the transaction log quickly. This method carries higher risk, since it makes many changes to the environment, and Citrix does not provide professional database support to deal with urgent issues caused by such changes. The sub-steps are listed below, followed by a T-SQL sketch.

  a. Remove mirroring from the primary SQL Server.
  b. Change the recovery model to simple.
  c. Right-click the database and shrink the transaction log.
  d. Back up the database and transaction log, and copy the backups to the mirror database server.
  e. Restore the backup with the NORECOVERY option on the mirror database server.
  f. On the primary database server, select the database and choose the Mirror task.
  g. Click Configure Security to start the Mirroring wizard.
  h. Select Yes on the Witness Server page.
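
The T-SQL equivalents of the shrink-related steps above look roughly like the following; the database name and target size are placeholders, and this is a sketch rather than a vetted procedure.

-- a. Remove mirroring from the primary server.
ALTER DATABASE [CitrixSiteDB] SET PARTNER OFF;
-- b. Switch to the simple recovery model.
ALTER DATABASE [CitrixSiteDB] SET RECOVERY SIMPLE;
-- c. Shrink the transaction log (target size in MB).
DBCC SHRINKFILE (N'CitrixSiteDB_log', 512);
-- Switch back to full recovery before re-establishing mirroring,
-- since database mirroring requires the full recovery model.
ALTER DATABASE [CitrixSiteDB] SET RECOVERY FULL;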

Step 2: Try to restart the SQL Server and the DDC, if possible.

Step 3: Use ODBC to connect to the database to test whether it accepts remote connections.

Open the ODBC Data Source Administrator on any Windows system, click Add, select SQL Server, and click Finish to create a new data source for SQL Server.

Give the data source a name, select the SQL Server you want to connect to, then click Next > Next > Finish > Test Data Source. If the database accepts remote connections, a TESTS COMPLETED SUCCESSFULLY message is displayed.

Other troubleshooting steps include, but are not limited to, checking SQL Server’s port 1433 and the database server’s “Allow remote connections to this server” configuration.
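
The listening port and the remote-access setting can also be confirmed from T-SQL, run from a connection that already works (for example, a local SSMS session). This is a hedged sketch, not part of the original article.

-- Show the TCP port used by the current connection
-- (NULL for shared memory or named pipes connections).
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;

-- Show the 'remote access' server configuration, the setting behind
-- the "Allow remote connections to this server" checkbox in SSMS.
EXEC sp_configure 'remote access';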

Related:

How can I rollback a transaction in a composite service?

We’re implementing multiple composite services under MDM AE. Inside these composite services we store multiple objects in the database. In case something fails, I want to roll back the whole transaction. I have tried the following:

1) Set the status to DWLStatus.FATAL with the following code:

// Build a response with FATAL status, expecting MDM to roll back the
// transaction (MDM base classes assumed to be imported).
DWLResponse outputResponse = new DWLResponse();
DWLStatus status = new DWLStatus();
status.setStatus(DWLStatus.FATAL);
DWLError error = new DWLError();
error.setErrorMessage("Some error message");
error.setDetail("Some detail");
error.setThrowable(new IOException("a lousy evening")); // if an exception occurred; message translated from the original Slovak
status.addError(error);
outputResponse.setStatus(status);

2) throw RuntimeException
3) throw checked exception

Well, nothing caused the transaction to roll back. I have a simple test case for this where I insert the simplest possible BObj into the DB by calling the corresponding add method.

Related:

Why are we getting DuplicateKeyExceptions on some inserts into our WXS grid?

My application is getting DuplicateKeyExceptions on some of our inserts into a WebSphere eXtreme Scale grid.

The exception stack is below:

com.ibm.websphere.objectgrid.TransactionException: rolling back transaction, see caused by exception

at com.ibm.ws.objectgrid.SessionImpl.rollbackPMapChanges(SessionImpl.java:2983)
at com.ibm.ws.objectgrid.SessionImpl.commit(SessionImpl.java:2297)
at com.ibm.ws.objectgrid.SessionImpl.mapPostInvoke(SessionImpl.java:4147)
at com.ibm.ws.objectgrid.ObjectMapImpl.postInvoke(ObjectMapImpl.java:947)
at com.ibm.ws.objectgrid.ObjectMapImpl.insert(ObjectMapImpl.java:1547)
at com.ibm.ws.objectgrid.ObjectMapImpl.insert(ObjectMapImpl.java:1478)
at com.my.application.package.client.insert(Client.java:163)

Caused by: com.ibm.websphere.objectgrid.ClientServerTransactionCallbackException: Client Services – received exception from remote server: com.ibm.websphere.objectgrid.TransactionException: com.ibm.websphere.objectgrid.TransactionException:rolling back transaction, see caused by exception

at com.ibm.ws.objectgrid.client.RemoteTransactionCallbackImpl.processReadWriteResponse(RemoteTransactionCallbackImpl.java:1187)
at com.ibm.ws.objectgrid.client.RemoteTransactionCallbackImpl.processReadWriteAsyncResponse(RemoteTransactionCallbackImpl.java:1736)
at com.ibm.ws.objectgrid.client.RemoteTransactionCallbackImpl.processReadWriteRequestAndResponse(RemoteTransactionCallbackImpl.java:1484)
at com.ibm.ws.objectgrid.client.RemoteTransactionCallbackImpl.commit(RemoteTransactionCallbackImpl.java:338)
at com.ibm.ws.objectgrid.SessionImpl.commit(SessionImpl.java:2166)

Caused by: com.ibm.websphere.objectgrid.TransactionException: com.ibm.websphere.objectgrid.TransactionException:rolling back transaction, see caused by exception

at RemovedByServerSerialization.RemovedByServerSerialization(RemovedByServerSerialization:0)
Caused by: com.ibm.websphere.objectgrid.DuplicateKeyException: com.ibm.websphere.objectgrid.DuplicateKeyException:ObjectGrid: Grid, Map: Map1, Key: KeyDataBytesImpl: {key1}

What is the cause of this error and how can I correct the problem?

Related:

The Cloud Security Ecosystem

Composition and functions of digital identity

Digital identity in this context has specific composition and transactional functions which make its accuracy and integrity critical.

A feature of all schemes which require digital identity for transactions is that they consist of two sets of information: a small set of defined information which must be presented for a transaction, i.e., transaction identity; and a larger collection of more detailed “other information” which is updated on an ongoing basis. This architecture is depicted diagrammatically in Figure 1.

These two sets of information collectively comprise digital identity, but they are different in composition and function.

Because of its nature and functionality, transaction identity is the most important part of digital identity, and it is also the most vulnerable to system error as defined in this chapter. Transaction identity is comparatively static, with much of the information being established at birth. It typically consists of full name, gender, date of birth, and at least one piece of what is referred to as “identifying information,” which is most often a signature or a numerical identifier. The information which comprises transaction identity is largely public and is not of a nature which naturally seems to attract privacy protection. Most significantly, transaction identity is not just information. As discussed below, it is functional.

The information which constitutes transaction identity is fundamentally different from the larger body of other information which sits behind it. That larger body of information tells a story about a person, and that is its sole purpose. It is also dynamic: it is augmented on an ongoing basis, and even information which at first sight seems largely administrative adds to the profile. This information is not generally in the public domain. It is generally considered personal information, typically protected by privacy and data protection regulation in most jurisdictions, including Australia, the United Kingdom, the United States of America, and the EU. Access to the other information is primarily via transaction identity. The system is designed so that transaction identity is the access point, giving it a gate-keeper role. Transaction identity links digital identity to an individual through the identifying information (Figure 2).

These digital identity schemes depend on two processes — first, authentication of identity, and second, verification of identity. Both processes are founded on the integrity of transaction identity.

The information collected when an individual is registered under the scheme is used to authenticate identity in the sense that it is used to prove authenticity. The identifying information is used to link an individual to the registered digital identity. Typically, the identifying information is a number, a handwritten signature, and sometimes also a head and shoulders photo. Some schemes include biometrics as part of the identifying information. The biometrics typically used are 10 fingerprints, two iris scans, and a face scan. The identifying information is regarded as being associated inseparably with that individual. Once authenticated, the identity is recorded in the system.

Transaction identity, the defined, limited set of information which determines identity for transactional purposes, is then used to verify transactions. Invariably, full name, gender, date of birth, and a piece of identifying information will be required to transact. Not all the recorded information needs to be used for every transaction; a feature of the scheme is that the required information varies, to an extent, depending on the requirements of the transacting entity. The identifying information most commonly required is a signature and/or a numerical identifier.

As a set, this information is functional in that it enables the system to transact with the identity on record. Transaction identity is verified for transactional purposes when all the required transaction information, as presented, matches the information on record. Transaction identity is verified by matching information with information. A human being is not central to the transaction, and no human interaction is required. The set of information required to establish transaction identity can be provided remotely without any human involvement at that time. Through this matching process, transaction identity performs a number of sequential functions. First, transaction identity singles out one digital identity from all those recorded under the scheme. Second, transaction identity verifies that identity by determining whether there is a match between the transaction identity information as presented and that on record. These two steps enable the system to recognize and then transact with that digital identity, as depicted in Figure 3.

Under the scheme, there is an important distinction between identification and identity. Identification is just one part of the two processes used to establish identity for a transaction. Although in some respects transaction identity may seem to replicate the traditional function of identity credentials, there is an important difference in the role played by human beings and information. Unlike traditional identity papers, the information which comprises transaction identity plays the critical role in the transaction, not the individual. Digital identity does not merely support a claim to identity. Digital identity, specifically transaction identity, is the actor in the transaction. This function distinguishes transaction identity.

Although the assumption is that the system reaches behind transaction identity to deal with a person, the system does not actually operate in that way. The primary role of the identifying information is to link the registered digital identity to a person. The individual who is assumed to be represented by that identity is connected to transaction identity by the identifying information. However, this link is relatively tenuous. A human being is not central to, or necessary for, the transaction. Transaction identity enables the transaction. The interaction is machine to machine, based on matching datasets. As a matter of fact, if not law, the transaction is with the digital identity, not a person. If all the transaction identity information as presented matches the information on record, then the system automatically authorizes dealings with that digital identity, as depicted in Figure 4.

Within the scheme parameters, the system can “act and will for itself” to recognize the defined set of information which comprises transaction identity and then transact with that identity. This has significant consequences for the government as scheme administrator and for the public and private sector entities using the scheme, but the individual bears the most direct and significant consequences. This is because transaction identity directly implicates the individual linked to the digital identity by the identifying information, which is why it is important to protect the integrity of digital identity, especially now that governments are increasingly using cloud computing for their e-services and transactions.

About the author:

Dr. Ryan Ko is a Senior Lecturer with the University of Waikato, New Zealand. He established New Zealand’s first Master’s degree in Cyber Security and the first dedicated Cyber Security Lab at the University of Waikato. His main research areas are Cyber Security, Cloud Data Provenance, and Cloud Computing Security and Trust. Prior to joining the faculty, Dr. Ko was a lead computer scientist with Hewlett-Packard (HP) Labs’ Cloud and Security Lab and achieved first-in-the-world scientific breakthroughs in the area of cloud data provenance. A recipient of the Cloud Security Alliance (CSA) Ron Knode Service Award, he is active as Research Advisor for CSA Asia Pacific, and serves as chair and board member of several cyber security industry consortia and chapters. Dr. Ko is also the co-founder and co-chair of the CSA Cloud Data Governance Working Group, the first CSA research group led by a chapter in Asia Pacific. Prior to HP Labs and his Ph.D., he was an entrepreneur with two startups, and was with Micron Technology, Inc. He has spoken on Cloud Security at several locations in the USA and Asia Pacific. Dr. Ko holds three international patents and is a member of the IEEE, ACM, and AAAI. Most recently, he was one of 14 international subject matter experts selected by (ISC)2 to develop a new international certification, similar to the CISSP, for cloud security professionals.

Dr. Kim-Kwang Raymond Choo is a Fulbright Scholar and Senior Lecturer at the University of South Australia. He has (co)authored a number of publications in the areas of anti-money laundering, cyber and information security, and digital forensics, including six Australian Government Australian Institute of Criminology refereed monographs. Dr. Choo has been an invited speaker for a number of events (e.g., the 2011 UNODC-ITU Asia-Pacific Regional Workshop on Fighting Cybercrime and the 2011 KANZ Broadband Summit), and delivered keynote/plenary speeches at the ECPAT Taiwan 2008 Conference on Criminal Problems and Intervention Strategy, the 2010 International Conference on Applied Linguistics, and the 2011 Economic Crime Asia Conference, and an invited lecture at the Bangladesh Institute of International and Strategic Studies. He was one of more than 20 international (and one of two Australian) experts consulted by the research team preparing McAfee’s commissioned report entitled “Virtual Criminology Report 2009: Virtually Here: The Age of Cyber Warfare”; and his opinions on cyber crime and cyber security are regularly published in the media. In 2009, he was named one of 10 Emerging Leaders in the Innovation category of The Weekend Australian Magazine / Microsoft’s Next 100 series. He is also the recipient of several awards, including the 2010 Australian Capital Territory (ACT) Pearcey Award for “Taking a risk and making a difference in the development of the Australian ICT industry.”

Related:

process and content engine in one transaction

A customer and I are speculating about the upcoming Content and Process Foundation release 5.5.

A long-standing problem (dating back to 3.5) is that it is not possible to issue CE and PE actions within a single distributed transaction. Updates to a workflow and a document are therefore not possible in a single atomic transaction, causing all sorts of somersaults and handstands in code to recover from that, none of which is satisfactory.

So the question is: Will 5.5 remedy this situation? No excuses about not being released after 7th of Dec this year 🙂

And how do others deal with this issue?

Kind regards,
/Gerold

Related: