Designing Namespace and Address for Hyperledger Sawtooth Transaction Family

  • Implementing Hyperledger Sawtooth Transaction Family

    After the namespace and address scheme is defined for the transaction family, the state, transaction, and payload encoding scheme can be defined.

    To define the state, you should analyze the data requirements for your organization and follow an appropriate modeling process to define the semantic data model for the system. For example, you could use entity-relationship modeling to represent conceptual data logic in your enterprise. In our example, the state is as follows:

    [State table: House APN (Key) | House Owner]

    This can be explained as follows:

    • House APN (Key): The Assessor Parcel Number (APN) for a house is a unique number assigned to each parcel of land by a county tax assessor. The APN is based on formatting codes, depending on the home’s location. Local governments use APNs to identify and keep track of land ownership for property tax purposes.
    • House Owner: The name of the person who currently owns the house.
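
    As a point of reference, the following is a minimal sketch of how a Sawtooth transaction family conventionally derives its state addresses from such a key (modeled on the XO family convention; the family name house and the helper names are illustrative assumptions, not fixed by this example):

    import hashlib

    FAMILY_NAME = 'house'  # assumed family name for this example

    def _sha512_hex(text):
        return hashlib.sha512(text.encode('utf-8')).hexdigest()

    # 6-hex-character namespace prefix derived from the family name
    HOUSE_NAMESPACE = _sha512_hex(FAMILY_NAME)[:6]

    def make_house_address(apn):
        # 70-character address: the 6-character namespace prefix plus the
        # first 64 hex characters of the SHA-512 hash of the key attribute
        return HOUSE_NAMESPACE + _sha512_hex(apn)[:64]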

    Defining transactions involves analyzing all your business use cases and the attributes used to perform these business operations. In our example, transactions and their payloads are as follows:

    • House APN (Key): The key attribute for the state.
    • Action: This can be either the create keyword or the transfer keyword.
    • House Owner: The name of the house owner.

    To define payload encoding schemes, you could choose from one of the following methods:

    • JSON Encoding: JavaScript Object Notation (JSON) is a lightweight data-interchange format that is popularly used in software systems as an alternative to XML. The standard JSON package for Python can be found here: https://docs.python.org/3/library/json.html. The advantages of using JSON are that it is human-readable, it is supported in most programming languages, and it is easy to encode and decode using libraries.
    • Protobuf Encoding: Protocol buffers are Google’s language- and platform-neutral mechanism for serializing data. With protobuf encoding, you define the message formats in a .proto file and compile them using the protocol buffer compiler. To find out more, you can follow the guide here: https://developers.google.com/protocol-buffers/. Protobuf is small, fast, and simple, but it is not human-readable. Like JSON, it is supported in many languages, such as Java, Python, C++, and Go.
    • Simple text encoding: This involves defining your own message format and carrying out the character encoding yourself, either using your own protocol with a special delimiter or following a common format such as .csv, or using base64 to represent binary data as an ASCII string. These formats are simple, fast, human-readable (in the delimiter-based case), and language- and platform-neutral.

    For our example, the encoding for the state and payload is simple text encoding, with the data encoded as UTF-8 in CSV format. We are using the Sawtooth XO family as a template.
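
    Concretely, a minimal sketch of the payload encoding and decoding under that scheme might look like the following (the function names are illustrative; the field order mirrors the payload attributes listed above):

    def encode_payload(apn, action, owner):
        # UTF-8 CSV payload: house APN, action (create | transfer), owner
        return ','.join([apn, action, owner]).encode('utf-8')

    def decode_payload(payload_bytes):
        apn, action, owner = payload_bytes.decode('utf-8').split(',')
        return apn, action, owner

    A flat CSV payload like this assumes that none of the fields themselves contain commas; the XO family makes the same simplifying assumption.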

    For hash collisions, the colliding state entries will be stored as the UTF-8 encoding of a single string with the delimiter |, such as entry1|entry2:

    '|'.join(sorted([','.join([house, owner])
                     for house, owner in house_list.items()])).encode()
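
    The matching deserialization step splits on the same delimiters to rebuild the state (a sketch in the style of the XO family's _deserialize helper; the dictionary shape is an assumption):

    def deserialize(data):
        # Rebuild the {house_apn: owner} mapping from the stored bytes
        house_list = {}
        for entry in data.decode('utf-8').split('|'):
            house, owner = entry.split(',')
            house_list[house] = owner
        return house_list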


    Exclude Prefix Data Identifier validation check

    I do not need a solution (just sharing information)

    Applies to multiple versions of DLP

    In an effort to reduce false positives for several of my custom DIs, most notably the 9-digit SSN without dashes, I’m using the validation check named “Exclude Prefix” (a comma-separated list of values of any length).

    In my case the values are words. I’ve determined the list is case-sensitive (so “Extension” is different from “extension”), but it is also sensitive to trailing characters such as spaces: a value of “flyer ,” (one trailing space) is different from “flyer  ,” (two trailing spaces). This is important when inspecting different file types. I’ve found that a value effective for MS Word files is often not effective for a PDF file; specifically, some PDF files need two spaces after the word, so the validator entry looks like “Policy Number  ,”.

    To take it further, it appears the validator will accept a tab as well as spaces. For example, in a spreadsheet generating false positives, a column of abbreviated US states and a column of 9-digit ZIP codes are separated by two spaces and a tab when the text is extracted by DLP. Incorporating those whitespace characters into the validator entry (NY, two spaces, a tab, then the comma) corrects the false positive. I’ve implemented this last example, with double spaces and a tab, in my test environment; I’m still testing, but it looks promising.

    I use UltraEdit, but I think most text editors will work. I’ve attached an image of a portion of the list used.


    How can I automatically remove resources from a Custom Data Class?

    I need a solution

    Can someone help me with this?

    I have a CSV file with the columns “ComputerName” and “UserName”. The CSV file is configured as a data source, and that data source is imported into a custom data class on a schedule. This works as it should.

    Now I would like computers that are no longer in the CSV file to be removed from the custom data class. As far as I can tell, this is not possible with the import task; it only adds or updates entries and does not remove them from the custom data class. Is it possible to do this? I guess one option is to clear the custom data class and let it repopulate, but how can I do that on a schedule?


    Problem with exact data profile

    I need a solution

    Hello everybody!

    I need some help, please.

    We created a policy using a regular expression to detect emails containing a name plus a telephone number, but we get a lot of false-positive incidents; for example, contract numbers are detected as telephone numbers. So I tried to use an Exact Data Profile. First, I created a .csv file with names and telephone numbers. When I upload this file to the server to create the exact match data identifier profile, there is an error: “data file was corrupted (creating index version 1)”. The .csv file is written in Russian, so the file encoding is UTF-8. Thanks for any ideas on how to load the .csv to create the data identifier profile.
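
    In case it helps while troubleshooting, below is a small Python sketch that verifies the file really decodes as UTF-8 and rewrites it without a byte-order mark. Whether a BOM (or a stray invalid byte) is what the indexer trips on is only a guess, not a documented DLP requirement, and the file names are illustrative:

    # Verify the CSV is valid UTF-8 and rewrite it without a BOM.
    # That a BOM breaks the EDM indexer is an assumption to test.
    def normalize_utf8(src, dst):
        with open(src, 'rb') as f:
            raw = f.read()
        text = raw.decode('utf-8-sig')  # raises UnicodeDecodeError if not valid UTF-8
        with open(dst, 'w', encoding='utf-8', newline='') as f:
            f.write(text)

    normalize_utf8('names_phones.csv', 'names_phones_clean.csv')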


    SEPM dump logs – is it just me?

    I need a solution

    Has anyone used or looked at the exported dump logs from the console?

    Just looking at the risk and traffic logs, I can’t believe they seriously think they could be used for any useful purpose.  

    • Each log contains a header row that isn’t always complete; the risk log contains about 10 more fields than are shown in the header.
    • Most items in each record also carry a field name, but it often doesn’t match what was listed in the header. The traffic log contains three separate items with Local: and Remote: in each record.
    • They’re formatted as comma-delimited, but values that contain commas or whitespace aren’t always enclosed in quotes.

    The Syslog output apparently has the same issues.

    Is that format/layout following some standard I haven’t seen before, or are things as bad as they look? Is there an easier method to parse them than a script that tries to foresee every inconsistency?
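
    For what it’s worth, here is a rough Python sketch of the kind of defensive parsing this seems to require, purely illustrative: it ignores the unreliable header, keys off the embedded field names, and collects repeated names (such as Local: and Remote:) into lists. It still cannot repair values whose unquoted commas were split:

    import csv

    def parse_dump_log(path):
        records = []
        with open(path, newline='', encoding='utf-8', errors='replace') as f:
            for row in csv.reader(f):
                rec = {}
                unlabeled = []
                for item in row:
                    name, sep, value = item.partition(':')
                    if sep:
                        # Repeated field names are collected, not overwritten
                        rec.setdefault(name.strip(), []).append(value.strip())
                    else:
                        unlabeled.append(item.strip())
                if unlabeled:
                    rec['_unlabeled'] = unlabeled
                records.append(rec)
        return records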


    Can IDM do all EDM does?

    I need a solution

    We have to make use of Remote IDM Indexing / Remote EDM Indexing because, as the DLP administrator, I cannot directly access the confidential documents of other departments.

    I am worried about the feasibility of Remote EDM Indexing if it has to be done by non-IT staff in other departments, e.g. HR (not to mention whether they are willing to do so).

    In terms of structured data, most of their confidential documents are stored in Excel spreadsheets in .xlsx format. The following is an illustrative example.

       Column A   Column B
    1  Account    Password
    2  User1      P@ssw0rd
    3  User2      p@ssw0rd!@#$
    4  User3      !@#$p@ssw0rd

    If I understand correctly, the Excel spreadsheet in .xlsx format must first be exported or converted to a format such as tab- or comma-delimited CSV or Text (Tab Delimited) (*.txt) for Remote EDM Indexing to work on it. It seems difficult to ask staff in other departments to do this, as it is not a one-off task; they would need to repeat it periodically or whenever there is an update.
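
    That said, the export step could in principle be scripted rather than done by hand. Below is a minimal sketch using the openpyxl package; the file names are illustrative, and it assumes the data is on the first worksheet:

    import csv
    import openpyxl

    def xlsx_to_tsv(xlsx_path, tsv_path):
        # Read the workbook without formulas (data_only) and stream rows
        wb = openpyxl.load_workbook(xlsx_path, read_only=True, data_only=True)
        ws = wb.active  # first worksheet
        with open(tsv_path, 'w', newline='', encoding='utf-8') as out:
            writer = csv.writer(out, delimiter='\t')
            for row in ws.iter_rows(values_only=True):
                writer.writerow(['' if v is None else v for v in row])

    xlsx_to_tsv('accounts.xlsx', 'accounts.txt')

    A scheduled task running something like this against the department’s share would keep the exported file current without manual effort.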

    Remote IDM Indexing, by contrast, is much easier and simpler: it can directly index a set of files from a shared network resource and can run automatically on a built-in schedule.

    I’m curious whether Remote IDM Indexing can index structured data too, or can perform all the functionality of Remote EDM Indexing, so that the cumbersome manual procedures of Remote EDM Indexing can be avoided.


    Computer Status / Risk Report Export to email in csv

    I need a solution

    Is it possible to get an export of SEPM -> Monitor -> Computer Status (Past Month) and SEPM -> Monitor -> Risk Logs (Past 24 Hours) to CSV files and receive them by email? I hope there is some solution for this.


    Export Option for Intrusion Summary

    I need a solution

    I am wondering if there is any way to export the Intrusion Summary lists, specifically the Attack: Intrusion History list. About once a week I have to manually copy 500 pages into a text document. It would save me a lot of time if I could simply export this list as a .csv; even a .txt document would be fine. Let me know if there is a solution like this, or if you plan on adding one in the near future.


    Re: InsightIQ: how to collect information on the contents (files) of a specific directory

    My question is how to collect the information for a specific directory with the InsightIQ CLI.

    I can collect the directories with the IIQ CLI:

    iiq_data_export fsa export -c clustername --data-module directories -o 7318 --name <file>

    output CSV:

    path[directory:/ifs/],dir_cnt (count),file_cnt (count),ads_cnt,other_cnt (count),log_size_sum (bytes),phys_size_sum (bytes),log_size_sum_overflow,report_date: 1537042326

    /ifs/data,4092765,123934912,0,0,1097588028518359,1332028348377088,0

    /ifs/home,12,68,0,0,94902,2095104,0

    /ifs/.isilon,3,22,0,0,60217,564224,0

    /ifs/data/files,1,7,0,0,17907,184832,0

    The --data-module directories option generates an overview of the files per directory.

    In the manual I can only find these data-module options:

    Directories                  directories
    File Count by Logical Size   file_count_by_logical_size
    File Count by Last Modified  file_count_by_modified_time
    File Count by Physical Size  file_count_by_physical_size
    Top Directories              top_directories
    Top Files                    top_files
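
    Substituting one of those module names into the same export command should presumably work, for example (untested; whether it can be scoped to a single directory such as /ifs/data/files is exactly my open question):

    iiq_data_export fsa export -c clustername --data-module top_files -o 7318 --name <file>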

    Now I want to export the contents of the directory /ifs/data/files to CSV format.

    I can also produce this report in the GUI under File System Analytics, with the download-as-CSV option.

    Can someone give me a hint on the syntax?

    Thanks
