On demand data in Python, Part 3: Coroutines and asyncio

Much of the data in modern big data applications comes from the web or from
databases. You need code that processes this data at scale without everything
grinding to a halt while it waits on input/output. Python 3 introduced a
system for cooperative multitasking based on asynchronous coroutines, which
alleviates this problem. Asynchronous coroutines build on the same concepts
as generators: they are objects created from special functions that can be
suspended and resumed. They make it possible to break complex, inefficient
processing into simple tasks that cooperate so that the CPU stays busy while
input/output is pending. Learn these core techniques by following a simple
sequence of examples.
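
To give a flavor of the technique, here is a minimal sketch, assuming Python
3.7 or later; the data sources and delays are hypothetical stand-ins for real
network or database calls.

    import asyncio

    async def fetch_record(source, delay):
        # Simulate a slow network or database read; real code would await
        # an I/O library call here instead of asyncio.sleep().
        await asyncio.sleep(delay)
        return "record from " + source

    async def main():
        # Run the coroutines concurrently; each one suspends at its await,
        # letting the others make progress while it waits on (simulated) I/O.
        results = await asyncio.gather(
            fetch_record("web API", 1.0),
            fetch_record("database", 0.5),
        )
        for result in results:
            print(result)

    asyncio.run(main())

Because the waits overlap rather than stack up, the whole run takes about one
second instead of one and a half.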

On demand data in Python, Part 2: The magic of itertools

Python’s motto has always been “Batteries included,” highlighting its
extensive standard library. There are many well-kept secrets among the
standard modules, including itertools, which is less well known in part
because iterators and generators themselves are less well known. This is a
shame, because the routines in itertools and in related modules such as
functools and operator can save developers many hours when building big data
operations. Learn through copious examples how to use itertools to address
the most common MapReduce-style data science tasks.
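
As a taste of the style, here is a small sketch of a MapReduce-like
aggregation with groupby; the sample readings are made up for illustration.

    from itertools import groupby
    from operator import itemgetter

    # Hypothetical sample records: (station, temperature) pairs.
    readings = [
        ("station-a", 71), ("station-b", 65), ("station-a", 68), ("station-b", 70),
    ]

    # groupby() only groups consecutive items, so sort on the grouping key first.
    readings.sort(key=itemgetter(0))

    # Reduce each group to its mean temperature.
    for station, group in groupby(readings, key=itemgetter(0)):
        temps = [temp for _, temp in group]
        print(station, sum(temps) / len(temps))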

IBM Blockchain 101: Quick-start guide for developers

Join the blockchain revolution! This developerWorks quick-start guide is
for application developers who are exploring blockchain technology and want to
quickly spin up a blockchain pre-production network, deploy sample
applications, and develop and deploy client applications. Simple instructions
show you how to activate a blockchain network based on the latest Hyperledger
Fabric framework, write and install chaincode (business logic for the
network), and develop client applications to streamline business processes and
digital interactions.

On demand data in Python, Part 1: Python iterators and generators

The most basic way to process data in Python is to build it up in lists,
dictionaries, and other such data structures. Though these techniques work
well in many cases, they cause major problems with large quantities of data:
it’s easy to find your code running painfully slowly or running out of
memory. Generators and iterators help address this problem. They have been
part of Python for a long time but are not widely understood. Used properly,
they can bring big data tasks down to size so that they don’t require a huge
hardware investment to complete.
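
As a quick illustration of the idea, here is a minimal sketch; the file name
is just a placeholder.

    def long_lines(path, min_length=80):
        # A generator: it yields matching lines one at a time instead of
        # building the whole result list in memory.
        with open(path) as f:
            for line in f:
                if len(line) >= min_length:
                    yield line.rstrip("\n")

    # The file is read lazily, one line per loop iteration, so even a huge
    # log fits in a small, constant amount of memory.
    for line in long_lines("server.log"):
        print(line)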

Deploy a blockchain business network to the cloud using the IBM Blockchain Starter Plan

Simple steps and companion videos show you how to deploy an existing
sample business network, the Car Auction network, to the cloud, specifically
to the IBM Blockchain Platform Starter Plan. Once you deploy the sample
network, you can start developing, demoing, and staging your blockchain
applications on a simulated multi-organization network.

IoT on the edge, Part 3: Integrating cloud analytics and a dashboard app into your IoT solutions

In the first article in this series, you learned how to monitor hay barns
for temperature and humidity to identify dangerous conditions, and how to
send those sensor readings from NodeMCU devices to the IBM Watson IoT
Platform. In this article, you learn how to preserve those readings in a
database, display them in an IoT dashboard, and generate alerts.
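
For context, here is a rough sketch of the device-to-platform step recapped
above, assuming the paho-mqtt 1.x client library and the Watson IoT
Platform’s usual MQTT conventions; the organization ID, device type, device
ID, and token are placeholders.

    import json
    import paho.mqtt.client as mqtt  # third-party package: paho-mqtt (1.x API)

    # Placeholder credentials: substitute your own Watson IoT Platform
    # organization ID, device type, device ID, and authentication token.
    ORG, DEVICE_TYPE, DEVICE_ID, TOKEN = "myorg", "nodemcu", "barn01", "secret"

    client = mqtt.Client(client_id="d:{0}:{1}:{2}".format(ORG, DEVICE_TYPE, DEVICE_ID))
    client.username_pw_set("use-token-auth", TOKEN)
    client.connect("{0}.messaging.internetofthings.ibmcloud.com".format(ORG), 1883, 60)

    # Publish one temperature/humidity reading as a JSON device event.
    payload = json.dumps({"d": {"temperature": 22.5, "humidity": 61}})
    client.publish("iot-2/evt/status/fmt/json", payload)
    client.disconnect()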
