7022319: Ceilometer doesn’t return any metering data and times out

This document (7022319) is provided subject to the disclaimer at the end of this document.

Environment

SUSE OpenStack Cloud 6

Situation

Ceilometer API calls fail with “Request returned failure status: 504”.

Resolution

This is a known issue. Removing the old data from the database resolves the problem.

This can be achieved by installing a fixed package or by correcting the configuration directory path in the cronjob.
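
As a rough illustration only (the exact file paths and options below are assumptions, not taken from the fixed package), the clean-up can also be triggered manually on a controller node by pointing ceilometer-expirer at the correct configuration directory:

ceilometer-expirer --config-file /etc/ceilometer/ceilometer.conf --config-dir /etc/ceilometer/ceilometer.conf.d

If the run completes without errors, the old samples are purged and subsequent API calls should respond within the timeout.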

Cause

Ceilometer collects metering information that grows over time, so a periodic clean-up process is required to remove the old data. This is the task of ceilometer-expirer. However, the cronjob fails because it points to the wrong configuration directory path.

The following can be seen in the logs:

2017-10-17T16:06:07.295754+02:00 d00-22-19-0e-2e-8d run-crons[30889]: openstack-ceilometer-expirer.cron returned 1

Therefore Ceilometer API calls time out, as the amount of accumulated data is too large to be processed within the defined 60-second timeout period.

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

7022318: Horizon resource usage showing just glance meters

This document (7022318) is provided subject to the disclaimer at the end of this document.

Environment

SUSE OpenStack Cloud 6

Situation

In the Resource Usage tab in Horizon, only Image (Glance service) meters are retrieved. CPU, RAM, network and disk usage details are not displayed.

Resolution

Manually increase the “default_api_return_limit” ceilometer configuration option value by creating a custom /etc/ceilometer/ceilometer.conf.d/101-ceilometer-api.conf configuration file on each controller node with the following contents:

[api]

default_api_return_limit = 1000
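
After creating the file, restart the Ceilometer API service on each controller node so the new limit takes effect. The service name below is an assumption for SUSE OpenStack Cloud 6 and may differ in your deployment:

systemctl restart openstack-ceilometer-api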

Cause

The Ceilometer admin dashboard in Horizon doesn’t specify a limit for the number of entries returned by Ceilometer API calls, so the default_api_return_limit configuration option limits the number of returned entries; on certain systems its default value may be too low.

The Ceilometer CLI doesn’t suffer from this limitation. The limit passed with the following command takes precedence over default_api_return_limit and can be used to verify whether Ceilometer has the data available:

ceilometer meter-list -l 5000

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented “AS IS” WITHOUT WARRANTY OF ANY KIND.

Related:

From Down Under – Doubling Down on OpenStack



Dell EMC is excited about OpenStack Summit happening in Sydney. As Australia is known as the “Land Down Under”, I’d like to provide a peek under the covers of our OpenStack strategy.

Since our last release and announcement of the Dell EMC Ready Bundle for Red Hat OpenStack Platform, Dell EMC has strengthened our OpenStack team to build scaled and hardened solutions for communications and cloud service providers. SP use-cases provide the right foundation to enable robust feature sets for large enterprise, higher education, and government workloads.

Our OpenStack strategy is built around integrating physical and virtual infrastructure into platforms best able to address the complex technical and operational challenges facing the SP industry. These include IaaS, CaaS, SaaS, NFV and the increasing role that containers, bare metal and cloud native technologies play. Today, there are four areas I would like to provide updates on:

  1. Dell EMC NFV Ready Bundle for VMware with Integrated OpenStack
  2. Dell EMC Ready Bundle for Red Hat OpenStack Platform
  3. Dell EMC NFV Ready Bundle for Red Hat
  4. Update and Priorities for Dell EMC OpenStack Solutions

Dell EMC NFV Ready Bundle for VMware with Integrated OpenStack

In September, at MWC Americas, Dell EMC announced the Dell EMC NFV Ready Bundle for VMware. This solution extended Dell EMC’s thought leadership and commitment to the Communications Service Provider (CSP) industry by enabling a fully-integrated platform of Dell EMC infrastructure with VMware vCloud NFV.

Today, we have extended that platform to support VMware Integrated OpenStack (VIO) providing an option for SPs to capitalize on the accessibility of OpenStack by using the open APIs to integrate with VMware infrastructure. This carrier grade Ready Bundle provides a strong foundation for SPs to reduce time to service while offering multi-tenancy, dynamic scalability, high-availability, integrated containers support and In-Service-Software-Upgrade.

Dell EMC Ready Bundle for Red Hat OpenStack Platform

At the same time, we’re announcing an update to the Dell EMC Ready Bundle for Red Hat OpenStack Platform to enable support for the Dell PowerEdge 14th generation server family.

Release 10.1 of the Ready Bundle supports the PowerEdge R640 and R740xd rack servers and runs Red Hat OpenStack Platform 10 and Red Hat Ceph Storage 2 software.  The PowerEdge 14th generation servers bring a scalable architecture, intelligent automation, and deeply integrated security, as well as increased performance and density for this Ready Bundle.

This release includes the Dell EMC JetPack Automation Toolkit, which delivers rapid and reliable automated deployment and life-cycle management capabilities for OpenStack in less than half the time of current methods.

More information can be found at dell.com/openstack and all technical documentation, including the Architecture Guides, can be found in the Dell EMC Tech Center community.

Dell EMC NFV Ready Bundle for Red Hat

In addition to the standard platform, Dell EMC is introducing availability of our next Dell EMC NFV Ready Bundle for Red Hat, a pre-integrated and pre-validated joint Dell EMC and Red Hat solution optimized to help CSPs simplify and accelerate production deployments of business-critical virtualized network functions (VNFs) and the operationalization of NFV.

This NFV Ready Bundle provides enhancements on the core platform to include deployment of NFV-specific enhancements in an automated fashion with JetPack. Such enhancements include NUMA pinning, SR-IOV, DPDK support, and Huge Pages – all capabilities that Virtual Network Function (VNF) providers rely on for delivery of their network workloads.
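
As a hedged illustration of how such capabilities are typically consumed in an OpenStack cloud (these are generic Nova flavor extra specs, not JetPack-specific settings, and the flavor name is hypothetical), a VNF flavor might request dedicated CPUs and huge pages like this:

openstack flavor create vnf.large --ram 8192 --vcpus 8 --disk 40
openstack flavor set vnf.large --property hw:cpu_policy=dedicated --property hw:mem_page_size=large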

Update and Priorities for Dell EMC OpenStack Solutions

Going forward, Dell EMC has created a single organization to develop both the strategy and the solutions for OpenStack across our partners. As a result, we are better able to serve our customers, our partners (VMware, Red Hat, Canonical, SUSE and Mirantis) and our markets. As an example, this team just delivered the Dell EMC Canonical Mitaka OpenStack Reference Architectures for PowerEdge and DSS 9000 rack scale infrastructure. Our core objectives are to deliver a simplified and converged platform, continue investment in JetPack as a common automation layer, expand the networking options available to include SDN and 25 Gigabit networking integrated with Dell EMC Open Networking, and create a comprehensive infrastructure assurance suite across the entire stack.

Dell EMC is committed to OpenStack as a platform for multiple workloads, and will continue to invest to bring industry-leading OpenStack solutions that address our customers’ challenges to market. Be sure to visit Dell EMC at OpenStack Summit (Dev & Ops Lounge sponsored by Dell EMC on Level 4) and join many of the speaking engagements in which Dell EMC is participating:

Baremetal Server Management – Like Shooting Redfish in a Barrel 

Automated NFV Deployment and Management with TripleO

Fast and secure Clear Containers on OpenStack. A winner! 

What’s Your Workflow?




Related:

VMAX & Openstack Ocata: An Inside Look Pt. 5: Manage & Unmanage Volumes

Welcome back to part 5 of VMAX & OpenStack Ocata: An Inside Look! In my last post I covered snapshots and backups in Cinder using VMAX; if you would like to see that blog article again, please click here. Today we will look at managing and unmanaging VMAX volumes in OpenStack Ocata.

Manage Volumes

Managing volumes in OpenStack is the process whereby a volume which exists on the storage device is imported into OpenStack to be made available for use in the OpenStack environment. For a volume to be valid for managing into OpenStack, the following prerequisites must be met:

  • The volume exists in a Cinder managed pool
  • The volume is not part of a Masking View
  • The volume is not part of an SRDF relationship
  • The volume is configured as a TDEV (thin device)
  • The volume is set to FBA emulation

For a volume to exist in a Cinder managed pool, it must reside in the same Storage Resource Pool (SRP) as the back end which is configured for use in OpenStack. For the purposes of this article, my configured back end will be using the Diamond service level with no workload type specified. The pool name can be entered manually, as it always follows the same format:

Pool format:

<service_level>+<workload_type>+<srp>+<array_id>

Pool example 1:

Diamond+DSS+SRP_1+111111111111

Pool example 2:

Diamond+SRP_1+111111111111

Values

service_level – The service level of the volume to be managed

workload_type – The workload type of the volume to be managed

srp – The Storage Resource Pool configured for use by the back end

array_id – The numerical VMAX ID

It is also possible to get the pool name using the Cinder CLI command cinder get-pools. Running this command returns the available pools from your configured back ends. Each of the pools returned will also have the host name and back end name specified; these values are needed in the next step.

[Screenshot: output of the cinder get-pools command]
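
For reference (a sketch of the naming convention rather than actual command output), the pool name returned takes the form host@backend#pool, which is exactly the <host> value passed to cinder manage in the next step, for example:

demo@VMAX_ISCSI_DIAMOND#Diamond+SRP_1+111111111111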

With your pool name defined, you can now manage the volume into OpenStack using the CLI command cinder manage. The bootable parameter is optional; if the volume to be managed into OpenStack is not bootable, leave this parameter out. OpenStack will also determine the size of the volume when it is managed, so there is no need to specify the volume size.

Command Format:

cinder manage --name <new_volume_name> --volume-type <vmax_vol_type> --availability-zone <av_zone> [--bootable] <host> <identifier>

Command Example:

cinder manage --name vmax_managed_volume --volume-type VMAX_ISCSI_DIAMOND --availability-zone nova demo@VMAX_ISCSI_DIAMOND#Diamond+SRP_1+111111111111 031D8

[Screenshot: output of the cinder manage command]

After the above command has been run, the volume will be available for use in the same way as any other OpenStack VMAX volume.
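
As a quick sanity check (assuming the example volume name used above), the newly managed volume can be inspected like any other Cinder volume:

cinder show vmax_managed_volume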



Managing Volumes with Replication Enabled

Whilst it is not possible to manage volumes into OpenStack that are part of an SRDF relationship, it is possible to manage a volume into OpenStack and enable replication at the same time. This is done by having a replication-enabled VMAX volume type (discussed in pt. 7 of this series); during the manage volume process you specify the replication volume type as the chosen volume type. Once managed, replication will be enabled for that volume.
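
For illustration only (a sketch; <vmax_replication_vol_type> is a placeholder for the replication-enabled volume type described in pt. 7), the command takes the same form as a regular manage operation, with the replication volume type specified:

cinder manage --name <new_volume_name> --volume-type <vmax_replication_vol_type> --availability-zone <av_zone> <host> <identifier>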



Unmanaging a Volume

Unmanaging a volume is not the same as deleting a volume. When a volume is deleted from OpenStack, it is also deleted from the VMAX at the same time. Unmanaging a volume is the process whereby a volume is removed from OpenStack but remains available for further use on the VMAX. The volume can also be managed back into OpenStack at a later date using the process discussed in the previous section. Unmanaging a volume is carried out using the cinder unmanage CLI command:

Command Format:

cinder unmanage <volume_name/volume_id>

Command Example:

cinder unmanage vmax_test_vol

[Screenshot: output of the cinder unmanage command]

Once unmanaged from OpenStack, the volume can still be retrieved using its device ID or OpenStack volume ID. Within Unisphere you will also notice that the ‘OS-’ prefix has been removed; this is another visual indication that the volume is no longer managed by OpenStack.

[Screenshot: the unmanaged volume in Unisphere, shown without the ‘OS-’ prefix]

What’s coming up in part 6 of ‘VMAX & OpenStack Ocata: An Inside Look’…

In the next part of the ‘VMAX & OpenStack Ocata: An Inside Look’ series we will be looking at OpenStack Consistency Groups. Not to be confused with VMAX Consistency Groups (which are an SRDF feature), OpenStack Consistency Groups allow snapshots of multiple volumes in the same consistency group to be taken at the same point in time to ensure data consistency. See you then!

Related: