|Product:||Windows Operating System|
|Message:||WinMgmt could not initialize the core parts. This could be due to a badly installed version of WinMgmt, WinMgmt repository upgrade failure, insufficient disk space or insufficient memory.|
Windows Management Instrumentation (WMI) could not start. The startup process creates many Component Object Model (COM) objects, allocates memory, accesses registry keys, etc. A failure in any one of these steps can cause WMI not to start.
To determine why WMI could not start
If the problem is with a COM component, reregister the DLLs.
To reregister the DLLs
for /f %s in ('dir /b /s %windir%\system32\wbem\*.dll') do regsvr32 /s %s
If the problem is insufficient memory, close some programs.
To close programs while WMI is running
If the problem is insufficient hard disk space, free up some space on the system drive.
If you have tried the preceding solutions and WMI still does not start, some files in the %SystemRoot%\System32\Wbem\Repository folder might be corrupted. Correct this condition by restoring the WMI repository. Possible ways to restore the repository are:
Note: If you force WMI to rebuild the repository, all static data or other changes to the repository that are not captured in the original MOF file will be lost. You should keep a copy of the corrupted file in case you need to either restore it or have Microsoft Product Support Services evaluate the corrupted files.
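On Windows Vista/Server 2008 and later, one possible way to restore the repository is the built-in winmgmt service utility, which can verify and rebuild it in place. A minimal sketch, to be run from an elevated command prompt (these switches are not available on older Windows versions):

```bat
rem Check the repository for consistency
winmgmt /verifyrepository

rem Attempt an in-place repair, preserving existing data where possible
winmgmt /salvagerepository

rem Last resort: reset the repository to its initial state
rem (static data not captured in the original MOF files will be lost)
winmgmt /resetrepository
```

As the note above says, copy the %SystemRoot%\System32\Wbem\Repository folder aside before resetting, in case the corrupted files need to be evaluated later.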
By: Gautam Chanda, GPLM DC Networking, HPE
In today’s world of hyper-speed business decisions, where companies must be agile enough to stay ahead of the competition, meet or exceed new market demands, and manage demanding customer expectations, your customer-facing applications will succeed or fail based primarily on their performance.
This puts a tremendous responsibility on network engineers (aka network operators) in enterprises and hyperscale data centers, because most unexplained application performance shortfalls result from an underlying network infrastructure that does not provide the required performance and scale on demand, or was not adequately designed for optimum application performance. Network operators have to ensure that their networks are responsive, “always on,” and capable of meeting the ever-growing demand from the applications they run. Providing operators with deeper instrumentation and telemetry data about the network helps them diagnose network issues, plan and fine-tune the network for improved performance, and make optimal use of network resources.
One of the main causes of these unexplained application performance issues is latency caused by underlying network congestion. Among the causes of network congestion is an elusive type called the “microburst.” As the name suggests, microbursts are sub-second periods of time when major bursts of network usage occur at line rate; they can temporarily overflow the switch buffers and cause packet loss or backpressure.
Traditionally, “congestion” has been associated with switch ports being utilized at close to line rate. In a congestion scenario, packets can be dropped by the switch, or flows may backpressure due to lack of buffer space. However, more recent analysis has uncovered that these microbursts occur more frequently than we might have guessed, and there has been no good way to detect them, leaving network engineers searching for the proverbial “needle in a haystack” when tracking down the causes of unexpected application performance issues in their network.
Typically, these microbursts do not last long enough to be detected by traditional monitoring mechanisms such as SNMP counters or port statistics. This is because traditional tools used to monitor network traffic patterns, such as RMON and SNMP, are based on a polling model where data is typically collected at one-second or longer intervals. What about the events that occur within these polling intervals? With the evolution to 100GbE attachment in the data center, within even a one-second interval a 100GbE interface could go from idle to forwarding over a hundred million packets and back again. In a traditional SNMP/RMON polling model, such a burst can become invisible.
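The arithmetic behind this invisibility is easy to sketch. A short Python example with assumed figures (100 GbE line rate, minimum-size 64-byte frames plus 20 bytes of preamble/inter-frame gap, and a hypothetical 5 ms microburst inside a 1 s polling interval):

```python
LINE_RATE_BPS = 100e9     # 100 GbE line rate in bits per second
FRAME_BYTES = 64 + 20     # minimum Ethernet frame + preamble/IFG overhead
POLL_INTERVAL_S = 1.0     # typical SNMP/RMON polling interval
BURST_S = 0.005           # assumed: a 5 ms microburst at full line rate

# Packets per second at line rate with minimum-size frames (~148.8 Mpps)
pps_line_rate = LINE_RATE_BPS / 8 / FRAME_BYTES

# Packets sent during the burst; the link is otherwise idle
burst_packets = pps_line_rate * BURST_S

# What a per-interval counter reports: average utilization over the poll
avg_utilization = (burst_packets * FRAME_BYTES * 8) / (LINE_RATE_BPS * POLL_INTERVAL_S)

print(f"packets in burst: {burst_packets:,.0f}")
print(f"average utilization seen by polling: {avg_utilization:.1%}")
# During the burst the port ran at 100% and its buffers may have
# overflowed, yet the one-second counter shows only ~0.5% utilization.
```

The buffers overflow at microsecond timescales, while the counter smooths everything to the polling interval, which is why the burst disappears from view.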
Let’s look at the potential business impact of these microbursts in a High-Frequency Trading (HFT) environment:
- In a NetworkingWorld article, Charles Thompson – NI manager of system engineering, stated “When trading floors open at 9:30 am Eastern time, their networks are flooded with a ridiculous number of trades that have been queued up since the night before. To analyze performance issues, network managers often have to break out a one-second period into smaller, microscopic intervals. So, they’ll chop up the one-second interval into 100-millisecond intervals, 10-millisecond intervals, or 5-microsecond intervals for investigations. When you get to a sub-second resolution, it’s referred to as a microburst. It’s a small period of time when a major burst of usage occurred… But, we’ve had many customers requiring 100-microsecond increments, who will take advantage of this drill down capability.”
- An InformationWeek article stated, “A 1-millisecond advantage in trading applications can be worth $100 million a year to a major brokerage firm.”
Networks are critical to the business because they deliver applications and services to the rest of the organization. Networks must offer high performance, low latency, reliability, and security. Network and data center downtime is expensive and impacts business outcomes. By proactively detecting these elusive microbursts, network operators can run their networks at the most optimal performance level.
Learn more about the HPE FlexFabric 5950 100G ToR (top-of-rack) switch. This switch can detect these elusive microbursts by embedding Broadview™ Instrumentation analytics in the switch.
In part 1 of this series we saw what we can do in terms of performance monitoring without any customization in IMC, that is, just by installing IMC and discovering devices. Now let’s look at the customization part.
Here is where the cool stuff begins:
- We can add some performance views based on the data we already collect. For example, here are some trend lines of CPU and memory usage for our core devices:
- We can also add some customised “Real-Time Monitoring” pages (in Performance Management → Real-Time Monitoring). This gives us very quick access to live information for live debugging.
- You may feel frustrated with only a few performance metrics to monitor … Don’t worry, there are about 445 predefined performance indexes; you just need to activate the ones you want to monitor 🙂
So we can, for example, monitor device temperature, interface bandwidth, error packets, QoS … Below is an example of adding temperature monitoring:
And the result:
- And if you need even more … You can add your own global index (anything you want, as long as it can be obtained through SNMP), either in an existing category or in a user-defined one. The only things you need to know are the SNMP Object Identifier (OID) of the data you need and the formula to calculate the performance index. Below we add SFP transceiver Transmitted Power for ArubaOS devices to the predefined category “Transceiver Current Transmitted Power”.
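As an illustration of the “formula” part: transceiver MIBs often report optical power as an integer in hundredths of a milliwatt, while operators usually want dBm. The unit convention below is an assumption for illustration, not the documented ArubaOS OID behavior; check the MIB description of your actual OID. A minimal Python sketch of such a conversion:

```python
import math

def raw_to_dbm(raw_hundredths_mw: int) -> float:
    """Convert a raw SNMP reading in hundredths of a milliwatt to dBm.

    Assumes the (hypothetical) OID returns e.g. 50 for 0.50 mW.
    dBm = 10 * log10(power_mW / 1 mW)
    """
    mw = raw_hundredths_mw / 100.0
    return 10.0 * math.log10(mw)

print(raw_to_dbm(100))           # 1.00 mW -> 0.0 dBm
print(round(raw_to_dbm(50), 2))  # 0.50 mW -> -3.01 dBm
```

In IMC the same arithmetic would be expressed in the index’s formula field rather than in code, but the conversion logic is identical.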
And so we can now add this metric as we usually do with predefined ones 🙂
- Let’s finish with another interesting feature. We can add any of the monitoring indexes configured above to the topology. For example, we can say that for switches we want to display the temperature and that we want to display some interface statistics on the links:
And so when you select a device or a link you should see something like this:
There are many other things you can do, such as adding dashboards, playing with RMON, or configuring scheduled reports to be sent by email … I’ll let you discover them by yourself. We can also use the IMC NTA (Network Traffic Analyser) module to analyse the traffic on our infrastructure more precisely. I promise I’ll be back soon with another blog in this series.
SNMP provides an application with the ability to gather information about network devices and to monitor them. To accomplish this, SNMP defines a set of operations for retrieving and setting data, as well as for monitoring conditions reported by the managed devices. One of these operations is GetBulk, which gives an application the ability to easily retrieve a large amount of data with a single request. This can be particularly useful when retrieving information from the standard SNMP tables. This article describes how to use a new GetBulk API for retrieving data from a table, along with other new functions recently added to the SNMP support on IBM i.
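To make GetBulk’s semantics concrete before diving into the API: a GetBulk request carries a list of OIDs plus two parameters, non-repeaters and max-repetitions. The agent returns one lexicographic successor for each of the first non-repeaters OIDs, and up to max-repetitions successors for every remaining OID, interleaved row by row. The following is a toy Python model of that logic over an in-memory table (not the IBM i API itself, and it omits endOfMibView handling):

```python
from bisect import bisect_right

# Toy MIB view: lexicographically ordered (OID, value) pairs,
# e.g. sysUpTime plus slices of the ifDescr and ifSpeed columns.
MIB = [
    ("1.3.6.1.2.1.1.3.0", 12345),           # sysUpTime
    ("1.3.6.1.2.1.2.2.1.2.1", "eth0"),      # ifDescr.1
    ("1.3.6.1.2.1.2.2.1.2.2", "eth1"),      # ifDescr.2
    ("1.3.6.1.2.1.2.2.1.5.1", 1000000000),  # ifSpeed.1
    ("1.3.6.1.2.1.2.2.1.5.2", 1000000000),  # ifSpeed.2
]

def oid_key(oid):
    # Compare OIDs numerically component by component, not as strings
    return tuple(int(x) for x in oid.split("."))

def get_next(oid):
    """Return the lexicographic successor of oid in the MIB, or None."""
    keys = [oid_key(o) for o, _ in MIB]
    i = bisect_right(keys, oid_key(oid))
    return MIB[i] if i < len(MIB) else None

def get_bulk(oids, non_repeaters, max_repetitions):
    """Toy GetBulk: one successor per non-repeater OID, then up to
    max_repetitions successors for each remaining OID, row by row."""
    out = []
    for oid in oids[:non_repeaters]:
        nxt = get_next(oid)
        if nxt:
            out.append(nxt)
    cursors = list(oids[non_repeaters:])
    for _ in range(max_repetitions):
        for j, cur in enumerate(cursors):
            if cur is None:
                continue
            nxt = get_next(cur)
            if nxt is None:
                cursors[j] = None  # walked off the end of the MIB view
                continue
            out.append(nxt)
            cursors[j] = nxt[0]
    return out

# One request retrieves sysUpTime plus two rows of the interface table:
rows = get_bulk(["1.3.6.1.2.1.1.3",       # non-repeater
                 "1.3.6.1.2.1.2.2.1.2",   # ifDescr column
                 "1.3.6.1.2.1.2.2.1.5"],  # ifSpeed column
                non_repeaters=1, max_repetitions=2)
for oid, val in rows:
    print(oid, val)
```

A plain GetNext walk would need one round trip per varbind; here a single request returns five, which is the efficiency win the article refers to when reading whole tables.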
I have a Cisco 6500 switch on which I want to capture all incoming and outgoing vlan8 traffic. I talked with my networking group and they set me up with the following commands. (These may not be the exact commands, but this was the example I gave them.)
ip flow-export version
ip flow-export destination
ip flow ingress
I am currently capturing this data using Ntop and we are getting a lot of traffic. I see all incoming and outgoing traffic from all vlan8 machines (192.168.8.0/24). However, for any machine that is not in vlan8 but is talking to vlan8, I only see the received traffic from it.
Ex. 192.168.8.10 goes to a website on 192.168.9.20
I only see received traffic from the 192.168.9.20 machine and no sent traffic. Obviously it has sent traffic because 192.168.8.10 received the website.
I just wanted to verify that this is how NetFlow captures data and that everything is working correctly. It kinda makes sense to me that since 192.168.9.20 isn’t in vlan8 it may not get the outbound traffic (even though it sends it to vlan8). Ideally I’d want sent and received traffic from anything that touches vlan8. Thanks.