AI is a complex and mysterious topic, so stay with me as we decode what’s going on. First of all, there are many disparate, misleading, and confusing words being used to describe AI at different levels today. This irritating situation suggests that unemployed English literature majors have found work in the marketing departments of semiconductor makers and software companies. So, let’s untangle the mess that these grandiloquent catachrestic bards have created with simple engineering principles, definitions, and examples.
Consider AI as the initial point of a divergent tree diagram (one-to-many), with branches coming off that point. The first branch on the left is Frozen-AI, and that’s just another word for the expert systems we heard about in the 1980s. Frozen-AI is the knowledge of an expert about some process, reduced to software. A Frozen-AI system does not learn; it does not execute the process faster or more efficiently no matter how many times it runs. Take tax return software as an example: you type in the numbers, and the software puts the numbers in the proper places on the proper forms and does the math. That’s Frozen-AI.
The next branch, on the right, is machine learning (ML). From that branch, we get two other branches: shallow-AI, also known as Narrow-AI or ANI (artificial narrow intelligence), and deep learning (DL).
Let’s use an industrial application to show how shallow-AI works: making peanut butter. Assume that we have an AI machine that we need to teach. We turn the dials and flip the switches to grind up one ton of peanuts into powder, and add certain amounts of preservatives, sugar, and oil to the vat. Then we heat it up and stir it. The machine watches (i.e., records) what the operator does and learns how to do it. After we run several batches, the machine knows exactly how to make peanut butter. Shallow-AI systems deal with a few simple variables and only need to see a few examples to master the task.
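The peanut butter example above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of a shallow learner: the batch numbers below are invented, not real peanut-butter parameters, and "learning" here is nothing more than averaging the few recorded operator runs.

```python
# Hypothetical sketch: a shallow learner masters a task from a few examples.
# Each recorded operator batch: (tons_of_peanuts, sugar_lbs, oil_gal, temp_F)
operator_batches = [
    (1.0, 40.0, 12.0, 170.0),
    (1.0, 41.0, 11.5, 172.0),
    (1.0, 39.5, 12.5, 169.0),
]

# A shallow model: average the few examples to learn the setpoints.
n = len(operator_batches)
learned = [sum(col) / n for col in zip(*operator_batches)]
print(learned)  # the machine's learned recipe: peanuts, sugar, oil, temperature
```

Three batches are enough for the machine to settle on a recipe, which is the point: a shallow-AI task has so few variables that a handful of examples pins them all down.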
DL is a process that contains thousands or millions of variables, and needs thousands or millions of examples to master a task. Let’s take the Frozen-AI tax return software and turn it into a DL machine. After the machine completes your tax return, it says that you will have a cash flow problem in 30 days, because your wife has been writing checks to a divorce lawyer from her private checking account, and she’s about to dump you. The DL machine went out on the internet, looked into your accounts, gathered all the data about your complete financial situation, evaluated it, and then gave you accurate predictions about your future along with your tax return. That’s DL.
Another example of DL is speech recognition with language translation. Google just released their new Pixel Buds, which link to their Pixel smartphone via Bluetooth. The phone then links to Google Translate software. Two people can converse in different languages at the same time and understand each other: the Pixel setup recognizes each language and translates it into the native language of the listener. Both speech recognition and language translation have thousands of variables and need thousands of examples to become proficient.
The guts of AI is the neural network (NN), also called the artificial neural network (ANN). A physical neural network (PNN) is made up of cells connected to many other cells, like the neurons in our brains. These cells do not hold zeroes and ones like a computer. They contain values between zero and one that can bias the other connected cells. There are 27 different topologies identified for neural networks today: feed-forward, deep feed-forward, convolutional, variational, and a bunch of other types that I can’t explain here. It’s best for both of us if you view the diagrams and read about them: <https://semiengineering.com/using-cnns-to-speed-up-systems/>. However, software algorithms running on multicore processors and GPUs, with parallel execution paths, are the basic platform today. These algorithms replace the cells with zeroes and ones, and then calculate the answer.
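The cell behavior described above can be shown in a few lines. This is a minimal sketch of one neural-network layer, with made-up weights and inputs: each cell holds a value between zero and one, and biases the cells it connects to through a weighted sum squashed back into that range.

```python
import math

def sigmoid(x):
    # squashes any number into the (0, 1) range a cell can hold
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # each output cell = squashed weighted sum of all input cells
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

cells = [0.2, 0.9, 0.5]                 # input cell activations, each 0..1
weights = [[0.4, -0.6, 0.1],            # connections into output cell 0
           [0.7, 0.2, -0.3]]            # connections into output cell 1
out = layer(cells, weights, [0.0, 0.1])
print(out)  # two new cell values, each between zero and one
```

No zeroes and ones anywhere: the cells pass fractional values around, which is exactly what the software algorithms on multicore processors and GPUs end up simulating with ordinary binary arithmetic.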
How do AI algorithms work? Think of it like a spreadsheet (actually, it’s called a matrix in AI jargon). You bring in your bits (examples) and put them in column 1. Each bit in column 1 is assigned a value, called a weight, and those are put in column 2. The weight of each bit is compared to the weights of the four or five bits around it (lots of math involved here), and those integrated weights are put in column 3. Those weights are integrated with the ones around them and put in column 4, and on and on. The answer shows up in the middle of column 10. The process looks like a convergent tree diagram (many-to-one) when you are done.
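The column-by-column process above can be sketched as a toy program. This is an illustrative assumption, not a real AI algorithm: the "integration" here is just blending each weight with its four neighbors, repeated once per column, so the many input bits converge toward one answer.

```python
# Toy version of the "spreadsheet" (matrix): each pass blends every value
# with its neighbors, so many inputs converge toward a single answer.
def integrate(column, radius=2):
    # blend each value with up to `radius` neighbors on each side
    out = []
    for i in range(len(column)):
        window = column[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

col = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0]  # column 1: the bits
for _ in range(9):            # columns 2 through 10
    col = integrate(col)
answer = col[len(col) // 2]   # the answer in the middle of column 10
print(round(answer, 3))
```

After nine passes the values have pulled together: the spread across the column shrinks toward a single number, which is the many-to-one convergence the tree diagram describes.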
What are the most pressing applications in the military for using AI? We’ll start with image processing and facial recognition. We have thousands of hours of intelligence video and still pictures from unmanned aerial vehicles (UAVs), intelligence aircraft, and satellites, but not enough image analysts to view it and report their findings. Movidius released their Neural Compute Stick back in April; it plugs into a PC’s USB port and can do both image processing and facial recognition. In August, Intel introduced their new VPU (Vision Processing Unit), which can be designed into electronics on UAVs and ground vehicles. It can perform four trillion operations per second with very low power consumption. We can use these chips to do image analysis in seconds, without a human in the loop. There are many other chips and algorithms coming to market daily.
Voice recognition and language translation are another major area for AI in military applications. We have thousands of hours of intercepted voice communications from our enemies, but not enough linguists to interpret them and report their findings. We can take the Pixel Buds, Pixel smartphone, and Google Translate, and hook those to a search algorithm that looks for key words like tank, bomb, IED (improvised explosive device), highway, etc. in the conversation. Then, we can eliminate all the chatter and print out only the most important intelligence findings in seconds, without a linguist in the loop.
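The keyword-search step described above is the simple part of that pipeline, and can be sketched directly. The keyword list and the sample transcript are invented for illustration; a real system would sit behind the speech recognition and translation stages.

```python
# Hypothetical sketch: scan a translated transcript and keep only the
# sentences that mention a key word of interest, discarding the chatter.
KEYWORDS = {"tank", "bomb", "ied", "highway"}

def flag_intel(transcript):
    # return only the sentences containing a keyword
    hits = []
    for sentence in transcript.split("."):
        words = {w.strip(",").lower() for w in sentence.split()}
        if words & KEYWORDS:
            hits.append(sentence.strip())
    return hits

transcript = ("The weather is clear today. Move the tank to the highway at dawn. "
              "Dinner is at eight. The bomb arrives by truck.")
print(flag_intel(transcript))
```

Two of the four sentences survive the filter; the small talk is dropped. That is the whole trick: the linguist only ever sees the sentences the machine flagged.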
Another example is space-time adaptive processing (STAP) radar, and cognitive electronic warfare (CEW). Our enemies are very good at jamming our radar signals on the battlefield now. With AI-based STAP, we can overcome their jamming techniques and find the targets. AI-based CEW is just the same problem in reverse: we can better jam or spoof our enemy’s radar.
There are many other areas where AI will infiltrate military systems: autonomous weapons, cyber warfare, better unmanned vehicles, multi-domain warfare, cross-domain warfare, missile defense, etc. And all those platforms will feed information into the mother of all AI machines: The Master War Algorithm. DepSecDef Bob Work started that process with his memo in April 2017, establishing the “Algorithmic Warfare Cross-Functional Team” (Project Maven). All the data, coming from all the platforms and soldiers in the battle, is fed into the War Algorithm machine, and that machine will tell the generals what to do next to defeat the enemy. That’s the plan.
Now, we need to explore artificial stupidity (AS). Billions of AI chips and algorithms are being put into everyday appliances and into the cloud. As a hypothetical example, let’s assume you have an Apple, Google, or Amazon personal digital assistant (Siri, Assistant, or Alexa, respectively). You are nearing retirement and want to know what to do to stay healthy and active. So, you ask the digital assistant. It goes up to the cloud, looks at trillions of pieces of data from all over the world, and runs that through an AI machine. It comes back and tells you to avoid playing golf! Why? Because a very large percentage of golfers die in their late 60s and 70s. What the assistant found is a very high correlation between death and golf. What we really have here is correlation without causation. Old age is killing retired golfers, not golf itself. Read Charles Wheelan’s book, “Naked Statistics,” to understand this kind of AI danger. Einstein is credited with saying (though it was probably Alexandre Dumas or Elbert Hubbard who deserves the recognition): “The difference between genius and stupidity is that genius has its limits.”
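The golf trap above can be reproduced with made-up numbers. In this simulation, age drives both who golfs (retirees golf more) and who dies (risk rises with age), while golf itself causes nothing; the probabilities are invented purely for illustration.

```python
# Made-up simulation of correlation without causation: age is the hidden
# confounder behind both golfing and mortality.
import random

random.seed(0)
people = []
for _ in range(10000):
    age = random.randint(30, 90)
    plays_golf = age > 60 and random.random() < 0.5   # retirees golf more
    died = random.random() < (age / 200)              # risk rises with age only
    people.append((plays_golf, died))

golfers = [died for golfs, died in people if golfs]
others = [died for golfs, died in people if not golfs]
print(sum(golfers) / len(golfers))   # golfer death rate
print(sum(others) / len(others))     # non-golfer death rate, lower
```

The golfer death rate comes out well above the non-golfer rate even though the code never makes golf dangerous. A naive AI machine mining this data would tell you to quit golf; old age is the real culprit.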
So, just understand that we are in the infancy of AI development, and expect to see a lot of AS coming out of the machines for a while. The opposing view is that the more data you have, the better the AI machine learns. These machines can look at billions of pieces of data (examples) in a few seconds. Statistically, that says AI machines will get very good, very fast. Ultimately, it depends on what you ask the AI machine, and how much valid data you have on that topic.
Now you know a little about AI, how the military will use it, and where we are on the curve. What you really need to consider is how and where AI fits into the kill chain (find, fix, fire, finish, feedback). Image analysis and language translation fit into the find and fix phases. Other implementations of AI will go into the fire, finish, and feedback phases, once AI hardware and software are more stable and reliable. In fact, every military platform, test, study, analysis, review, weapon, budget, program, and strategy fits somewhere in the kill chain. Once you look at all the military news and announcements through the eyes of the kill chain, confusion goes away and everything makes sense. That’s the topic of our next episode.