I tend to think game AI is very easy to implement if there's no concern about being fair. For example, in an FPS it is trivial to have an AI that gets a headshot every time. It'd be equally trivial to have a fighting game AI that beats you every single time if it just reads your inputs. I remember an interview with the SSI (maker of Panzer General) developers where they said it would be pretty easy to build an AI that plays at least as well as any human in the PG series. And if you look at a game like Starcraft, the highest-difficulty AI is artificially crippled, either by design or by accident, or it would absolutely clown stomp just about any human player.
Yet if you look at AI in games generally, it's pretty clear that a lot of the time you get the opposite effect: human players can trivially clown stomp an AI that is supposedly on its hardest setting. While it's not productive to ship an AI that never loses, it's pretty strange to see the "tough" AI beaten trivially. One could argue that games today are more complicated, but I don't think the games themselves have become that much deeper compared to the past. It's not like Civ 5 takes 100 times as long to master as Civ 1; the same strategies you used in Civ 1 are still pretty effective in Civ 5. Meanwhile your computer today is easily more than 100 times as powerful as the one that ran Civ 1, so even if games have become more complicated, that doesn't explain why the AI isn't doing better.
I remember reading somewhere that problems become very easy once you have the right tool for them. Division was considered very difficult until long division notation was invented; think about it, how would you do division without that notation? It'd actually be very, very tough! You hear about emergent behavior, genetic algorithms, and neural networks, yet the AI back in Panzer General almost surely used none of those concepts and was still an adequate challenge even for the best human players. It almost certainly just found, for each of its units, the target it could deal the most damage to, and worked through them with a priority list. So maybe newer AIs are simply using the wrong tools. Do you really need a neural network to learn that unit A counters unit B when unit A exists only to counter B? Lurkers, for example, counter M&M in regular Starcraft; that's what they were designed for, and you shouldn't need a neural network to learn it. A player who bought Brood War certainly didn't need to experiment endlessly to figure it out.
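To make it concrete, here's roughly the kind of thing I imagine that old-school AI doing, sketched in Python. This is pure speculation on my part, not how SSI actually wrote it; every name in it (the counter table, the damage formula, the priority field) is invented for the example.

```python
# A guess at a Panzer General-style AI: a hand-authored counter table plus a
# greedy "hit whatever I hurt most" pass over units in priority order.
# All names (Unit, COUNTER_BONUS, expected_damage) are made up for illustration.

from dataclasses import dataclass

# Hand-authored counter knowledge: attacker type -> {defender type: damage multiplier}.
# This is the "lurkers beat M&M" idea written down directly instead of learned.
COUNTER_BONUS = {
    "anti_tank": {"tank": 2.0},
    "fighter":   {"bomber": 1.5},
    "infantry":  {"artillery": 1.5},
}

@dataclass
class Unit:
    name: str
    type: str
    attack: int
    priority: int  # lower number = acts earlier in the AI's turn

def expected_damage(attacker: Unit, defender: Unit) -> float:
    """Base attack scaled by the counter table (1.0 if no special matchup)."""
    bonus = COUNTER_BONUS.get(attacker.type, {}).get(defender.type, 1.0)
    return attacker.attack * bonus

def pick_targets(my_units: list[Unit], enemy_units: list[Unit]) -> list[tuple[str, str]]:
    """For each of my units, in priority order, pick the enemy it damages most."""
    orders = []
    for unit in sorted(my_units, key=lambda u: u.priority):
        if not enemy_units:
            break
        target = max(enemy_units, key=lambda e: expected_damage(unit, e))
        orders.append((unit.name, target.name))
    return orders

# Tiny usage example
mine = [Unit("AT gun", "anti_tank", 6, priority=1), Unit("Rifles", "infantry", 4, priority=2)]
theirs = [Unit("Panzer IV", "tank", 8, priority=1), Unit("Howitzer", "artillery", 5, priority=2)]
print(pick_targets(mine, theirs))  # AT gun shoots the tank, rifles go for the artillery
```

The point is that a hand-authored counter table and a greedy loop already encode the "this unit beats that unit" knowledge for free; no learning machinery is needed for matchups the designers built into the game on purpose.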
Now obviously the hardest part of game AI is making something that is challenging but still beatable, but a lot of games ship an AI where it doesn't even feel like the developers could make it challenging if they tried. You don't want something like TIE Fighter's Super Ace AI, where you can dogfight a single T/D for 30 minutes without landing a hit, but you also don't want your Super Ace AI to put up only as much resistance as the rookie setting.