Artificial Intelligence is just planning in sheep's clothing. It's just modelling worlds to find "paths" to a solution to a problem or challenge, and we do that using "<F, A, I, G>".
F - "fluents", facts about the world that are true (often in a binary sense: things are true or false)
A - actions, a set of transformations that change the world's state. Applying an action usually means modifying fluents: an action called "turn_on_light_switch" might make the fluent light_switch_on true and make the fluent light_switch_off false. (Yeah, it's honestly excessive and even redundant to have 2 fluents that are literal opposites of each other; you could do it for a demo, but the more fluents you have, the more variables your planner has to run through to try and find a goal. If a planner is made to time out when it can't find a goal within a fixed period, you won't get any plan at all.)
I - initial state. The set of all fluents that are true at the start of the search.
G - goal state. The fluents we want to be true at the end of applying a sequence of actions. The goal does not have to describe the whole world! It just needs to be contained in the state of the world after our actions have been executed.
<F, A, I, G>, that's all that matters to a planning model. It neither knows nor cares how your plan actually gets carried out; that's your logistical headache.
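If you want the whole <F, A, I, G> thing on one screen, here's a rough Python sketch of the light-switch example with a plain breadth-first forward search. The fluent names, action names, and the little plan() helper are all made up for illustration; this isn't any real planner's API, just the idea.

```python
# A minimal sketch of <F, A, I, G> forward-search planning.
# All names here are illustrative, not from any real planning library.
from collections import deque

# F: fluents, facts that can be true or false
FLUENTS = {"light_switch_on", "light_switch_off"}

# A: actions as (name, preconditions, add-effects, delete-effects)
ACTIONS = [
    ("turn_on_light_switch",
     frozenset({"light_switch_off"}),   # must currently be off
     frozenset({"light_switch_on"}),    # becomes on
     frozenset({"light_switch_off"})),  # no longer off
    ("turn_off_light_switch",
     frozenset({"light_switch_on"}),
     frozenset({"light_switch_off"}),
     frozenset({"light_switch_on"})),
]

# I: initial state, the fluents true at the start
INITIAL = frozenset({"light_switch_off"})

# G: goal, a set of fluents that must be contained in the final state
GOAL = frozenset({"light_switch_on"})


def plan(initial, goal, actions):
    """Breadth-first search over world states; returns a list of action names."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:          # goal only needs to be a subset of the state
            return path
        for name, pre, add, delete in actions:
            if pre <= state:       # action is applicable in this state
                new_state = (state - delete) | add
                if new_state not in seen:
                    seen.add(new_state)
                    frontier.append((new_state, path + [name]))
    return None                    # searched everything, no plan found


print(plan(INITIAL, GOAL, ACTIONS))  # ['turn_on_light_switch']
```

Note how the planner never asks what a light switch is; it just shuffles sets of fluents around until the goal set fits inside the current state.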
All of this is a long-form setup for a joke: studying for my AI midterm, I've spent the last 3 weeks running around saying FAIG over and over, and I sound like an American pronouncing "fag"
me after a long long day of working on planning problems: anyone want to take a FAIG break