AI is very good at absorbing and incorporating the meaning of the written word. This is transformative for very smart people, who no longer have to assemble large teams to get anything done; they can now prompt their way to expertise across most domains.
Asking GPT the right questions requires a degree of verbal intelligence, particularly abstract thinking. With it, nearly anything that has been written down can be structured in a way that saves an employee significant time. Instead of spending time training a particular person to do something, you can now train a prompt.
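As a concrete illustration of what "training a prompt" can look like, here is a minimal sketch in Python. The task, the template, and the `call_llm()` function are all hypothetical placeholders, not a prescription for any particular provider or workflow.

```python
# A "trained prompt": a written-down procedure captured once as a reusable template,
# so the model, rather than a newly trained hire, applies it to each incoming case.

INVOICE_PROMPT = """You are handling vendor-invoice intake for the finance team.
Follow these written rules exactly:
1. Extract vendor name, invoice number, amount, and due date.
2. Flag any invoice over $10,000 for manager review.
Return the result as JSON with keys: vendor, invoice_number, amount, due_date, flag.

Invoice text:
{invoice_text}
"""

def call_llm(prompt: str) -> str:
    """Placeholder for a real model API call (OpenAI, a local model, etc.)."""
    raise NotImplementedError("Wire this up to your provider of choice.")

def process_invoice(invoice_text: str) -> str:
    # The same prompt handles every invoice, with no per-person training required.
    return call_llm(INVOICE_PROMPT.format(invoice_text=invoice_text))
```

Once the procedure is written into the template, the marginal cost of "training" another copy of this worker is zero.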
This is going to hit most knowledge workers hard, while elevating a few into something like an NBA of all-star IT professionals. Most of the time, knowledge that is widely written down will be automated away and interpreted by the model. Occasionally, a task will involve tacit knowledge or require reinterpretation; in those targeted cases, the work will be sent back to humans to complete.
This is great for smart people, or anyone with strong abstract-thinking skills: many of the functions they used to perform themselves, they can now train AI to perform indefinitely, or conditionally, based on simple logic.
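To make the "conditional logic" point concrete, here is a hedged sketch of how such routing might look: routine, well-documented work goes to the trained prompt, while anything needing tacit judgment is sent back to a human. The `needs_tacit_judgment` heuristic and the queue names are invented for illustration.

```python
# Conditional routing sketch: automate what is written down, escalate what is not.

def needs_tacit_judgment(task: dict) -> bool:
    # Hypothetical heuristic: anything involving negotiation or an upset client
    # is treated as tacit-knowledge work and kept with a human.
    return bool(task.get("involves_negotiation")) or task.get("client_sentiment") == "upset"

def route(task: dict) -> str:
    if needs_tacit_judgment(task):
        return "human_queue"   # tacit or reinterpretation work goes back to people
    return "ai_pipeline"       # widely documented work is handled by the trained prompt

# A routine documented task is automated; a trickier one is escalated.
print(route({"involves_negotiation": False, "client_sentiment": "neutral"}))  # ai_pipeline
print(route({"involves_negotiation": True}))                                  # human_queue
```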
Those with tacit, unwritten knowledge will see their edge in the labor market grow. Knowing how to handle tough personalities, read body language, and navigate things rarely written about will remain outside the domain of generative AI for the foreseeable future, because the intimacy required to learn these things comes through trial and error, behind closed doors.
In summary, AI will be very good for generalists and for specialists at the top of their fields, but bad for knowledge workers in general. It increases the leverage of intelligence beyond anything that came before, lets smart people opt out of rigid hierarchical HR processes, and generally punishes incumbent positions.