The AI View From Above

It likely isn’t a surprise to many of you that executives like AI. A survey shows that 74% of executives have greater confidence in AI-generated insights than in advice from colleagues or friends. At the board level, even more (85%) favor AI-driven advice.

That’s amazing to me, and while I might think this is too much trust to place in these GenAI LLMs, perhaps it’s also partially because they work with too many people who aren’t great at their jobs. Plenty of people skim through data or focus on certain things and miss the details. While an AI can read and summarize a lot, it might not have the context we expect. I tend to be a bit skeptical of AI summaries, often because they don’t weigh the different parts of an article the same way that I do. However, they can be helpful.

Even more interesting, 44% of executives say they would trust a GenAI to override their decisions based on its insights, and 38% would trust AI to make decisions on their behalf. Business decisions based on data, or conclusions drawn from many inputs, are different from producing working code, so I don’t know how accurate these models might be in this context. I do know that I want experienced people reviewing and judging GenAI outputs, and I would not allow an AI to override me without my input.

However, I wouldn’t just discount a GenAI recommendation. I tend to have strong opinions, but loosely held. I’ll change if there is evidence or a good argument to do so. It’s possible a GenAI might see things I miss and produce an insight that gets me to change a decision.

What’s a bit scary about the stats from this survey is that many executives see a skills gap in their staff, and their trust in GenAI might lead them to replace or augment existing staff with more GenAI tools. They might expect hiring can be delayed or slowed (or eliminated) with GenAI filling gaps. This might be especially true as many tech companies talk about how GenAI tools are making them more efficient.

That means that tech professionals should consider a few things. First, learn to work with GenAI tools and use them to prove your value to an organization. This includes learning when not to use them. Second, continue to improve your skills to ensure you can judge GenAI results and emphasize that you are still the expert. Lastly, as the technology improves, consider adding some skills in how to train an AI to be a better assistant for you. The more efficient you are, especially with a GenAI helper, the more likely you are to impress executives and managers who are choosing which staff to keep.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.

About way0utwest

Editor, SQLServerCentral
This entry was posted in Editorial.

4 Responses to The AI View From Above

  1. Big Mistake.

    I don’t believe it’s about trust; that’s just the explanation given because the more likely reason looks bad. I believe it is about padding quarterly/annual bonuses by reducing costs, not by improving quality or productivity.

    I use X’s Grok on a near daily basis and I can testify that it is far from perfect; it often fails to read between the lines. A GenAI LLM is a tool and should be treated as a tool and not as the new employee. Any and all results and conclusions that these GenAI LLMs provide should always go through humans. How many times have two or more qualified people looked at the same data set and come up with different conclusions?

    By relying on these GenAI LLMs we are trusting that:

    • The sources they use are solid/reliable, and the net is far from that. I know the net is not their only source, but it is one.

    • The results they produce can read between the lines as humans can.

    I believe this change, like other past changes in business such as offshoring, is about costs: not reducing them so the business remains profitable, but reducing them in order to increase the executives’ bonuses. There is more than one example of the pharmaceutical industry choosing the unethical path because the costs of doing so, even if caught and fined, are less than the profits made.

    Currently the software industry has been working to eliminate ownership so that no one owns software anymore, not even a license to use it, but a time-limited lease or rental. Video gaming is trying to force customers into “always online” even when there is no need for it, all so that there is no longer ownership, just renting/leasing, and so control of the product’s use reverts to the vendor. Imagine buying a car only to find out a few years later that it’s stopped working because it can no longer connect to the car manufacturer’s website and verify you own the car. Sound crazy? If nothing is done to stop them, they absolutely will do this. Car manufacturers have already tried to get away with some of this, like forcing customers into a service agreement if they want the key fob for the car to work.

    Unfortunately, because much of the business world is steered by large, powerful international corporations that have the kind of money and influence to purchase political favors and even legislation to provide protection from competition, we are forced to treat all actions they take as suspicious. I believe the right approach is not to try and ban AI, nor to fully embrace it to where it replaces humans, but to treat it as a tool that empowers the human worker to do more and better, and we may have to enforce that via government legislation. These executives aren’t seeking to improve the quality of their products/services or productivity, but to increase their bonuses, and they will not hesitate to do it, fully ignoring the long-term consequences of their actions.

    If we allow these executives’ desires to come to fruition and they replace most if not all humans with AI-powered robots like those being developed at Tesla, what exactly is it humans will transition to? “Learn to code” won’t work because these things can write their own code. They will be able to do their own maintenance, repair, and upgrades. If human work is being done by AI-powered bots, then exactly who do these companies think their customer base will be? This kind of thing won’t happen overnight, but it will start and complete faster than you’d think, and once that balance is upset and there no longer are customers because all but a very few are unemployed, the executives will bail, sell off all their stock, and take the attitude of “it’s not my problem” as they retire with the wealth they’ve amassed. It will be left to the rest of us to fix the mess, if it even can be fixed at that point. The only way forward is a hybrid where AI is allowed only as a tool wielded by humans.

    Apologies for the length but this is a very hot topic for me.

    • way0utwest says:

      I think most decisions made by corporations are made for money reasons. Usually for execs/investors, and rarely for workers. That might be the charter for most corporations, but there has to be a balance, as any org still needs good people and needs to invest in them to succeed.

      AI is a tool, and some are jumping in quickly, likely because they want to save money. It’s the same reason many companies offshored some part of their business. They want to save money. They often don’t care if quality is lower as well, or at least slightly lower.

      I’m not sure AI does a worse job of coding than many people, especially inexperienced ones, but it can replace a whole lot of them.

  2. “.. but there has to be a balance as any org still needs good people and needs to invest in them to succeed.”

    AMEN!

Comments are closed.