AI makes answers cheap – outcomes are still to be earned
Simon Vumbaca, March 2026
Synopsis: AI can give you a beautiful answer to almost anything. That’s the seduction: it speaks in our tone, organizes our thoughts, and offers a confident path forward—so convincingly that we confuse having an answer with having a result.
The AI Paradox: Output Isn’t Outcomes is a contemplation on that widening gap. AI can multiply what we produce—messages, plans, content, options, decisions—without improving what we build, heal, learn, or become. Because outcomes still live in the real world: in effort, time, relationships, trade-offs, and consequences. And while AI can feel like a modern deus ex machina, it can also soften common sense—making polished output seem like proof, and certainty feel like truth.
AI makes answers cheap—outcomes are still earned.
This piece isn’t anti-AI. It’s pro-discernment. A reminder that AI can be a powerful guide, but it cannot replace the human responsibilities that make results real: judgment, context, skepticism—and the willingness to own what happens next.
We are living in an era of convenience. We increasingly trade depth of attention for convenience and call it evolution. Lately, almost everything we call innovation or progress has as its primary goal to bring us more convenience, to make things easy. In business forums, the phrase “What problem do you solve?” is trending more than ever. And so we agree that doing things with focus, with process, the way we used to do them, is now almost automatically classified as inconvenient. With that in mind, it is only natural that the big, exciting story is how AI output can make anything possible and convenient for us. The last frontiers of the humanly laborious are vanishing, systematically replaced by automated protocols. We multiply what we could do to make things better, faster, and bigger, not what we actually do or become. Let me explain.
THE AI OUTPUT
AI generates an output based on elaborate algorithms and data. The data comes from a variety of sources, depending on the model. The petitioner inputs a specific ask, and the AI computes the output in response. High volumes of data analysis allow the AI to provide responses in a very defined and specific vacuum. Hopefully, the data compared is accurate, relevant, and vetted.
Where AI output has made a great impact is in the way it presents itself: mirroring the tone of the petitioner, making the entire interaction feel relatable and conversational, almost human. The fact remains that the output is the compilation of a match between the ask and the ideal outcome, ideal based on a series of elements and factors refined by the petitioner. The AI’s tone and output will depend on how one presents the ask and on the question’s context. Clever stuff!
Now, just for fun, take the time to ask your AI the same question in several very different ways (positive and open-ended, then negative and closed-ended, for instance), and the probability is that you will get very different outputs, with very different recommended paths to make it real. The more you detail the paths, the more defined and divergent one answer will be from the other. Why does this matter to us? Because it should help us understand that the real value of the output we receive is guidance, not an absolute.
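For readers who want to run that little experiment programmatically, here is a minimal sketch, assuming the OpenAI Python client and a placeholder model name (both are illustrative assumptions, not a recommendation of any particular provider). The same underlying question is framed positively and negatively, sent to the same model, and the two answers can be compared side by side.

```python
# Minimal sketch: the same question framed two ways, sent to the same model.
# Assumes the OpenAI Python client (>= 1.0) and an API key in the environment;
# the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "What is the best way for me to run 100m in under 10 seconds?",      # open, positive framing
    "Isn't it true that I will never run 100m in under 10 seconds?",     # closed, negative framing
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print(response.choices[0].message.content)
    print("---")
```

The point is not the specific model or library: framing alone is usually enough to produce noticeably different guidance.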
The good news is that the process to produce the output is certain and programmable.
Many people were excited by news published just before 2026: AI resolved an impossible mathematical equation (Meta’s version). This equation was so complex that in the last 100 years no one had resolved it, nor come near to resolving it. The resolution proposed by the AI, by the way, proved as mysterious to human interpretation as the equation itself. In other words: there is an answer no one understands to an impossible equation that no one could understand, save for the AI algorithm that produced the solution. A+ for effort and D– for the result. Let’s pause and reflect on that for a second.
In this news, the use of the word impossible struck me. In fact, the point of the article was precisely to highlight that it had become possible and was now done. Then I thought a more accurate title would have been: AI solves an impossible equation with an impossible-to-understand answer! All of it reminded me of the famous quote that “all is impossible until it is done”. We believe AI makes the impossible possible.
Here is why I disagree. Take, for instance, running 100m in under 10 seconds. Impossible until done.
AI output will tell us what we should do and how we should run to cover 100m in under 10 seconds. AI will present it in achievable steps. Though running 100m in under 10 seconds is likely an impossible task for most petitioners, the output will be clear, trusted and, on the face of it, possible. Having that knowledge, we now possess the “secret” information. A deus ex machina of sorts.
AI output alone does not create the outcome, however precise and credible it is. Let’s be honest: that output alone will not make one run 100m in under 10 seconds. We still have to physically train until we can finally run the 100m in under 10 seconds. Most of us will never reach that goal and are likely to give up, comforted by an improvement in our running time.
That leads to another important contemplation. We seem to trust AI. Part of the currently observed AI effect is the trust we place in anything AI, even to the point of overriding established common sense, so that the output is accepted almost without challenge. The cited psychological bases for this trust are primarily:
1. Automation bias: “AI is probably right”. When millions of data points connect, humans defer to a system, assuming it possesses knowledge beyond our reach. Even when it’s wrong, we justify the mistake. The trust increases if the human is under time pressure or a high cognitive load. (Parasuraman & Riley, 1997; Mosier et al., 1996)
2. Perceived objectivity: “AI is not emotional or biased”, so it feels instinctively trustworthy. AI feels math-driven: neutral, unemotional, and without a hidden agenda.
3. Fluency effect: AI output “sounds good, authoritative, so it must be true”. AI answers are typically well-written, structured, and decisive. Our brains use “ease of processing” as a test for truth. If it reads cleanly, we feel it’s reliable, authoritative. (This is related to research on processing fluency and truth judgments; e.g., Reber & Schwarz, 1999)
4. Perceived confidence awards credibility. We humans associate confidence with competence, even though this link is imperfect. Many models produce confident language by default. This increases the trust factor as it comes across as expertise.
5. Consistency: no apparent contradictions. Humans have agendas and can change their minds. AI does not appear to do so. We trust the lack of an agenda.
However, this is not what AI itself is telling us. Ask AI the simple question “Should we trust the output of AI?” and it will clearly state that we should not, and that we should use AI outputs only as guidance.
There is more. If the AI output is to be trusted, and if the outcome proves to be wrong, who is at fault?
This is an even harder matter to contemplate. As per the above, we trust the outcome of AI output with little challenge. Yet we would not necessarily trust advice received from a human expert to the same degree, or accept it without scrutiny.
Another curious factor is the sheer amount of AI output. It appears endless, regardless of the question asked. I have yet to see an AI answer that says: sorry, we do not know.
Anyone with real-life experience should have gained the ability to discern. This normally comes with a pinch of common sense. It is the human version of pattern-based data processing. Common sense will dictate that the likelihood of a 54-year-old man, who has not competed in athletics at any significant level in the last 25 years, reaching a sub-10-second 100m run is next to none. Statistically not impossible, but certainly highly improbable.
As AI is data-based, and the data analysis model is part of its DNA, maybe the data pattern-based model of output is the one to trust. It is the closest model to a human way of acquiring common sense.
THE OUTPUT BASED ON A PATTERN MODEL
I recently read an article that disclosed that large language models (LLMs) do not actually generate passwords randomly; instead, they derive results from patterns in their training data. They create an output based on the model that trained them. The article concluded that LLMs do not truly generate strong passwords. Rather, they produce passwords that look strong but are easily predictable because of the patterns used in their generation. A single password generated this way looks very strong, almost impossible to crack, but if we look at the many passwords the same LLM has generated, the model becomes very predictable, and the passwords weaker.
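To make the distinction concrete, here is a minimal sketch in Python. It is not the method from the article; the word list and template below are illustrative assumptions. It contrasts a password assembled from a familiar template, which looks strong but lives in a tiny, guessable space, with one drawn character by character from a cryptographically secure source.

```python
# Illustrative sketch: a "pattern-shaped" password versus a genuinely random one.
# The word pool and the Word+Year+Symbol template are assumptions for illustration.
import secrets
import string

def pattern_style_password() -> str:
    """Looks strong at a glance, but follows a guessable template with a tiny search space."""
    words = ["Dragon", "Sunshine", "Winter", "Harbour"]   # small illustrative pool
    word = secrets.choice(words)
    year = secrets.choice(["2024", "2025", "2026"])
    symbol = secrets.choice("!@#")
    return f"{word}{year}{symbol}"   # e.g. "Dragon2025!" - a well-known pattern

def secure_password(length: int = 16) -> str:
    """Each character drawn independently from a CSPRNG; no template to learn."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print("Pattern-style:", pattern_style_password())
    print("Secure       :", secure_password())
```

The first kind is what a pattern-trained generator tends to converge on; the second offers no template for an attacker, or another model, to learn.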
As developers increasingly use AI to write most of their code, this creates patterns that make the output very predictable, and what was meant to be safe becomes far less safe.
The volume game of patterns is very strong in a single focused vacuum, but not in its wider deployment.
So, after all, we still need to be extremely careful with the AI output.
THE OUTCOME BASED ON AI
Well, when all the above is said, all that remains is for it to be done. The outcome is the real reason we petition AI for an output. The idea is simple: if we apply the output that AI presents, we will materialise the outcome we wanted to reach. Is it really so?
Very often, many factors influence an outcome. Deviation from a plan is inevitable. In strategy, there is a constant need to reassess and re-evaluate, as minor variables may have an unforeseen effect, immediate or further down the path. Either way, it has to be assessed and adapted to. We could ask AI to adjust the output to every single new variation, dividing the process into smaller bites. We could insert more and more output data points. This will not affect the outcome. Not until the petitioner has actually taken real action accordingly.
AI output does not guarantee the outcome. The outcome results from doing, and doing only. So who should be liable if the output recommends a direction that, once deployed, creates a problem or a liability? Who is at fault? The human who implemented it, or the AI model? Or maybe both? If so, how do we seek AI model liability?
Recently, this very scenario became a reality. West Midlands Police relied on AI-generated output to assess a specific question: whether the fans of a specific visiting football team posed a risk. The AI said yes, likely, maybe. And the police stopped the fans of that team from travelling, or at least advised the other authorities to do so. The matter escalated to the highest level of government.
The report that ensued stated: “The report finds that West Midlands Police were overly reliant on inaccurate and unverified information for decision-making that proved wholly inadequate to stand up to subsequent scrutiny. Evidence that supported pre-held narratives was readily accepted, while contradictory evidence from authoritative sources was seemingly ignored.
A lack of due diligence meant that failures in evidence gathering went unnoticed and unaddressed, even in the face of scrutiny by a Parliamentary Committee. The evidence used to assess the threat level posed by Maccabi fans was partly based on false information generated by AI that gave a misleading picture of the violence around their fixture with Ajax in Amsterdam.” (https://committees.parliament.uk/committee/83/home-affairs-committee/news/212026/ai-used-to-reinforce-false-narratives-in-maccabi-fan-ban-report-finds)
IN CONCLUSION
Convenience gives you speed; results demand actions and responsibility.
When convenience replaces responsibility, you don’t get better outcomes; you get faster excuses. Based on my Contemplationist observation, provided we understand and use the output as guidance and as optionality with which to challenge other models, we are not at risk. Relinquishing the entirety of our actions to the output, hoping for the defined outcome, is to date still a risky business. And there lies the paradox. AI did not promise us any outcome. We can decide to use the proposed AI output. That output is based on data points, and those data points on probabilities and pattern models of past events that a system has ingested and processed. We want to believe this should help us achieve our outcome faster and with more apparent in-depth knowledge, even if it is not our knowledge but simply a produced output. Even if we use the best navigation software, we still need to drive the car, if only to avoid the unexpected. With driverless cars, we are relinquishing our decision-making model for a statistical, data-driven pattern model. We know it is not reliable once scaled.
The real AI paradox lies precisely there. AI never promised results. It offers probabilities, patterns, and checklists: useful, often brilliant, but still only instructions. The resolution is not in the response; it is in the real-world decisions we make after it. It is we, the humans, who granted AI the power it now seems to have claimed in the collective consciousness. AI did not ask for it. How could it? One remembers that power comes both from being given and from being taken. Maybe the best way to avoid the paradox, and to use AI outputs as intended, is for us to be more critical of what we humans can realistically achieve. That means asking ourselves basic questions such as “So what?”, “Is it within my current capabilities?”, or “Is this incremental to my actions?” before adopting the output. At best, AI is a compass, not a chauffeur. Not yet, anyway.




