Neither Too Much nor Too Little: A “Touch Base” on the Current State of AI

February 23, 2026

Motivated by the many comments — some fearful, others excessively enthusiastic — about artificial intelligence, I set out to “touch base”: to ground the discussion with a personal perspective on this tool which, no matter how useful or impressive it may seem, is still just that — a tool.

It is not magic. It is not omniscient. Nor is it the end of human work.

It is technology. And like any technology, it must be evaluated with judgment.


How Is It Different from Previous Tools?

A calculator, under normal conditions, does not fail. If it does, there is a bug — you fix it and move on. Its behavior is deterministic.

AI, by contrast, fails probabilistically. It operates on statistical predictions and pattern matching, not mathematical certainty. It may produce correct but mediocre answers, or be confidently wrong.

This makes it an inherently uncertain and sometimes erratic tool.

From this follows an important conclusion: total dependence on AI is conceptually flawed.


How Does It Integrate into Daily Work?

In my recent experience, I used AI while developing a couple of medium-complexity gems that required:

  • Precision algorithms
  • Approximation algorithms
  • A binding exposing calls from the C GD library to Ruby
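To give a sense of the class of problem involved, here is an illustrative approximation algorithm in Ruby (Newton's method for square roots). This is a throwaway sketch of the kind of routine described above, not the actual gem code:

```ruby
# Illustrative approximation algorithm: Newton's method for square roots.
# Each iteration averages the guess with x / guess, converging quadratically.
def newton_sqrt(x, tolerance: 1e-10)
  raise ArgumentError, "x must be non-negative" if x.negative?
  return 0.0 if x.zero?

  guess = x / 2.0
  guess = 1.0 if guess.zero?
  # Refine until the squared guess is within the tolerance of x.
  guess = (guess + x / guess) / 2.0 while (guess * guess - x).abs > tolerance
  guess
end

puts newton_sqrt(2)  # close to Math.sqrt(2)
```

Even a routine this small illustrates the supervision problem: an AI-suggested version may look identical yet mishandle edge cases such as zero or negative input.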

I tested multiple tools from the current ecosystem:

  • Gemini
  • GPT
  • Qwen
  • Leonardo
  • Microsoft solutions (Copilot, among others)

The overall result can be summarized as follows: useful, yes; reliable on their own, no.

They work well as assistants, poorly as substitutes.

In practice, AI-generated code is rarely usable directly or suitable for production. Correcting, debugging, and adapting it is often more labor-intensive than writing the solution from scratch, so in many cases the most efficient option is simply to discard it.

However, the interaction is valuable for brainstorming, exploring alternative approaches, and early prototyping, where the cost of error is low and iteration speed matters more than correctness.


A real-world example of why delegating critical decisions to probabilistic systems without human control remains a bad idea.

Errors don’t just happen — they can be devastating.


Neither Too Much nor Too Little

If I had to describe the phenomenon from a market perspective, I would say AI is to science and engineering what fast food is to nutrition: accessible, quick, standardized… but low in nutritional value and flavor if consumed as a sole diet.

This should place us in a deeply critical position. The tool demands judgment.

If an algorithm — with finite and verifiable variables — must be exhaustively supervised before use, imagine the risks in far more complex domains such as psychology, medicine, or legal advice.

Moreover, the errors are measurable. Reviewing a long conversation often reveals recurring patterns such as:

  • Repeated incorrect answers
  • Implicit changes in assumptions or parameters
  • Loss of context
  • Argumentative loops or contradictions

The Case of Images

AI-generated images were once a viral phenomenon. Today, many people actively avoid them.

The reason is simple: the human eye has learned to detect artificial patterns. When an image “looks AI-generated,” the content often loses credibility and may not even be read.

This is the result of a massive wave of low-value content produced for SEO or engagement, with little intention of delivering genuine knowledge or meaningful communication.

Signal has been drowned in noise.


Juniors + AI: A Delicate Combination

Regardless of philosophical stance, AI undeniably enables the completion of medium-complexity tasks that previously required more time or specialized knowledge.

Where we once requested a calculation, we now request a full algorithm or a finished image.

The issue is not capability but reliability — errors and hallucinations. These factors require rigorous processes such as:

  • Continuous critical observation
  • Systematic testing
  • Thorough debugging
  • Expert validation before any production deployment

Without these safeguards, risk increases significantly.
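The "systematic testing" safeguard can be as simple as pinning AI-suggested code to hand-verified cases before trusting it. A minimal sketch, using a hypothetical helper `clamp_byte` that an AI might propose:

```ruby
# Hypothetical AI-suggested helper: clamp an integer into the 0..255 byte range.
def clamp_byte(n)
  n.clamp(0, 255)
end

# Systematic check against hand-verified inputs and expected outputs.
CASES = { -5 => 0, 0 => 0, 128 => 128, 255 => 255, 300 => 255 }

CASES.each do |input, expected|
  actual = clamp_byte(input)
  abort "clamp_byte(#{input}) returned #{actual}, expected #{expected}" unless actual == expected
end

puts "all #{CASES.size} cases passed"
```

The point is not the helper itself but the habit: the expected outputs come from a human, not from the model, so the check remains meaningful even when the model is wrong.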

For junior professionals, the danger is greater: they may receive seemingly correct solutions without having the conceptual tools to evaluate their validity.


So Has AI Failed?

Not at all.

What is happening is much simpler: every technology must be adopted with judgment and critical thinking.

There is an old metaphor — “trinkets and shiny beads” — describing the uncritical adoption of something new simply because it is new. AI runs that exact risk.

Asking AI to convert RGB to HEX, draft a letter, or propose a medium-complexity algorithm is unquestionably an improvement over previous tools.
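The RGB-to-HEX case mentioned above is exactly the kind of small, verifiable task where the human auditing is trivial. A minimal Ruby sketch:

```ruby
# Convert RGB components (0..255 each) to a "#RRGGBB" hex string.
def rgb_to_hex(r, g, b)
  [r, g, b].each do |c|
    raise ArgumentError, "component out of range: #{c}" unless (0..255).cover?(c)
  end
  format("#%02X%02X%02X", r, g, b)
end

puts rgb_to_hex(255, 99, 71)  # tomato red -> "#FF6347"
```

Because the output can be checked by eye in seconds, the cost of an AI error here is negligible, which is precisely why this class of task benefits from the tool while complex domains do not.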

But this advantage comes with a responsibility: human auditing.

We can audit an algorithm if we possess the relevant technical expertise. We cannot necessarily evaluate, with equal confidence, a medical, mechanical, electronic, or linguistic recommendation.

For this reason, using AI as a direct production-ready solution is a decision I categorically reject at this time, due to the risks posed by failures in complex systems.

In its current state, the technology calls for caution rather than blind adoption driven by narratives that it is inevitable and will replace everything.


Final Thoughts

The purpose of this article is not to reject AI, but to cool down the discussion.

We live in a technological whirlwind where adopting tools without reflection is easy. “Touch base” means pausing, evaluating calmly, and understanding where we truly stand.

AI will evolve and improve. But predictions that entire professions will disappear or that human activity will be fully replaced are, at present, premature.

Conversely, it would also be imprudent to place entire layers of science and engineering in the hands of systems that still produce inconsistent results.

In many cases, the time spent verifying AI output exceeds the time required to perform the task manually from the outset.

Neither apocalypse nor utopia.

Neither too much… nor too little.
