Evolution in current AI does not imply AGI; will AI take over humanity, and does it even matter?

arjun dhar
5 min read · Nov 15, 2023

I am tired of party conversations about how a machine did something unexpected that it was not designed to do, and how machines can therefore harm humans in unexpected ways. There are always unexpected consequences of any system.

Before we discuss philosophical questions, we must have a sound understanding of the capabilities and stages of AI, and of where this is all going.

Stages of AI

There is a lot of writing out there proposing anywhere from three to ten stages of AI. I disagree with all of these narratives. To summarize: there is a difference between a capability and a stage. A stage, by definition, implies a strict form of separation and a necessary step prior to the next in sequence. If for any reason a stage's precedence cannot be determined, then it isn't really a “Stage”.

There are only two relevant stages:

  1. Artificial Narrow Intelligence (ANI), or Specific AI: where we are today.
  2. Artificial General Intelligence (AGI): where machines have some capability to develop awareness, or a basis of purpose that could make them malicious without any intent being provided by humans.

Why any other stage does not matter

To understand why, let's discuss generic capabilities of AI that already exist, with no clear distinction between the following:

  1. Rule-based: rules defined by humans via algorithms or Domain Specific Languages (DSLs).
  2. Heuristic-based: machine learning (regression, decision trees etc.), which also includes deep learning (artificial neural networks and their variants), reinforcement learning, and more that even I may not be aware of. Also note there is a lot of overlap between these (a minimal contrast of these first two capabilities is sketched after this list).
  3. There are also symbolic logic, probabilistic reasoning, sequential decision-making techniques, planning, hierarchical reinforcement learning, probabilistic programming etc.
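
To make the difference between the first two capabilities concrete, here is a minimal sketch in Python of the same toy task, flagging suspicious transactions, solved both ways. The task, data, thresholds and function names are all invented for illustration, and the heuristic side assumes scikit-learn is available:

```python
# Toy contrast: a hand-written rule vs. a learned heuristic on the same task.
# All data, thresholds and names here are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# 1. Rule-based: a human encodes domain knowledge directly.
def rule_based_flag(amount: float, hour: int) -> bool:
    # Explicit, explainable rule: large transfers, or sizable ones at night.
    return amount > 10_000 or (hour < 6 and amount > 1_000)

# 2. Heuristic-based: a model infers a similar boundary from labeled examples.
X = [[500, 14], [12_000, 15], [2_000, 3], [300, 2], [20_000, 23], [800, 10]]
y = [0, 1, 1, 0, 1, 0]  # labels a human (or a rule!) produced earlier

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(rule_based_flag(15_000, 13))    # True, and we can state exactly why
print(model.predict([[15_000, 13]]))  # [1], but the "why" is a learned split
```

The overlap the list mentions shows up even in this tiny sketch: the labels the model learns from were themselves produced by some rule or human judgment.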

Let's not forget rule-based systems

For those who think rule-based systems and their equivalents are too primitive to be considered AI, I'll leave some food for thought: even with today's advancements in AI and ML, rules provide effective constraints to guide and optimize ML tasks. Another way of looking at it is that correlation does not imply causation: while ML can figure out relationships, it cannot explain them, nor can it be trusted completely to imply a direct dependence of cause and effect. Traditional rule-based systems, probabilistic reasoning and other techniques offer that with confidence. NLP, for example, has a lot of implicit rules in its models: Part of Speech (PoS) tagging, Hidden Markov Models (HMMs), the use of dictionaries, pre-tagged data sets and more are nothing but rules.
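
As a small illustration of rules hiding inside an NLP pipeline, here is a toy part-of-speech tagger sketch, assuming nothing beyond the standard library. The dictionary and suffix heuristics below are invented for this example and are not taken from any real tagger:

```python
# Toy sketch of rules embedded in NLP: a dictionary (a pure rule) constrains
# what the "statistical" component is allowed to guess.
CLOSED_CLASS = {"the": "DET", "a": "DET", "of": "ADP", "and": "CONJ"}

def statistical_guess(word: str) -> str:
    # Stand-in for an HMM or learned tagger; note that these suffix
    # heuristics are themselves just rules in disguise.
    if word.endswith("ing") or word.endswith("ed"):
        return "VERB"
    if word.endswith("ly"):
        return "ADV"
    return "NOUN"

def tag(sentence: str) -> list[tuple[str, str]]:
    tagged = []
    for word in sentence.lower().split():
        # Rule first: closed-class words are tagged deterministically,
        # overriding whatever the statistical component would say.
        tagged.append((word, CLOSED_CLASS.get(word) or statistical_guess(word)))
    return tagged

print(tag("the dog barked loudly"))
# [('the', 'DET'), ('dog', 'NOUN'), ('barked', 'VERB'), ('loudly', 'ADV')]
```

Real taggers are of course statistical, but the dictionaries, tag sets and pre-tagged corpora they depend on are exactly this kind of human-authored rule.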

All capabilities are essential and must co-exist.

So-called stages are merely capabilities that will co-exist and augment one another to solve specific tasks.

However, solving tasks does not imply machines have a broader context to:

  1. Understand purpose: why the task needs to be solved.
  2. Perform broader tasks comparable to humans: cross-disciplinary ability to mix and match various capabilities along with domain knowledge. You cannot do this by simply aggregating the existing AI capabilities and knowledge out there.
  3. Find what the user actually wants.
  4. Take responsibility.

Norfolk Terriers are not gonna be that upset if you call them Norwich Terriers. … Whereas classifying a human as a gorilla, as Google found out, is really expensive

No matter how capable today's systems get, they do not provide an intuitive evolutionary path to the above. Even AGI, at best, can address #2 and #3. AGI may evolve to address #1 and #4, but that is so far out that I would neither speculate about it nor give it its own stage or category.

AGI is not an evolution of ANI

Another myth is that today's deep learning (ANI) will evolve into AGI.

Please refer to the Practical AI podcast or its transcript, “Achieving provably beneficial, human-compatible AI”. There is a comment by Stuart Russell:

First of all, I should point out that I don’t think deep learning evolves into AGI.

The following is another quote from the same podcast:

… Artificial General Intelligence is not going to be reached by just having bigger deep learning networks and more data. AGI and human intelligence require fundamental capabilities that are just not present in deep learning technology as we currently understand it. So deep learning systems don’t know anything, they can’t reason, and they can’t accumulate knowledge, they can’t apply what they learned in one context to solve problems in another context etc. And these are just elementary things that humans do all the time. …

The point being that we are far from this type of threat. Despite the progress made, AI by itself does not pose any credible threat (without the interference of humans who intend to make it one).

Assume we cross AGI; what next?

The blog so far explains why AGI is not as close at hand as the hype makes it out to be. But

let's cut the crap and get down to what people are really talking about:
…“Human survival”.

If history has taught us anything, it is that humans have inflicted the most damage on humans.

As of writing this blog there are two wars in progress, one in Ukraine and one in Gaza, both with the possibility of sparking another world war, and certainly regional wars.

There is a narrative that there is no example of a superior species being benevolent to a lesser one, and that superintelligence is hence a threat to humanity.

Even if this is true, let's not forget the irrational hatred that humans hold for one another, based on ideologies rooted more in speculation about communities than in facts. Whatever purpose AI may arrive at to destroy humanity, it would do so without any intent to make us suffer in the process. Humans are a threat to their own survival in a realistic sense; AI is a threat only in the worst case. Which is the bigger practical threat? Maybe superintelligence will even provide more avenues for the survival of mankind, from itself or from cosmic disasters like comets.

Overall, I feel humans have less to fear from AI than from themselves. Fears and speculation about AI overtaking humanity are credible, but they pose no threat comparable to what humans are capable of.

We are more likely to meet our end by destroying ourselves, or through a cataclysmic event, before AGI or its extension, superintelligence, destroys us. In fact, AI may be our savior and the only thing that keeps humans from destroying humans.

