How does AI evolve?

(Note: We may earn commissions from products/services you click on. This is at no extra cost to you.)


Smart machines that perform tasks once requiring human intelligence are now commonplace, and creative minds keep producing impressive systems that work like humans: self-driving cars, conversational bots, robo-advisors, and smart assistants like Alexa and Siri. While this is admirable, can these systems evolve to meet the ever-changing needs of the world?

Click this affiliate link to order AI and Machine Learning for Coders: A Programmer's Guide to Artificial Intelligence.

You can become a data scientist – click this affiliate link to register for the Data Science Certification Course using R.

Click this affiliate link now to register for Python Certification Training and boost your career to the next level.

Become a Java expert – click this affiliate link to register for Comprehensive Java Course Certification Training.


How AI is Evolving

There are big brains behind Artificial Intelligence (AI) – researchers work tirelessly to bring responsive, efficient software to market, and a highly functional AI algorithm takes time to develop. Take neural networks, the machine learning technique used for translating languages and driving cars: they loosely mimic the brain's structure. The system learns by altering the strength of the connections between its artificial neurons, and it contains smaller neuron subcircuits that carry out specific functions, such as identifying road signs. Wiring these structures together takes time.
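
To make "altering the strength of connections" concrete, here is a minimal, hypothetical sketch – a single-neuron perceptron, not any particular production system. The neuron nudges the weight of each input connection whenever it misclassifies a labelled example, and after a few passes it has learned the logical AND function.

```python
# Toy example: one artificial neuron "learns" by adjusting the
# strength (weight) of each input connection after every mistake.

def train_neuron(samples, lr=0.1, epochs=20):
    """samples: list of ((x1, x2), label) pairs with label 0 or 1."""
    w = [0.0, 0.0]  # connection strengths
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - pred  # -1, 0, or +1
            # Strengthen or weaken each connection in proportion to
            # its input and the direction of the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Learn the logical AND function from four labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
```

Real networks do the same thing at vastly larger scale: millions of connections, each adjusted a little at a time as examples stream past.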

AI is evolving – researchers have developed software that borrows from Darwinian evolution to build programs that improve over time without human intervention. The software uses a loose approximation of evolution to discover algorithms: it randomly combines mathematical operations to create a population of around 100 candidate algorithms, then tests them on simple tasks such as image recognition – for instance, correctly labelling a picture as a truck or a cat.


The system generates thousands of candidates and relies on computing power to weed them out – in the process it can stumble upon classic machine learning techniques on its own. In each cycle, the program compares every algorithm's performance against hand-crafted algorithms. The top performers are isolated, and copies of them are "mutated": parts of their code are randomly edited, replaced, or deleted to create slight variations of the best algorithms.

Click this affiliate link to register for Machine Learning Using Python Training.

The "mutants" are then added to the population, the older programs are culled, and the process repeats. The system churns through thousands of algorithms until it finds a good solution. To speed up the search, it occasionally exchanges algorithms between separate populations; this prevents evolutionary dead ends and automatically weeds out duplicate algorithms.
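
The loop described above can be sketched in a few lines of Python. This is a hypothetical toy, not the researchers' actual system: each "candidate algorithm" is just a short list of arithmetic steps, a tiny curve-fitting task stands in for image recognition, and a single population is used rather than several exchanging members.

```python
import random

# Each candidate algorithm is a list of (operation, constant) steps
# applied to a number; fitness is how well it approximates a target
# function on a few sample inputs.

OPS = {
    "add": lambda x, c: x + c,
    "mul": lambda x, c: x * c,
    "sub": lambda x, c: x - c,
}

def random_candidate(length=3):
    return [(random.choice(list(OPS)), random.uniform(-2, 2))
            for _ in range(length)]

def run(candidate, x):
    for op, c in candidate:
        x = OPS[op](x, c)
    return x

def fitness(candidate, target, xs):
    # Negative squared error, so larger is better.
    return -sum((run(candidate, x) - target(x)) ** 2 for x in xs)

def mutate(candidate):
    # Randomly edit, replace, or delete one step of the candidate.
    child = list(candidate)
    i = random.randrange(len(child))
    r = random.random()
    if r < 0.4:                               # edit: tweak a constant
        op, c = child[i]
        child[i] = (op, c + random.gauss(0, 0.3))
    elif r < 0.8 or len(child) == 1:          # replace the whole step
        child[i] = (random.choice(list(OPS)), random.uniform(-2, 2))
    else:                                     # delete the step
        del child[i]
    return child

def evolve(target, xs, pop_size=100, generations=100, seed=0):
    random.seed(seed)
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda c: fitness(c, target, xs), reverse=True)
        survivors = population[: pop_size // 5]   # keep the top performers
        # Cull the rest and refill the population with mutated copies.
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=lambda c: fitness(c, target, xs))

best = evolve(lambda x: 3 * x + 1, xs=[-1, 0, 1, 2])
```

Even this toy version shows the key dynamic: selection plus random mutation steadily improves the best candidate without any human editing the code.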

What Is Artificial Intelligence?

Artificial intelligence uses computers and machines to imitate the decision-making and problem-solving capabilities of the human mind. There are many definitions of artificial intelligence:

It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not confine itself to biologically observable methods – John McCarthy.

Typically, artificial intelligence simulates human intelligence processes using computers and other machines, which can then perform jobs without the help of humans.

AI applications are wide-ranging – for instance, they are massively used in natural language processing, expert systems, speech recognition, and machine vision.


Types of Artificial Intelligence

Artificial intelligence falls into two broad categories:

  • Weak AI/Narrow AI/Artificial Narrow Intelligence (ANI)

This type of AI is designed for specific, often commonplace tasks. You'll find it in applications like Apple's Siri, IBM Watson, Amazon's Alexa, and autonomous vehicles.

  • Strong AI

It features two parts: Artificial General Intelligence (AGI), also called general AI, and Artificial Super Intelligence (ASI).

  • AGI is a theoretical type of AI in which machines have intelligence similar to humans – typically with self-aware consciousness, able to solve problems, learn, and plan for the future.
  • ASI/superintelligence surpasses the ability of the human brain. It is largely theoretical – there are no practical examples yet, though researchers are working toward it; the best examples come from science fiction, e.g., superhuman machines.

The commonly recognized types of AI in use or under discussion today are:

Reactive Machines – they don't store memories; they perceive the world and react to it, e.g., IBM's Deep Blue.

Limited Memory – they retain data, but cannot add it to a library of experiences.

Theory of Mind – designed to model human mental states; such systems have not yet been built.

Self-Awareness – so far found only in science fiction.


What Is Evolution of Artificial Intelligence?

Artificial intelligence evolution is the gradual change of AI programs from simple designs to robust ones that can perform functions independently. That is to say, these programs think better over time and become able to make more complex decisions.

Previously, AI was just imaginative writing from famous fiction writers. Today, it is a reality in many sectors – healthcare, communication, prediction, and more. All of these technologies rely on machine-learning algorithms, which enable them to react and respond in real time.

In 1950, Alan Turing envisioned the idea that machines can think, and he created the Turing test that is still in use today. According to Turing, humans use available information and reason to make decisions and solve problems – and so can machines. The biggest limitation was the computers themselves: they could not store commands, i.e., they could not remember. Computing was also costly.

So computers had to improve. Between 1957 and 1974, computing became more available and AI developed: computers were faster and more accessible, and machine learning algorithms improved. Early work from researchers such as Newell and Simon, whose General Problem Solver showed promise, advanced the objective of automated problem-solving.


The funding boost and expanded algorithmic toolkit reignited interest in AI. John Hopfield and David Rumelhart popularized deep learning techniques that allowed computers to learn from experience, and Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert.

Landmark AI goals were achieved during the 1990s and 2000s. For instance, in 1997 IBM's Deep Blue, a chess-playing computer program, defeated reigning world chess champion and grandmaster Garry Kasparov. In the same year, Dragon Systems' speech recognition software was implemented on Windows. Since then, multiple advancements have been achieved in the AI field.


How Did AI Originate?

The term "artificial intelligence" was formally coined in 1956 at Dartmouth College, New Hampshire. However, the idea dates back to antiquity, when ancient philosophers mulled over the notion of artificial beings.

In ancient Greece, there were myths about robots, and Chinese and Egyptian engineers built automatons. However, modern interest in AI grew out of classical philosophers, mathematicians, and logicians studying the mechanical manipulation of symbols, which eventually led to digital computers.

There was plenty of activity throughout the 20th century, but the 1950s saw particularly significant advances as computer scientists published research-based findings. In 1950, Claude Shannon published "Programming a Computer for Playing Chess," the first publication to discuss a chess-playing computer program.

In the same year, Alan Turing's "Computing Machinery and Intelligence" advanced the idea that machines can think. Other significant events in the 1950s were the creation of a checkers-playing computer program (1952) and the coining of the terms "artificial intelligence" (1956) and "machine learning" (1959).

The 1960s saw the creation of Unimate, an assembly robot built to work for General Motors. The following years saw a heuristic problem-solving program (SAINT), an interactive computer program (ELIZA), a general-purpose mobile robot, and an anthropomorphic robot.

There were major developments between the 1950s and the early 2000s, with robots of varying capabilities created along the way. Growth was most rapid during the 1980s, across the globe. From 2000 to the present day, the AI field has continued to see massive developments. For instance, in 2010, Microsoft launched Kinect for Xbox 360, a gaming device that tracked human body movement using a 3D camera and infrared detection.

In 2011, Apple unveiled Siri, a virtual assistant for its iOS operating system. It uses a natural-language user interface to infer, observe, answer, and recommend things to its human user.

2016 saw the creation of Sophia, a humanoid robot. Other developments include Facebook's dialogue agents and Bixby, Samsung's virtual assistant.


Is AI the Next Step in Evolution?

Companies are increasingly using artificial intelligence to streamline their work. Automated, intelligent machine systems can do much of what humans can, and they offer a higher degree of efficiency without needing breaks. This is what makes AI look like the future.

Human progress is driven by the development of technology and tools that augment and enhance natural human capabilities. Many experts are delving deeper into the AI field, and they believe that artificial intelligence will better the lives of many people. However, there are concerns too, as it might affect what it means to be human.

AI need not replace or compete with humans – rather, it challenges human cognition, and humans use and integrate AI into their own awareness. In that sense, evolution is shifting from the biological to the technological. It is humans who create intelligent systems, and in the process they themselves become more capable. The future therefore needs both AI and humanity. The challenge is that every human must keep up with the technology: you will feel out of place operating in a super-intelligent world if you are not super intelligent yourself.

For businesses and industries, AI supports and augments human cognition. Smart systems save time, lives, and money, and they make it possible for humans to enjoy a tailor-made future. Technology makes life more comfortable and easier to manage; each innovation injects a unique dimension into human life, and every technological revolution is the next iteration in scaling human output.

Every part of life – politics, economics, culture, history, and tradition – is interwoven with technology, which makes technology integral to the functioning of society. But human cognition is one area technology has not yet reached, and that is why many people envision conflict and feel fear.

While AI is a revolutionary technology that will change how many things work, it will not replace humans. Instead, it complements and augments human capabilities.


Evolution of AI Timeline

Artificial intelligence has come a long way, and its journey has included periods of boom and drought. Between 1974 and 1980, and again between 1987 and 1993, funding and interest in AI dropped; these periods are broadly termed "AI winters." After 1993, interest in AI began to gain momentum again, thanks in part to the great victory of IBM's chess-playing computer Deep Blue, which defeated grandmaster Garry Kasparov in 1997. Today, AI is in a major boom.

So, what are the major milestones in AI development? Here is the timeline. 

1642 – Blaise Pascal invents a digital calculating machine, the Pascaline.

1854 – George Boole invents Boolean algebra.

1913 – Whitehead and Russell revolutionize formal logic in Principia Mathematica.

1950 – Alan Turing asserts that machines can exhibit intelligent behavior and develops the Turing test.

1956 – The phrase "artificial intelligence" is first coined at a Dartmouth College conference.

1959 – Marvin Minsky and John McCarthy establish the MIT AI Lab. In the same year, the term "machine learning" is coined.

1997 – IBM's Deep Blue defeats world chess champion Garry Kasparov.

2002 – iRobot creates Roomba, the first robot for the home: an autonomous vacuum cleaner.

2009 – Google starts its self-driving car project, later building its first autonomous car.

2010 – Microsoft launches Kinect for the Xbox.

2011 – Apple releases the iPhone 4S with Siri, a natural-language virtual assistant.

2014 – Tesla introduces Autopilot, software later upgraded toward fully autonomous driving.

2014 – Amazon launches Echo, an intelligent voice-activated speaker.

2016 – Google DeepMind's AlphaGo algorithm defeats world Go champion Lee Sedol 4–1.

2020 – Covid-19 prompts more investment in AI, leading to a boom in machine learning APIs for many apps.

2026 – AI is predicted to write high-quality, plagiarism-free high-school material.

2027 – AI is predicted to drive trucks better than humans.

2030 – China plans to be the world's primary AI innovation centre.

2049 – AI is predicted to write a short story or novel that makes the New York Times best-seller list.


How Will Human Beings Ensure That They Stay Ahead of Artificial Intelligence?

Artificial intelligence technology is almost everywhere. Developers and researchers are creating these technologies and programs to streamline processes and save money. Some even predict that, before long, the world will be full of artificial superintelligence.

This continuous technological growth is attractive because it reduces the workload on humans. For instance, a home robot could do all the chores for you; if you want a glass of water or a piece of fruit, you simply instruct the machine to bring it. Many people would appreciate such a lifestyle, which is why there is so much support for the technology.

But an increasing number of people are worried about AI too. If systems begin working fully independently, the results could be harmful – negative effects range from health issues to job losses. Humans must stay vigilant to prevent such scenarios. Here are some tips:

  • Humans should get out of their comfort zone. We seek too much comfort and build machines for needs that aren't really there, e.g., home robots. Robots can be essential in some industrial processes, but at home you can do many chores yourself without getting overly tired. Do some tasks on your own rather than automating everything.
  • Companies creating AI technologies should be regulated. Government or industry regulation can help limit the powers given to the robots being created; too much power might spell doom for humans.
  • Models should not be trained to fully imitate humans – don't give a machine complete power over you. Grant it partial autonomy rather than total control, so that a machine that gets out of control cannot overpower humans.

Artificial intelligence is a tool shaped by training. Train it badly, and it will develop behaviors that do not generalize. So it is vital to ensure that the training data is balanced, to avoid the machine drawing skewed conclusions.


Artificial Intelligence and the Future of Humans

It is apparent that AI has revolutionized many things, and experts predict there is more to come. However, some people have serious concerns about AI advancements – how will they affect humanity? 

Undoubtedly, digital life is disrupting age-old human ways of doing things and augmenting human capacities. Over half the world now relies on one or more code-driven systems, which offer previously unimagined opportunities and unprecedented threats.

Presently, we can see traces of artificial intelligence in almost every sector of our lives, and this incredible technology has brought both positive and negative effects. Some AI experts claim that by 2029 computers will reach the same level of intelligence as humans – that is, a computer will pass a valid Turing test and thus achieve human-level intelligence.

Earlier predictions indicated that AI will amplify human effectiveness while also threatening human autonomy and capabilities. Threats will arise in areas where AI systems understand and perform better than humans: pattern recognition, reasoning and learning, decision-making, language translation, speech recognition, and visual acuity. Intelligent systems will be embedded in vehicles, communities, utilities, buildings, business processes, and farms. These smart systems aim to save money, time, and lives while offering individuals a more customized future.

The brighter side of AI:

  • Precision medicine – combining environment, genetics, and lifestyle data to prevent and treat disease; patients get custom-designed drugs, better diagnoses, and digital therapeutics.
  • Driverless cars – use machine learning for perception, prediction, and planning.
  • Virtual assistants – e.g., Siri and Alexa, which use natural language processing to accomplish tasks.
  • Implantables – brain-machine interfaces to address problems including paralysis, anxiety, and addiction.

The darker side:

  • Mass surveillance
  • Modern warfare
  • Massive job losses
  • Socioeconomic inequality


How Artificial Intelligence Will Change the Future

In the next decade, AI will be big business. Already, many companies use one or more forms of artificial intelligence, and its proponents are building systems that streamline operations and customize life for those who use them.

Artificial intelligence applications run in homes, offices, vehicles, farms, and on home computers. Many people use these systems without even knowing it – for instance, iPhone users have access to Siri, an AI system that lets them interact with their phones easily and quickly. More and more digital devices incorporate AI, and the momentum behind these systems grows each day.

With the availability of massive data that computers can quickly gather, alongside funding and AI specialists, it has become much easier to develop, test, and deploy smart systems.

AI will change the future in the following ways:

  • Better access to entertainment. Netflix's AI-driven recommendations already dominate, and soon film studios may use predictive programs to analyze scripts' storylines and forecast their box-office potential. Movie lovers may even be able to alter films with custom virtual actors.
  • Better diagnosis, treatment, and prevention of disease. After thorough data analysis, systems will make it possible to tailor treatments and preventive measures to your genome.
  • Better cybersecurity. Self-learning and automation capabilities can protect data, keeping people safe from both large-scale and small-scale threats.
  • Better transportation. The most significant development is driverless cars – Google already has a self-driving car, and a host of driverless trains run in Europe.
  • Reduced digital privacy, as big AI companies can easily access people's data.
  • Massive job losses, as AI systems will perform many human tasks faster and more cheaply.


It's true: AI is a disruptive technology, and everyone needs to understand how it works. From chatbots to smartphones, AI is commonplace in human life, yet some people don't even know they are using it. The good news is that momentum behind artificial intelligence is building fast, thanks to massive interest in creating the structures that support it.

With such a pace of change, users of digital devices should watch for new developments. Following these changes and this evolution closely can help you benefit more from the systems and avoid the harm they might bring. Remember, computer superintelligence could threaten human existence; even if the effect is not felt today, some of the issues surrounding it still need to be resolved.

It's agreed that AI had a humble beginning. Different computer scientists held different concepts, but all were geared toward the idea that machines could reason. For many years, scientists and data analysts tried to develop systems that could prove that idea right, but with little success.

It was not until 1997, when Deep Blue defeated the then world chess champion, that broad interest in artificial intelligence was rekindled. Since then, smart systems have been created to aid in many processes. Today, many machines can work independently, but their intelligence is still lower than that of humans.

Computer scientists believe they can create super-intelligent computers, i.e., computers that can pass the Turing test. That would be the peak of artificial intelligence. However, while building these systems, scientists should not give them more power than humans hold. A super-intelligent computer would be able to reason logically and perhaps even have emotions, and providing such a machine with too much ability may spell trouble.


Luis Gillman

Hi, I am Luis Gillman CA (SA), ACMA.
I am a Chartered Accountant (SA) and CIMA (SA) and the author of Due Diligence: A Strategic and Financial Approach.

The book was published by LexisNexis in 2001, and in 2010 I wrote the second edition. Much of this website is derived from these two books.

In addition, I have published an article entitled The Link Between Due Diligence and Valuations.

Disclaimer: While every effort has been made to ensure that the information published on this website is accurate, the author and owners of this website take no responsibility for any loss or damage suffered as a result of reliance upon the information contained herein. Furthermore, the bulk of the information is derived from information gathered in 2018, and its use is therefore at your own risk. In addition, you should consult professional advice if required.