Are machine learning and AI a technology?

What is technology?

Technology has been part of human existence ever since we first started using stones as tools. Long before recorded history, humans were shaping primitive implements; around one million years ago, we began using fire; and about 75,000 years ago, the pace of technological advancement picked up dramatically.

Once we started farming our own food and building our own homes, we could spread into new areas of the globe.

 


The word technology first emerged in the English language in 1610, when it was defined as a “discourse on the subject of art.” The word originates from the Greek “tekhnologia,” translating to “systematic treatment of a technique.”

Click this affiliate link to register for AI and Deep Learning with TensorFlow Certification.

Over time, technology has revolutionized our world and daily lives. Technology has created unique systems and tools, putting vital information at our fingertips.

Advances in technology have led to multi-functional devices like smartphones and Internet of Things devices. Computer systems are faster, more efficient, and more powerful than ever before. With all of these revolutions, technology makes our lives easier, faster, and more fun.

Technology has also given us brand new computing systems in recent decades, like smartwatches, tablets, and voice assistant devices. With these devices, we can instantly transfer money and pay for everything from shoes and gadgets to food delivery. Technology has changed how we work, meet each other, and consume all types of media. It has made fun advancements, but it has also made essential advancements in safety, such as home security and medical devices.

The word technology has several meanings, depending on the context of its use. It refers to devices, systems, and methods, specifically those resulting from scientific knowledge used for practical purposes. 

What is AI? Is AI technology?

What is Artificial Intelligence?

Artificial Intelligence is a popular buzzword you’ve probably heard or read. Blog posts and articles about robots, technology, and the digital age may fill your head when you think about Artificial Intelligence. But what is Artificial Intelligence, and what is it used for?

Click this affiliate link to register for AI & Deep Learning with TensorFlow Certification

Artificial Intelligence is a technological advancement concerned with programming machines to solve problems. It is often discussed in conjunction with machine learning, deep learning, and big data.

The concept of AI has changed over time. At its core, there has always been the idea of building machines that can simulate human intelligence.

Human beings are uniquely capable of interpreting the world around us and using the information as a catalyst to bring about change. 

Artificial intelligence simulates aspects of human thought, particularly learning, using computers’ digital, binary logic.

Research and development work in AI falls into two branches. The first is “applied AI,” which uses these principles of simulating human thought to carry out specific tasks. The second is known as “generalized AI,” which seeks to develop machine intelligence that can turn its hand to any task, much like a human.

Research into applied, specialized AI is already providing breakthroughs in fields ranging from quantum physics, where it models and predicts the behavior of systems comprising billions of subatomic particles, to medicine, where it helps diagnose patients based on genomic data.

In the financial world, its uses range from fraud detection to improving customer service by predicting what services customers will need. In manufacturing, it manages workforces and production processes and flags faults before they occur, enabling predictive maintenance.

In the consumer world, more and more of the technology we adopt into our everyday lives is powered by AI, from smartphone assistants like Apple’s Siri and Samsung’s Bixby to self-driving cars and delivery bots, which many predict will become part of everyday shopping within our lifetimes.

Generalized AI is a bit further off. Simulating the human brain would require a more comprehensive understanding of the organ than we currently have, as well as more computing power. Computer technology is evolving fast, though: new generations of chips known as ‘neuromorphic’ processors are designed to run brain-simulation code more efficiently, and cloud systems such as Amazon Web Services serve as cognitive computing platforms. They use high-level simulations of human neurological processes to perform an ever-growing range of tasks without being specifically taught how to do them.

Click this affiliate link to register for AI and Deep Learning with TensorFlow Certification.

What is AI?

Artificial Intelligence is the study, theory, and development of computer programs that can solve problems usually requiring human intelligence. Visual pattern recognition, speech recognition, decision-making, and language translation are all tasks that would typically need human intelligence, yet computer programs can now solve them.

In the summer of 1956, a group of computer scientists and mathematicians met at Dartmouth to discuss a computer that could think. They didn’t know what to call it, but their conversations there created the spark that ignited artificial intelligence. Since the “Dartmouth workshop,” there have been highs and lows in the development of the field. Much of the work over the past few years has focused on developing systems that exhibit human-like intelligence and integrating them into our everyday lives.

How is Artificial Intelligence different from human intelligence? While a computer can learn and adapt to its surroundings, at the end of the day humans created it. Human intelligence has a great capacity for multitasking, social interaction, and self-awareness, and that makes it very different from machine intelligence.

Click this affiliate link to register for Machine Learning Using Python Certification.

There are several kinds of thought and decision-making that artificial intelligence can’t master. Feeling emotions isn’t something we can train a machine to do, no matter how smart it is. Cognitive learning and machine learning remain distinct and separate from each other. While Artificial Intelligence applications can run fast and be more objective and accurate, their capability stops short of replicating human intelligence. Human thought encompasses far more than a machine can learn, no matter how intelligent it is.

Click this affiliate link to order AI and Machine Learning for Coders: A Programmer’s Guide to Artificial Intelligence.

How does Artificial Intelligence work?

It’s one thing to know what Artificial Intelligence is; it’s another to understand how it works. Artificial Intelligence operates by processing massive data sets through highly efficient algorithms, learning from the patterns or features in that data. There are many techniques and subfields in Artificial Intelligence, including:

Machine learning: Machine learning finds hidden insights in data without being explicitly programmed for what to look for or how to generalize. It is a common way for computer systems to find patterns and improve over time.

Deep learning: Deep learning utilizes large neural networks with many layers, taking advantage of their size to process vast amounts of data with intricate patterns. Deep learning is an element of machine learning, just with more massive data sets and more computing power (see the sketch after this list).

Computer vision: In Artificial Intelligence, computer vision utilizes pattern recognition and deep learning to understand a picture or video. Machines can take photos or video in real time and interpret their surroundings.
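
To make the idea of a layered neural network concrete, here is a minimal, hypothetical sketch in Python using TensorFlow’s Keras API. It assumes TensorFlow is installed, and the data set is synthetic, invented purely for illustration rather than drawn from any real application.

```python
# A minimal sketch of a deep learning model using TensorFlow's Keras API.
# The data here is synthetic; a real application would use images, text, etc.
import numpy as np
import tensorflow as tf

# Fake data set: 1000 examples with 20 numeric features, two classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("int32")   # a simple hidden pattern

# A small "deep" network: several stacked layers of neurons.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# The network learns the pattern from examples rather than hard-coded rules.
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```

The point of the sketch is the shape of the workflow: data goes in, stacked layers extract patterns, and the model improves as it trains.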

The overall goal of AI is to make software that can learn from input and explain its output. Artificial Intelligence provides human-like interactions, but it won’t be replacing humans anytime soon.

Recent developments in Artificial Intelligence

These recent advances have come about through the imitation of human thought processes. The field of research that has been most fruitful in recent years is known as “machine learning.” It has become so crucial to mainstream Artificial Intelligence that the terms “artificial intelligence” and “machine learning” are often used interchangeably. Machine learning represents the current state of the art in the broader field of AI. Its foundation is that machines can be programmed to think and learn like us rather than being told how to do everything step by step. Machines can learn to work by observing, classifying, and learning from their mistakes, just as humans learn from theirs.

The application of neuroscience to Artificial Intelligence system architecture has led to the development of artificial neural networks. Advancements in this field have evolved over the last half-century. Computer systems with high computing and processing power make daily human tasks easier.

Perhaps the single most significant enabling factor has been the explosion of data unleashed since mainstream society merged with the digital world. Data available from things we share on social media to machine data generated by connected industrial machinery means computers now have a universe of information. This data explosion helps computers learn more efficiently and make better decisions.

Click this affiliate link to order AI and Machine Learning for Coders: A Programmer’s Guide to Artificial Intelligence.

Machine learning definition

Machine Learning

At a very high level, machine learning is simply the study of teaching a computer program or algorithm to progressively improve at a given task. On the research side, machine learning concerns the theoretical and mathematical modeling of how such learning works; on the practical side, it is the study of building computer systems that actually show this improvement. There are several ways to frame the idea, but there are three major recognized categories: supervised learning, unsupervised learning, and reinforcement learning.

Machine learning techniques

Supervised Learning: Supervised learning is a machine learning technique that is easy to understand and simple to implement. It is very similar to teaching a young child with flashcards.

Given data in the form of examples with labels, we can feed these example-label pairs to a learning algorithm one by one, allow the algorithm to predict the label for each example, and give it feedback as to whether it predicted the right answer. Over time, the algorithm learns to approximate the exact nature of the relationship between examples and their labels. When fully trained, the supervised learning algorithm can observe a new, never-before-seen example and predict a good label for it.
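
To make the example-label idea concrete, here is a minimal sketch in Python using scikit-learn. The cat/dog measurements are invented toy numbers, and any classifier with fit/predict methods would illustrate the same workflow.

```python
# A minimal supervised learning sketch with scikit-learn.
from sklearn.linear_model import LogisticRegression

# Example/label pairs -- the "flashcards": each example is [height_cm, weight_kg]
# and the label says whether it describes a cat (0) or a dog (1). Toy numbers only.
examples = [[25, 4], [23, 5], [30, 6], [60, 25], [55, 20], [70, 30]]
labels   = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(examples, labels)            # learn the example -> label relationship

# A never-before-seen example: the trained model predicts a label for it.
print(model.predict([[28, 5]]))        # expected: [0]  (cat-like measurements)
```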

Supervised learning is a task-oriented process. It is highly focused on a particular task, feeding more and more examples to the algorithm until it can perform that task well. You will most likely encounter this type of learning in the following standard applications:

• Ad Popularity: Deciding which advertisement will perform best is a supervised learning task. Many of the ads you see on your phone or computer as you browse the internet are there because a learning algorithm judged them to be reasonably popular (and clickable) and matched them with a particular website or a specific search query. If you find yourself using a search engine, the pairing of ad and placement is largely driven by a trained machine learning algorithm predicting that the match will be effective.

• Email Spam Classification: If you use a modern email system such as Gmail, chances are you’ve used an email spam filter. An email spam filter is a supervised learning system. Fed email examples and labels (spam/not spam), these systems learn how to preemptively filter out malicious emails so they don’t harass their users. Many of them also let users provide new labels so the system can learn their preferences (a toy sketch follows this list).

• Face Recognition: Do you use Instagram? Your picture has most likely been run through a supervised learning algorithm trained to recognize the patterns of your face. A system that takes a picture, finds faces, and guesses who is in the image (suggesting a tag) is a supervised process with several layers: first finding the faces, then identifying them.
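
As a rough illustration of the spam filter mentioned above, here is a toy sketch in Python using scikit-learn. The handful of messages and labels are made up, and a real filter would be trained on far more data.

```python
# A toy sketch of a spam filter as a supervised learning problem.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click here",      # spam
    "Limited offer, claim your reward",      # spam
    "Meeting moved to 3pm tomorrow",         # not spam
    "Can you review the attached report?",   # not spam
]
labels = ["spam", "spam", "not spam", "not spam"]

# Turn raw text into word counts, then fit a Naive Bayes classifier on them.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["Claim your free reward now"]))   # likely ['spam']
print(spam_filter.predict(["See you at the meeting"]))       # likely ['not spam']
```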

Unsupervised Learning: Unsupervised learning is a machine learning technique in which the data set has no labels. Instead, an unsupervised learning algorithm is fed a lot of data and given the tools to find patterns in its properties. It learns to group, cluster, and organize the data so that a human (or another intelligent algorithm) can come in and make sense of the newly organized data.
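
Here is a minimal sketch of clustering unlabeled data in Python with scikit-learn’s KMeans. The two “blobs” of points are synthetic, standing in for whatever unlabeled data you actually have.

```python
# A minimal unsupervised learning sketch: no labels, just raw data points.
# KMeans groups them into clusters on its own.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two unlabeled "blobs" of points around (0, 0) and (5, 5).
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)   # roughly [0, 0] and [5, 5]
print(kmeans.labels_[:5])        # cluster assignment for the first few points
```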

What makes unsupervised learning such an attractive area? The overwhelming majority of data in this world is unlabeled. Intelligent algorithms that can take large amounts of unlabeled data and make sense of it are hugely valuable for many industries; that alone could boost productivity in several fields.

For example, imagine we had an extensive database of every research paper ever published, and an unsupervised learning algorithm that groups these papers so that you are always aware of the current progress within a particular field of research. As you write up your own work and take notes, the algorithm suggests related pieces, works you may wish to cite, and works that may help you push that research domain forward. With such a tool, your productivity would increase significantly.
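
Here is a hedged sketch of that research-paper idea in Python, using TF-IDF vectors and KMeans from scikit-learn. The four paper titles are placeholders invented for illustration.

```python
# Cluster documents by their text so related work ends up grouped together.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

papers = [
    "Deep neural networks for image classification",
    "Convolutional networks and computer vision benchmarks",
    "Quantum error correction with superconducting qubits",
    "Entanglement in many-body quantum systems",
]

# Represent each paper as a TF-IDF vector, then cluster the vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(papers)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for title, cluster in zip(papers, clusters):
    print(cluster, title)   # papers on similar topics share a cluster id
```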

In unsupervised learning tasks, the data itself shapes the outcomes and their applications. Some areas where unsupervised learning is used include:

• Recommendation Systems: If you’ve ever used YouTube or Netflix, you’ve most likely encountered a video recommendation system, and these systems often sit in unsupervised territory. The system knows the watch history of many users; it considers users who have watched videos similar to yours and then enjoyed other videos you have yet to see. The recommendation system spots this relationship in the data and prompts you with a suggestion (see the sketch after this list).

• Buying Habits: Your buying habits are stored in a database somewhere, and that data is being used actively right now. Customers’ buying habits can be fed to unsupervised learning algorithms to group customers into similar purchasing clusters. These groupings help companies market to customer segments and can even feed recommender systems.

• Grouping User Logs: Less user-facing, but still very relevant: we can use unsupervised learning to group user logs and issues. Grouping helps companies identify the central themes in the problems their customers face and rectify them by improving a product feature or designing an FAQ for common issues. If you’ve ever submitted a problem report or a bug report for a product, it was likely fed to an unsupervised learning algorithm to cluster it with other similar issues.
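
Below is a minimal sketch of the “users with similar watch histories” idea in Python, using cosine similarity from scikit-learn. The tiny watch matrix is invented, and production recommenders are far more sophisticated.

```python
# Compare users by the videos they have watched and recommend what similar
# users liked. The tiny watch matrix below is invented for illustration.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rows = users, columns = videos; 1 means "watched".
watch_matrix = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 1, 0, 0],   # user 1 (similar tastes to user 0)
    [0, 0, 0, 1, 1],   # user 2
])

similarity = cosine_similarity(watch_matrix)
most_similar_to_user0 = similarity[0].argsort()[::-1][1]   # skip user 0 itself
print("Most similar user:", most_similar_to_user0)

# Recommend videos that the similar user watched but user 0 has not.
recommend = np.where((watch_matrix[most_similar_to_user0] == 1) &
                     (watch_matrix[0] == 0))[0]
print("Recommend video ids:", recommend)
```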

Reinforcement Learning: Reinforcement learning is quite different from supervised and unsupervised machine learning techniques. One key difference between supervised and unsupervised techniques is the presence or absence of labels on the data; reinforcement learning’s relationship to labels is more complicated. Some people link reinforcement learning to the other two by describing it as a technique that relies on a time-dependent sequence of labels.

Reinforcement learning is when a computer system learns from its previous mistakes and improves over time. Put a reinforcement learning algorithm into an environment and it will make a lot of errors in the beginning. As long as we provide a signal that associates good behaviors with a positive reward and bad actions with a negative one, we can reinforce the algorithm to prefer good moves over bad ones. Over time, the reinforcement learning algorithm makes fewer mistakes and improves.

Reinforcement learning is very behavior-driven, influenced by the fields of neuroscience and psychology. If you’ve heard of Pavlov’s dog, you’re already familiar with reinforcing an agent in an environment, albeit a biological one.

To understand reinforcement learning, let’s break down an example: teaching an agent to play the game Mario.

For any reinforcement learning problem, we need an agent, an environment, and a way to connect the two through a feedback loop. To connect the agent to the environment, we give it a set of actions it can take that affect the environment. To connect the environment back to the agent, we have the environment continually send two signals: an updated state and a reward (our reinforcement signal for behavior).
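
Here is a hedged sketch of that feedback loop in Python, using tabular Q-learning on a tiny one-dimensional world rather than an actual Mario game. The states, actions, and rewards are all invented for illustration.

```python
# Tabular Q-learning on a tiny 1-D world: the agent starts at position 0 and
# the goal is position 5. Each step the environment returns a new state and a
# reward; the Q-table update reinforces good moves over bad ones.
import random

N_STATES = 6            # positions 0..5; reaching position 5 is the goal
ACTIONS = [-1, +1]      # step left or right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Choose an action: mostly the best known one, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])

        # Environment responds with a new state and a reward signal.
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.1

        # Reinforce: good moves raise the Q-value, bad ones lower it.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next
                                             - q_table[(state, action)])
        state = next_state

# After training, the learned policy is to move right at every position.
print([max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)])
```

The structure (state, action, reward, update) stays the same whether the environment is this toy world or a full video game; only the environment and the learning algorithm get more sophisticated.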

• Video Games: One of the most common uses of reinforcement learning is teaching a computer system to play games. Look at DeepMind’s reinforcement learning applications AlphaGo and AlphaZero, which learned to play Go. Our Mario example is also typical. Currently, I don’t know of any production-grade game that ships with a reinforcement learning agent as its game AI, but I can imagine this will soon be an exciting option for game developers.

• Industrial Simulation: For robotic applications (think assembly lines), it is useful to have machines learn to complete their tasks without hardcoding their processes. This can be a cheaper and safer option that is less prone to failure. We can also build machines that use less electricity, saving costs.

• Resource Management: Reinforcement learning is useful for navigating complex environments where specific requirements must be balanced. Take Facebook’s data centers, for example: reinforcement learning is used to satisfy power requirements as efficiently as possible, cutting high costs. That means cheaper data storage for us and less impact on the environment we all share.

You might also be interested in Big Data and Hadoop Certification Training.  Click this affiliate link to find out more.

Conclusion

The application of machine learning to our society and industry leads to advancements across many human endeavors.

In medical science, machine learning is being applied to genomic data to help doctors understand and predict how cancers spread.

Data from outer space is collected here on Earth by giant radio telescopes, and machine learning helps us dig through that data to uncover the secrets of black holes.

Click this affiliate link to order AI and Machine Learning for Coders: A Programmer’s Guide to Artificial Intelligence.

In e-commerce, machine learning matches shoppers with products they want to buy online, and in the bricks-and-mortar world it helps store owners and assistants personalize service for their customers.

In the war against terrorism, machine learning predicts the behavioral patterns of those wanting to harm the innocent.

In our daily lives, machine learning now powers Google’s search and image algorithms, accurately matching us with the information we need.

Natural language processing (NLP), powered by machine learning, lets computers understand and communicate with us in human language. This has led to breakthroughs in translation technology and in the voice-controlled systems we increasingly use every day, such as Amazon’s Echo and Apple’s Siri.

Without a doubt, machine learning is a technology with far-reaching, innovative impact. The science-fiction dream of robots capable of working alongside us and augmenting our inventiveness and imagination with flawless logic and superhuman speed is no longer a dream. Machine learning is the key that has unlocked it, and its potential future applications are almost unlimited.

Luis Gillman

Hi, I am Luis Gillman CA (SA), ACMA.
I am a Chartered Accountant (SA) and CIMA (SA) and author of Due Diligence: A Strategic and Financial Approach.

The book was published by Lexis Nexis in 2001. In 2010, I wrote the second edition. Much of this website is derived from these two books.

In addition, I have published an article entitled The Link Between Due Diligence and Valuations.

Disclaimer: Whilst every effort has been made to ensure that the information published on this website is accurate, the author and owners of this website take no responsibility for any loss or damage suffered as a result of reliance upon the information contained therein. Furthermore, the bulk of the information is derived from information gathered in 2018, and its use is therefore at your own risk. You should seek professional advice if required.