
How companies can leverage AI to speed up delivery


There is considerable hype surrounding artificial intelligence (AI): virtually every discussion of technology, the move to digital, and initiatives to enhance services and quality is peppered with references to it.

There is also a degree of fear, driven by concerns about job losses as machines increasingly take over routine tasks.

I feel the best place to start with any discussion around AI is to unpack what it is, where it hails from and what it can do.

AI is the ability of a machine or computer to imitate the capabilities of the human mind. It taps into multiple technologies to equip machines to plan, act, comprehend, learn and sense with human-like intelligence.

AI systems may perceive environments, recognise objects, make decisions, solve problems, learn from experience and imitate examples. These abilities are combined to accomplish tasks that would otherwise require humans to perform them.

AI may have entered everyday conversation over the last decade or so, but it has been around much longer than that. It should be emphasised, however, that the relatively recent rise in its prominence is no accident.

AI technology, and especially machine learning, relies on the availability of vast volumes of information. The proliferation of the internet, the expansion of cloud computing, the rise of smartphones and the growth of the internet of things have created enormous quantities of data, which continue to increase exponentially every day.

This treasure trove of information, combined with huge gains in computing power, has made the rapid and accurate processing of enormous datasets possible. Today, AI is completing our chat conversations, suggesting e-mail responses, providing driving directions, recommending the next movie we should stream, vacuuming our floors and performing complex medical image analyses.

The origins of AI

The rise of electronic computing is what made AI a real possibility. Note that what is considered AI has changed as technology has evolved. For example, a few decades ago, machines that could perform optical character recognition (OCR) or simple arithmetic were categorised as AI. Today, OCR and basic calculations are not considered AI but rather an elementary function of a computer system.

In 1950, Alan Turing, the man famous for breaking the WWII Enigma code, published his paper Computing Machinery and Intelligence, in which he attempted to answer the question of whether machines can think.

He outlined the Turing Test, which determines whether a computer exhibits the same intelligence as a human. Under the test, an AI system should be able to hold a conversation with a human without the human realising they are speaking to a machine. In 1956, the first AI conference was held at Dartmouth College.

Enhancements ensued through the decades, and in the 2000s the internet revolution drove AI to unprecedented heights.

How does AI work?

AI is complex, but put simply, it can be described as reverse-engineering human capabilities and traits onto a machine. The system uses computational power to exceed what the average human is capable of, but it must first learn how to respond to certain actions.

It relies on historical data and algorithms to create a propensity model. Machines learn from experience, automatically picking up features and patterns in data, to perform cognitive tasks that are ordinarily the preserve of the human brain. A minimal sketch of this idea is shown below.
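
To make this concrete, here is a minimal sketch of a propensity model in Python. The features, figures and choice of logistic regression are invented purely for illustration; a real system would use whatever historical data and algorithm suit the problem.

```python
# A toy propensity model: learn from historical examples, then score a new case.
# Feature layout and numbers are hypothetical, chosen purely for illustration.
from sklearn.linear_model import LogisticRegression

# Historical data: [visits last month, emails opened] per customer,
# and whether each customer went on to make a purchase (1) or not (0).
X_history = [[1, 0], [2, 1], [8, 5], [10, 7], [3, 1], [9, 6]]
y_history = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_history, y_history)  # learn the pattern from past outcomes

# Propensity score for a new customer: estimated probability of a purchase.
new_customer = [[7, 4]]
print(model.predict_proba(new_customer)[0][1])
```

The model is never given an explicit rule such as "engaged customers buy"; it infers that pattern from the historical examples, which is the essence of machine learning.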

AI is founded on two pillars: engineering and cognitive science. Engineering involves building tools that exhibit human-comparable intelligence, combining large volumes of data with a series of instructions (algorithms) and rapid, iterative processing.

Cognitive science involves emulating how the human brain works, and it brings multiple fields to AI, including machine learning, deep learning, neural networks, cognitive computing, computer vision, natural language processing and knowledge reasoning. A sketch of the iterative processing at the heart of these techniques follows below.
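
As an illustration of what "rapid, iterative processing" means in the neural-network context, here is a single artificial neuron, the building block of neural networks, trained by repeated small corrections. All values are toy numbers chosen for this sketch.

```python
# A single artificial neuron nudged repeatedly towards a target output.
import math

def neuron(w, b, x):
    # Weighted input passed through a sigmoid activation.
    return 1 / (1 + math.exp(-(w * x + b)))

w, b = 0.0, 0.0       # start with no knowledge
x, target = 1.0, 1.0  # one training example: input -> desired output
lr = 0.5              # learning rate: size of each correction

for step in range(100):  # the rapid, iterative processing
    out = neuron(w, b, x)
    error = out - target
    grad = error * out * (1 - out)  # gradient of squared error wrt pre-activation
    w -= lr * grad * x
    b -= lr * grad

print(round(neuron(w, b, x), 3))  # now close to the target of 1.0
```

Real networks apply the same loop to millions of neurons and examples at once, which is why the availability of data and computing power mattered so much to AI's rise.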

AI systems are not monolithic: AI is not just one type of system but rather a diverse domain.

At one end are the simple, low-level types of AI with which people are most likely to interact, deployed mainly to drive efficiencies. At the other end of the spectrum are advanced systems that emulate human intelligence and can complete complex tasks creatively and abstractly.

Strictly speaking, the latter is still confined to the realm of Hollywood, but the race towards its realisation is accelerating.

Strategic technology trends

Gartner research recognises AI engineering as a top strategic technology trend, one that involves bringing engineering discipline to an organisation's AI efforts. It goes on to note that this is necessary because only 53% of projects make it from AI prototype to production.

The research also notes that more CIOs are turning to robotic process automation (RPA) to streamline enterprise operations and reduce costs.

RPA permits businesses to automate mundane, rules-based business processes, enabling users to devote more time to serving customers or to other higher-value work. Others view RPA as a stopgap en route to intelligent automation via machine learning and AI tools, which can be trained to make judgements about future outputs. A sketch of such a rules-based process is shown below.
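
To show how simple the rules-based core of RPA can be, here is a minimal sketch of an automated invoice-routing step. The queue names, fields and thresholds are hypothetical, invented for illustration.

```python
# A toy rules-based bot: incoming invoices are routed by fixed rules
# instead of by a person. All names and thresholds are hypothetical.

def route_invoice(invoice: dict) -> str:
    # Rule 1: anything above the approval threshold needs a human.
    if invoice["amount"] > 10_000:
        return "manual-review"
    # Rule 2: verified suppliers with a purchase-order number are auto-approved.
    if invoice["has_po_number"] and invoice["supplier_verified"]:
        return "auto-approve"
    # Everything else falls back to the standard processing queue.
    return "standard-queue"

invoices = [
    {"amount": 250, "has_po_number": True, "supplier_verified": True},
    {"amount": 50_000, "has_po_number": True, "supplier_verified": True},
    {"amount": 900, "has_po_number": False, "supplier_verified": True},
]
for inv in invoices:
    print(route_invoice(inv))  # auto-approve, manual-review, standard-queue
```

Because every decision is a fixed rule, such a bot is predictable and auditable, but unlike the machine-learning models sketched earlier it cannot improve with experience; closing that gap is what intelligent automation aims to do.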

In my next article, I will expand on the uses of AI and explain how it can be used to automate software testing, reducing costs while introducing levels of speed and scalability that would otherwise not be achievable.

Link to original article…
