Artificial intelligence tools enhance application security

Unpacking some of the magic surrounding artificial intelligence and revealing what it can do to secure business applications.

Artificial intelligence (AI) is not just the flavour of the month; it may well prove to be the game-changer of the century. That said, information about it is often buried in jargon that conceals the business application potential of this remarkable technology.

I would like to unpack some of that magic and reveal what it can do to secure business applications.

Next-generation AI tools are significantly improving organisations’ overall security posture by adding new testing layers and reducing risk.

Global ICT vendors are investing hundreds of millions in the development of solutions that aim to unlock the potential of AI. These solutions use large language models (LLMs) to solve complex problems and address key challenges in enterprise application development.

In a bid to remain competitive, businesses are increasingly adopting agile development practices, such as DevOps, to keep abreast of commercial demand. This has pressured developers to produce applications more quickly, and the fastest way to do that is to use open source software (OSS) components.

Open source refers to any software whose source code is publicly accessible and can be modified and shared by anyone. Because these components are distributed freely, they are cost-effective, and many developers benefit by starting with OSS and then modifying it to add the functionality they want.

Source code is the portion of software users don’t see; it’s the code programmers create and edit to change how the software works. With access to a program’s source code, developers or programmers can improve the software by adding features or fixing parts that don’t work correctly.

I would first like to expand on LLMs, and then I’ll get to open source in more detail. An LLM is a type of AI algorithm that uses deep learning techniques and massively large data sets to understand, summarise, generate and predict new content.

In the AI world, a language model serves a purpose similar to that of human language: it provides a basis for communicating and for generating new concepts. The term generative AI is closely connected with LLMs, which have been specifically designed to help generate text-based content.

AI language models trace their roots to ELIZA, which is reported to have debuted at MIT in 1966 and is one of the earliest examples.

All language models are first trained on a set of data; the model then makes use of various techniques to infer relationships within that data before ultimately generating new content based on what it has learned.

An LLM is the evolution of the language model concept in AI, dramatically expanding the data used for training and inference. This, in turn, provides a massive increase in the capabilities of the AI model.

While there isn’t a universally accepted figure for how large the training data set needs to be, an LLM typically has at least one billion parameters. A parameter is the machine learning term for a variable the model learns during training and can use to infer new content.
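
To make that concrete, here is a deliberately minimal sketch in Python of the train, infer, generate cycle: a toy bigram model whose “parameters” are simply the word-to-word transition counts observed in a tiny training corpus. This illustrates the language model concept only, not an LLM; a real LLM replaces this frequency table with billions of learned parameters.

```python
import random
from collections import defaultdict

# Toy bigram language model: its "parameters" are simply the
# observed word-to-word transitions in the training data.
def train(corpus: str) -> dict:
    model = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)   # record a relationship
    return model

def generate(model: dict, seed: str, length: int = 10) -> str:
    word, output = seed, [seed]
    for _ in range(length):
        followers = model.get(word)
        if not followers:                # no learned relationship: stop
            break
        word = random.choice(followers)  # sample from learned relationships
        output.append(word)
    return " ".join(output)

# Train on a tiny corpus, then generate new content from what was learned.
corpus = "open source code is shared freely and open source code is modified"
print(generate(train(corpus), seed="open"))
```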

Are there benefits to using open source software, and if yes, what are they?

There is a lot of pressure on developers today to build and deploy applications more quickly. To successfully achieve their goals within short software release cycles, developers frequently use OSS components. The benefits of this include flexibility, cost, transparency, reliability and collaboration.

Now let’s get to the important matter of open source security. This practice is also known as software composition analysis (SCA): a methodology that provides users with better visibility into the open source inventory of their applications.
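
As a rough sketch of that first step, building the inventory, the snippet below parses a pinned Python requirements.txt manifest into a list of component names and versions. It is a simplified illustration that assumes dependencies are pinned with ==; a real SCA tool would also resolve transitive dependencies and cover many package ecosystems.

```python
from pathlib import Path

def read_inventory(manifest: str = "requirements.txt") -> list[tuple[str, str]]:
    """Parse pinned 'name==version' lines into an open source inventory."""
    inventory = []
    for line in Path(manifest).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip blanks, comments and unpinned requirements
        name, version = line.split("==", 1)
        inventory.append((name.strip(), version.strip()))
    return inventory

# Example output: [('jinja2', '2.10'), ('requests', '2.19.1')]
print(read_inventory())
```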

The most pressing question around open source security pertains to risk. No technology specialist would disagree that this risk must be managed: some open source components are vulnerable from the outset, while others deteriorate over time.

The average company downloads more than 300 000 open source components annually. In 2018, across billions of open source component release downloads, roughly one in 10 (10.3%) had known security vulnerabilities, and 51% of JavaScript package downloads contained known security vulnerabilities.

Weaknesses like these have contributed to a 71% increase in confirmed or suspected open source-related breaches since 2014. This may be disconcerting, but it will not stop the necessary use of OSS components, because there is simply no other way to produce applications at the speed at which businesses demand them.

The only way around this issue is to ensure companies are equipped to identify open source vulnerabilities in their software. It is crucial to secure the code consumed from open source components, not just the code developers write themselves.

There must be systems in place that continuously and automatically scan the code drawn from open source environments. Only in this way will companies be able to identify and remediate vulnerabilities at the source.
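
One minimal way to automate such a check, sketched below, is to take the inventory from the earlier example and query a public vulnerability database for each component. This example uses the OSV.dev query API; the package names and versions are purely illustrative, and a production pipeline would run a check like this on every commit or build.

```python
import json
import urllib.request

OSV_API = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Ask the OSV database for advisories affecting one pinned component."""
    query = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    request = urllib.request.Request(
        OSV_API, data=query, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])

# Illustrative inventory; in practice this would come from the manifest scan.
for name, version in [("jinja2", "2.10"), ("requests", "2.19.1")]:
    advisories = known_vulnerabilities(name, version)
    print(f"{name} {version}: {len(advisories)} known advisories")
```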

Precise open source intelligence solutions can provide a 360-degree view of application security issues across custom code and open source components, and this can be done in a single scan.

Moreover, machine learning-assisted auditing solutions can reduce noise and false positives with over 98% accuracy, streamlining security and improving developer efficiency. Now that’s impressive.
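
As a hedged illustration of how such auditing can work, the sketch below trains a simple classifier on findings that human reviewers have already labelled as real or false, then scores a new finding so that low-probability noise can be suppressed. The features and data are invented for the example; commercial tools learn from far richer signals and much larger labelled corpora.

```python
from sklearn.linear_model import LogisticRegression

# Each finding is reduced to toy numeric features, e.g.:
# [severity score, taint-path length, 1 if located in test code else 0]
findings = [
    [9.0, 2, 0], [7.5, 3, 0], [4.0, 8, 1],
    [3.0, 9, 1], [8.0, 1, 0], [2.5, 7, 1],
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = confirmed vulnerability, 0 = false positive

# Learn which combinations of features tend to mark real issues.
triage = LogisticRegression().fit(findings, labels)

# Score a new finding; low scores can be filtered out as likely noise.
probability = triage.predict_proba([[8.5, 2, 0]])[0][1]
print(f"Probability this finding is real: {probability:.2f}")
```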

In my next article, I will expand on the value AI brings to DevSecOps.

Written by: Paul Meyer, Security Solutions Executive, iOCO Tech
