Runtime Intelligence, meet AI
Combining the two types of intelligence into state-of-the-art visibility and context

At Kodem, it’s all about runtime.

Kodem’s runtime intelligence is a game changer for vulnerability management and open source security. It gives security practitioners deep visibility into how their applications actually behave at runtime, so they can focus on real threats, backed by automatic risk scoring, automated triage, and built-in false-positive elimination.

Runtime intelligence not only surfaces the full application context in which the code executes (the ‘where’ and the ‘how’), but also provides deep visibility into the specific code blocks, functions, methods, and symbols being used (the ‘what’), and determines whether they are used in a vulnerable way.

This deep visibility is far more granular than traditional vulnerability databases (such as the NVD, the National Vulnerability Database) and traditional vulnerability assessment procedures, which rely on generic threat indicators to calculate the risk of a given software package. That gap raises doubts about whether runtime intelligence findings can even be enriched with existing crowdsourced databases.

There is a consensus, though: generic risk assessment doesn’t cut it anymore. It’s crucial to understand in depth how a vulnerability poses risk and can be exploited by attackers in a specific environment, a task long considered daunting manual work that requires domain expertise and cannot be accomplished at scale.

This is where AI comes in!

Large Language Models (LLMs for short, such as ChatGPT, Llama, and others) are a groundbreaking technology in many fields. These models can analyze and summarize large amounts of information, generate content, support advanced semantic search, and much more.

At Kodem, we're most excited about combining artificial intelligence (AI) with our runtime intelligence capabilities. Bringing the two together helps us offer a more precise and effective service to our customers.

We use AI in different ways to simplify application security: combining it with runtime intelligence to contextualize risk assessment and prioritization, generating alerts, and optimizing remediation processes. One of the most interesting ways we leverage AI is to analyze the exploitability of open source software at the function level.

Automatically analyzing function-level exploitability

Kodem runtime intelligence identifies the software packages that are actually used at runtime and whether they are used in a vulnerable way. This significantly cuts down the amount of irrelevant data, or 'noise', that security teams have to sift through. But our efforts to eliminate false positives for our customers do not end there. We take it a step further by merging our runtime capabilities, which pinpoint the functions that are actually in use, with LLM-based techniques that can identify potentially exploitable functions. This powerful combination allows us to further minimize the noise that security teams have to deal with, making their work more efficient and effective.

First, we harness the potent code-analysis capabilities of LLMs to identify vulnerable segments of code. LLMs’ ability to process large contexts and learn new tasks enables us to pinpoint these weak spots. The beauty of open-source software (OSS) is that it provides a wealth of data sources for this purpose, ranging from the code itself to online discussions and posts. All of this lets us achieve extensive coverage of vulnerable code with precision.
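
As a rough illustration of this step, the sketch below (in Go, to match the example later in this post) shows one way an LLM could be asked to narrow an advisory down to specific functions. The LLM interface, the prompt, and the VulnerableFunctions helper are hypothetical stand-ins for illustration, not Kodem's actual pipeline.

```go
package exploitability

import (
	"context"
	"fmt"
	"strings"
)

// LLM is a hypothetical stand-in for whichever model provider is used;
// the post does not describe the actual integration.
type LLM interface {
	Complete(ctx context.Context, prompt string) (string, error)
}

// VulnerableFunctions asks the model which of the candidate functions implement
// the behaviour described in a vulnerability advisory. advisory is free text
// (e.g. a CVE description plus linked discussions); candidates are fully
// qualified function names extracted from the package source.
func VulnerableFunctions(ctx context.Context, model LLM, advisory string, candidates []string) ([]string, error) {
	prompt := fmt.Sprintf(
		"Advisory:\n%s\n\nCandidate functions:\n%s\n\nList only the functions that implement the vulnerable behaviour, one per line.",
		advisory, strings.Join(candidates, "\n"))

	answer, err := model.Complete(ctx, prompt)
	if err != nil {
		return nil, err
	}

	// Accept only names that match a known candidate, guarding against hallucinated output.
	known := make(map[string]bool, len(candidates))
	for _, c := range candidates {
		known[c] = true
	}
	var out []string
	for _, line := range strings.Split(answer, "\n") {
		if name := strings.TrimSpace(line); known[name] {
			out = append(out, name)
		}
	}
	return out, nil
}
```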

To determine exploitability, we must identify which vulnerable functions are used in a running application and how they are used. Kodem closely monitors the functions that actually execute while maintaining an extremely low performance impact.
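
To make the idea concrete, here is a toy sketch of recording which functions actually execute. Kodem's agent works without code changes and at far lower overhead; this illustration simply uses Go's runtime package to capture the caller's name from an explicit hook.

```go
package exploitability

import (
	"runtime"
	"sync"
)

// observed collects the fully qualified names of functions seen executing.
var observed sync.Map

// Trace records the name of the function that called it. In this toy version the
// hook must be placed manually; a real agent would observe execution transparently.
func Trace() {
	pc, _, _, ok := runtime.Caller(1) // skip one frame to reach the instrumented function
	if !ok {
		return
	}
	if fn := runtime.FuncForPC(pc); fn != nil {
		observed.Store(fn.Name(), true)
	}
}

// ObservedFunctions returns a snapshot of the function names seen so far.
func ObservedFunctions() map[string]bool {
	out := make(map[string]bool)
	observed.Range(func(key, _ any) bool {
		out[key.(string)] = true
		return true
	})
	return out
}
```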

Putting these two capabilities together, by matching the vulnerable functions we have pinpointed against the functions actually triggered at runtime, we can eliminate false positives and highlight the truly vulnerable parts of the code.
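
In terms of the sketches above, that matching step boils down to a set intersection: keep only the vulnerable functions that were actually observed running, and deprioritize everything else as noise.

```go
package exploitability

// ExploitableAtRuntime keeps only the vulnerable functions that were actually
// observed executing; the rest can safely be pushed down the priority list.
func ExploitableAtRuntime(vulnerable []string, observed map[string]bool) []string {
	var hits []string
	for _, fn := range vulnerable {
		if observed[fn] {
			hits = append(hits, fn)
		}
	}
	return hits
}
```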

Take, for instance, the Go package “golang.org/x/text”, an open source library for text processing. This package is commonly used, both directly and indirectly, in customer environments and is associated with a known denial-of-service vulnerability that is relatively easy to exploit in versions prior to 0.3.8 (CVE-2022-32149).

Upgrading the package to version 0.3.8 would resolve the issue. However, since the package is widely used, both directly and indirectly, upgrading it everywhere would be time consuming and eat into valuable development and QA time.

Looking closer, this package exposes many usable functions, but exploiting the associated vulnerability is possible if and only if one specific function is triggered at runtime (the ParseAcceptLanguage function, if you’re asking). If that function is effectively never used, you can safely give this fix a lower priority. If, on the other hand, your application does call the exploitable function, you are likely to give it a higher priority.
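
As a hypothetical illustration (not taken from any real customer code), a service like the one below makes the vulnerability reachable, because it feeds the attacker-controlled Accept-Language header straight into language.ParseAcceptLanguage. A service that never calls that function would not be exposed in practice, even if it depends on a vulnerable x/text version.

```go
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/text/language"
)

// localizedHandler parses the attacker-controlled Accept-Language header with the
// vulnerable function, so CVE-2022-32149 is reachable when x/text < 0.3.8 is used.
func localizedHandler(w http.ResponseWriter, r *http.Request) {
	tags, _, err := language.ParseAcceptLanguage(r.Header.Get("Accept-Language"))
	if err != nil {
		http.Error(w, "bad Accept-Language header", http.StatusBadRequest)
		return
	}
	fmt.Fprintf(w, "preferred languages: %v\n", tags)
}

func main() {
	http.HandleFunc("/", localizedHandler)
	http.ListenAndServe(":8080", nil)
}
```

If your scanner flags the dependency but runtime intelligence shows ParseAcceptLanguage never fires, the upgrade can wait for a regular maintenance window; if it does fire on untrusted input, it belongs at the top of the queue.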

These are exciting times in application security.

Generative AI is rapidly changing how we develop and secure applications and has multiple advantages that can revolutionize the application security paradigm.

Combining these advantages with the best-in-class visibility and context of Kodem runtime intelligence opens the door to even better contextualized risk scoring, prioritization methods, and actionability.

If you want to hear more about leveraging AI for your application security program, feel free to contact us and book an interactive session with me or one of Kodem’s AppSec experts.

Tags: #AI #Runtime Intelligence