Basic Dynamic Analysis

Table of Contents

1. Lecture 4 - Querzoni

2. Dynamic Analysis

Dynamic analysis consists of deliberately executing the malware while monitoring the results. It requires a safe environment: the most important side effect to avoid is the malware spreading into the production network, so whenever we execute a sample we must make sure the machine is air-gapped or at least isolated from the production network. All of the following approaches are fundamental and have to be combined to perform the analysis correctly. It is often better to start with basic tools, extract as much information as possible, and then use that information to guide the analysis with more complex tools.

2.1. Purpose of Dynamic Analysis

Static analysis can reach a dead end due to obfuscation, packing, or the lack or limitations of tools: malware authors put in place techniques whose purpose is to confuse the analyst and increase the complexity of the code. The main goal of dynamic analysis is to understand the malware's behavior through different approaches such as diffing, monitoring, tracing and debugging. This activity comes with some caveats: the malware author may insert evasion techniques that defeat our ability to perform dynamic analysis; for example, the code may contain checks that, when the sample detects it is running in an analysis environment, make it execute decoy functions, self-destruct, damage the machine, or simply quit.

2.2. Diffing

It is the most basic kind of dynamic analysis: take a snapshot of some characteristics of the testing environment, execute the malware, then take another snapshot and compare the two to understand which changes the malware produced on the system. The advantage of this technique is that artifacts can be observed easily; the drawbacks are that it can miss evidence created during the malware's activity and erased on purpose before the second snapshot, and that there is a limit to the number of variables that can be taken into account. Typically this technique is restricted to a limited set of aspects, such as the file system or registry keys. Tools that can be used include Regshot and Autoruns.
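
As a rough illustration, the following Python sketch applies the same before/after idea to a directory tree; the monitored path is a placeholder, and Regshot performs the analogous comparison for the registry.

    import hashlib
    import os

    def snapshot(root):
        """Map every file under `root` to the SHA-256 of its contents."""
        state = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as handle:
                        state[path] = hashlib.sha256(handle.read()).hexdigest()
                except OSError:
                    pass  # locked or vanished file: skip it
        return state

    def diff(before, after):
        """Return files created, deleted or modified between the snapshots."""
        created = sorted(set(after) - set(before))
        deleted = sorted(set(before) - set(after))
        modified = sorted(p for p in before.keys() & after.keys()
                          if before[p] != after[p])
        return created, deleted, modified

    MONITORED_PATH = r"C:\Users\analyst"   # placeholder: directory to watch
    before = snapshot(MONITORED_PATH)
    # ... detonate the sample in the isolated VM here ...
    after = snapshot(MONITORED_PATH)
    created, deleted, modified = diff(before, after)
    print("created:", created)
    print("deleted:", deleted)
    print("modified:", modified)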

2.3. System Monitoring

Here the idea is to start monitoring tools that enable the analyst to observe what happens during the malware's execution, for example which files it opens and how it interacts with the operating system and the network. Tools that can be used include Procmon and Wireshark. The problem with this approach is that it tends to collect too much information, and the analyst has to weed out irrelevant data (think of all the operating system processes running in the background).
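
To make the idea concrete, here is a minimal polling sketch (assuming the third-party psutil package) that logs new processes and outbound connections while the sample runs; dedicated tools such as Procmon capture the same kind of events far more completely.

    import time
    import psutil  # third-party: pip install psutil

    # Baseline: the processes that already exist before the sample is started.
    known_pids = {proc.pid for proc in psutil.process_iter()}
    seen_conns = set()

    for _ in range(60):                          # observe for about one minute
        for proc in psutil.process_iter(["pid", "name", "exe"]):
            if proc.pid not in known_pids:
                known_pids.add(proc.pid)
                print("new process:", proc.info)
        for conn in psutil.net_connections(kind="inet"):
            if conn.raddr and (conn.pid, conn.raddr) not in seen_conns:
                seen_conns.add((conn.pid, conn.raddr))
                print("new connection:", conn.pid, "->", conn.raddr)
        time.sleep(1)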

2.4. API Tracking

A similar approach to system monitoring is API tracing: the analyst collects and records all the important API calls made by the suspicious process. This approach provides visibility into activity beyond the typical file, process, registry and network events shown by other tools; it can also collect a large amount of data, but interpreting specific API calls takes less time than static analysis. If the malware runs at user level, just looking at its API calls can reveal a lot about its behavior; even though modern operating systems can detect software interacting with them in suspicious ways, it can be difficult for them to tell whether a process is using the OS APIs to perform an activity that is malicious for the user. Tools that can be used include WinAPIOverride and Rohitab API Monitor.

Some API calls can be related to malicious behavior, while others are so generic that they cannot be tied to malware at all. For this reason it is fundamental to identify which calls are relevant and which are not when analyzing a sample.
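
As a sketch of what an API tracer does under the hood, the snippet below (assuming the Frida dynamic instrumentation toolkit, which is not one of the tools named above, and a hypothetical process name) hooks a single Windows API and logs every file path the sample opens; dedicated tracers hook hundreds of calls and decode their arguments.

    import sys
    import frida  # third-party: pip install frida

    # JavaScript payload injected into the target: hook kernel32!CreateFileW
    # and report every path the process tries to open.
    JS = """
    const createFileW = Process.getModuleByName('kernel32.dll')
                               .getExportByName('CreateFileW');
    Interceptor.attach(createFileW, {
        onEnter(args) {
            send('CreateFileW: ' + args[0].readUtf16String());
        }
    });
    """

    session = frida.attach("sample.exe")     # hypothetical process name
    script = session.create_script(JS)
    script.on("message", lambda message, data: print(message.get("payload")))
    script.load()
    sys.stdin.read()                         # keep tracing until EOF / Ctrl+C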

2.5. Debugging

Set breakpoints inside the suspicious program to stop its execution at a given location and inspect its state. The problem with this approach is that it is potentially overwhelming; it is often better to focus on understanding how a few core functions work, applying debugging only to selected parts of the code.
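
For reference, this is a minimal sketch of the mechanism Windows debuggers build on, using the Win32 debugging API through ctypes (the PID is hypothetical): attach to the running sample and receive its debug events; breakpoints arrive as exception events, and a real debugger additionally lets you inspect registers and memory at each stop.

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.windll.kernel32

    class DEBUG_EVENT(ctypes.Structure):
        # Header fields of the Win32 DEBUG_EVENT structure; the per-event
        # union is kept as opaque padding because we only log the event code.
        _fields_ = [("dwDebugEventCode", wintypes.DWORD),
                    ("dwProcessId", wintypes.DWORD),
                    ("dwThreadId", wintypes.DWORD),
                    ("u", ctypes.c_ubyte * 256)]

    kernel32.WaitForDebugEvent.argtypes = [ctypes.POINTER(DEBUG_EVENT),
                                           wintypes.DWORD]

    PID = 4242                   # hypothetical PID of the running sample
    DBG_CONTINUE = 0x00010002
    INFINITE = 0xFFFFFFFF

    if not kernel32.DebugActiveProcess(PID):
        raise ctypes.WinError()

    event = DEBUG_EVENT()
    while kernel32.WaitForDebugEvent(ctypes.byref(event), INFINITE):
        # Event code 1 is EXCEPTION_DEBUG_EVENT: breakpoints and crashes land here.
        print("debug event", event.dwDebugEventCode,
              "pid", event.dwProcessId, "tid", event.dwThreadId)
        kernel32.ContinueDebugEvent(event.dwProcessId, event.dwThreadId,
                                    DBG_CONTINUE)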

2.6. Limits of Dynamic Analysis

There are some limitations that arise when performing dynamic analysis:

  • In general only a single execution path is examined, so the observations will not include the behavior of functions that are never executed during the run. To understand the full potential of the software it is necessary to combine static and dynamic analysis, in order to force the malware into taking specific paths during its execution. There are also more advanced techniques beyond debuggers, such as fuzzing and symbolic execution: fuzzers test the software with a wide range of inputs, while symbolic execution runs parts of the code on symbolic inputs, exploring multiple paths to better understand their behavior.
  • The analysis environment is hardly invisible to the malware; the question is how smart the malware is at detecting that it is running in an analysis environment. Some malware performs very naive checks, others perform an impressive number of checks in order to decide whether to take evasive actions. Some analysis tools are stealthier than others; anything that acts statically is completely invisible to the malware.
    • Program instrumentation can be used to perform dynamic analysis, but it can easily be identified by the malware; on the other hand, if you are able to understand when the malware performs its checks, it becomes easy to fool the checks themselves. Of course program instrumentation is the most fine-grained way to inspect the malware's execution (line-by-line).
    • Instrumentation can also be performed at the OS level: for example, antivirus software needs administrator privileges to observe how programs interact with the OS. In principle this approach is invisible to user-level software; in practice this is not really true, because OS-level software such as an antivirus usually has a user-level interface that can be spotted. The problem with OS instrumentation is that it does not work well against OS-level malware, and in general OS-level software has limited visibility into the activity inside a program when that activity does not go through the OS.
    • The third level of instrumentation is hardware instrumentation, which provides virtual hardware on which the sample executes. In principle it is completely transparent to the malware, but there are a couple of problems: for example, a virtualized environment exposes virtualized hardware to the guest OS that can be spotted by the malware (a minimal sketch of such a check follows this list). The other limitation of this approach is that the amount of information collected is enormous, and it is difficult to isolate the actions performed by the malware; typically this technique is used inside sandboxes, which limit the huge amount of information collected. Several sandboxes are available; they are expensive but easy to use because they automate dynamic analysis (for example Joe Sandbox).
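
The check referenced above could be as simple as the following Python sketch, one example of the kind of naive environment check real samples perform: it looks for virtualization vendor strings in the BIOS description that the hypervisor exposes to the guest.

    import winreg  # Windows-only standard-library module

    # Vendor strings commonly left in the virtualized BIOS description.
    VM_MARKERS = ("vmware", "virtualbox", "vbox", "qemu", "xen")

    def looks_like_vm():
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                             r"HARDWARE\DESCRIPTION\System")
        try:
            for value_name in ("SystemBiosVersion", "VideoBiosVersion"):
                try:
                    value, _kind = winreg.QueryValueEx(key, value_name)
                except OSError:
                    continue
                text = " ".join(value) if isinstance(value, list) else str(value)
                if any(marker in text.lower() for marker in VM_MARKERS):
                    return True
        finally:
            winreg.CloseKey(key)
        return False

    print("analysis environment suspected" if looks_like_vm()
          else "no obvious VM markers")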

2.6.1. Virtual Machines

Using VMs is the most common way of analyzing malware: they make it easy to take snapshots and restore a clean environment, and of course they protect the host machine from the malware. The virtualized environment can also be used to create a virtual network, so that after analyzing the malware in isolation it is possible to expose another fake machine to it and observe which kind of information the malware is sending.
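
As an illustration, the fake machine can run something as simple as the following Python listener (the port is an assumption; tools such as INetSim provide much richer fake services): it logs whatever the sample sends and never forwards anything to the real Internet.

    import socket

    # Listen on a port the sample is expected to contact (80 is an assumption)
    # and log whatever it sends, without forwarding anything to the Internet.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 80))
    listener.listen(5)

    while True:
        conn, addr = listener.accept()
        data = conn.recv(4096)
        print(f"{addr[0]}:{addr[1]} sent {len(data)} bytes: {data!r}")
        # Reply with a bland HTTP response so the malware keeps talking.
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
        conn.close()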

There are several proof-of-concept VM-escape attacks, but we can assume that a VM environment is safe enough.

2.6.2. Real Machines

3. Launching Malware

Running executable files is easy, but malware can also hide inside DLLs; to run them it is possible to use RUNDLL32.exe, provided by Microsoft, which loads a DLL and calls one of its exported functions.
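
RUNDLL32 takes the DLL path and the name of an exported function on its command line, for example rundll32.exe suspicious.dll,InstallService (both names here are hypothetical; the real export names can be recovered from the DLL's export table during static analysis). A small Python wrapper for launching it from an analysis script might look like this:

    import subprocess

    # RUNDLL32 syntax: rundll32.exe <DLL path>,<exported function> [arguments]
    # Both the DLL path and the export name below are hypothetical.
    subprocess.run([r"C:\Windows\System32\rundll32.exe",
                    r"C:\samples\suspicious.dll,InstallService"])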

Author: Andrea Ercolino

Created: 2022-12-12 Mon 12:09