"The Codenomicon tools are amazing. Using them is like being attacked by the most relentless adversary who uses every possible method to find flaws in your code.

We fixed subtle crash bugs in Samba that had been in the code for over ten years. We would never have found those bugs without the Codenomicon tools.

If you're serious about implementing protocols correctly, you need the Codenomicon tools."

-- Jeremy Allison,
   Co-creator of Samba


The BUZZ on FUZZING


Fuzzing or fuzz testing is a negative software testing method that feeds a program, device or system with malformed and unexpected input data in order to find defects.
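
The core idea can be illustrated with a deliberately minimal mutation fuzzer. This is a sketch only; the function and parameter names are illustrative and do not come from any particular tool:

```python
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Return a copy of data with a few randomly chosen bits flipped."""
    buf = bytearray(data)
    for _ in range(n_flips):
        pos = random.randrange(len(buf))
        buf[pos] ^= 1 << random.randrange(8)   # flip one bit at a random offset
    return bytes(buf)

def fuzz(target, seed: bytes, iterations: int = 1000):
    """Feed mutated copies of seed to target; collect inputs that make it raise."""
    crashers = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            target(case)
        except Exception:
            crashers.append(case)   # a defect was triggered by malformed input
    return crashers
```

Real fuzzers are far more sophisticated, but the loop captures the essence: generate malformed input, feed it to the system, and watch for failures.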

Learn more about the world of fuzzing and the Codenomicon difference:

Introduction >

Fuzzing Value >

Fuzzing, Where It Fits >

Fuzzing, Frameworks >

Fuzzing, Test Suites >

Fuzzing, Static vs. Model >

The Codenomicon Difference >

Introduction

Fuzzing enables software testers, developers and auditors to easily find defects that can be triggered by malformed inputs via external interfaces. This means that fuzzing is able to cover the most exposed and critical attack surfaces in a system relatively well, and identify many common errors and potential vulnerabilities quickly and cost-effectively.

The best-known example of early fuzzing is the 1989 work by Miller et al., who tested Unix command-line tools with fuzzed parameters (see http://www.cs.wisc.edu/~bart/fuzz/), although the idea itself can be attributed to multiple earlier sources. Over the last 10-15 years, fuzzing has gradually developed into a full testing discipline with support from both the security research and traditional QA testing communities, although misconceptions about its capabilities, effectiveness and practical implementation still persist.

<top>

Fuzzing Value

Fuzzing is especially useful in analyzing proprietary systems, as it does not require any access to source code. The system under test can be viewed as a black-box, with one or more external interfaces available for injecting tests, but without any other information available on the internals of the tested system. Having access to information such as source code, design or implementation specifications, debugging or profiling hooks, logging output, or details on the state of the system under test or its operational environment will help in root cause analysis of any problems that are found, but none of this is strictly necessary. Having any of this information available turns the process into gray-box testing, and is recommended for organizations that have access to the details of the systems under test.

A practical example of fuzzing would be to send malformed HTTP requests to a web server, or to create malformed Word document files to be opened in a word processing application. The programs used to create fuzz tests or perform fuzz testing are commonly called fuzzers. Dozens of commercial and free fuzz testing frameworks and fuzzers of highly varying quality currently exist. Some are oriented towards testing only one or a few interfaces, while others are open frameworks for creating fuzz tests for any structured data.
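
The HTTP case can be sketched as follows. The request anomalies and the helper below are illustrative examples only, not output from any real fuzzer:

```python
import socket

# A few hand-written anomalies for an HTTP request; a real fuzzer would
# generate thousands of such cases systematically.
MALFORMED_REQUESTS = [
    b"GET " + b"/" * 65536 + b" HTTP/1.1\r\nHost: x\r\n\r\n",  # oversized path
    b"GET / HTTP/999.999\r\nHost: x\r\n\r\n",                  # absurd version
    b"GET / HTTP/1.1\r\nContent-Length: -1\r\n\r\n",           # negative length
    b"\x00\xff\xfe GET / HTTP/1.1\r\n\r\n",                    # binary garbage
]

def send_case(host: str, port: int, payload: bytes, timeout: float = 2.0) -> bool:
    """Send one test case; return False if the server cannot be reached or
    stops answering, which is the crudest possible failure detector."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(payload)
            s.recv(1024)   # any answer, even an error response, counts as alive
        return True
    except OSError:
        return False
```

A production fuzzer adds far better instrumentation (process monitors, valid-case health checks between anomalies), but the injection principle is the same.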

<top>

Fuzzing, Where It Fits

Fuzzing is often compared to code auditing and other white-box testing methods. Code auditing looks at the source code of a system in an attempt to discover defective programming constructs or expressions. While code auditing is another highly valuable technique in a software tester's or developer's toolbox, the two methods are really complementary. Fuzzing focuses on finding critical defects quickly, and the errors it finds are usually real. Fuzzing can also be performed without any understanding of the inner workings of the tested system, whereas code auditing requires full access to source code. Code auditing is usually able to uncover more issues over time, but it also produces more false positives, which must be manually verified by an expert before they can be declared real, critical errors.

Neither fuzzing nor code auditing is able to provably find all possible bugs and defects in a tested system or program. As a rule of thumb, the effectiveness of fuzzing depends on how thoroughly it covers the input space of the tested interface (input space coverage), and on how good the malicious and malformed inputs used to test each element or structure within the tested interface definition are (quality of generated inputs). It is generally accepted that fuzzers which are not fully model-based, or which populate their tests with random or semi-random data, are very inefficient and can only find the simplest programming errors.
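
To illustrate "quality of generated inputs": rather than relying on random data, a model-based fuzzer derives targeted boundary values for each element from the interface definition. A hypothetical sketch for a single 16-bit length field:

```python
def length_field_anomalies(actual_len: int) -> list:
    """Targeted test values for a 16-bit length field: off-by-one errors,
    signedness/overflow boundaries, and a value claiming more data than
    was sent. Negative intermediates wrap via the mask (e.g. -1 -> 0xFFFF)."""
    candidates = [
        0, 1,                            # degenerate lengths
        actual_len - 1, actual_len + 1,  # off-by-one errors
        0x7FFF, 0x8000, 0xFFFF,          # signedness and overflow boundaries
        actual_len * 2,                  # claims more data than is present
    ]
    return sorted(set(v & 0xFFFF for v in candidates))
```

A handful of such deliberately chosen values per element exercises parser edge cases that purely random data would hit only by luck.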

<top>

Fuzzing, Frameworks

Some fuzzers are implemented as fuzzing frameworks, meaning that they provide the end-user with a platform for creating fuzz tests. Fuzzing frameworks typically require a considerable investment of time and resources to develop tests for a new interface. If the framework does not offer ready-made test data for common structures and elements, efficient testing also requires considerable expertise in designing inputs that are able to trigger faults in the tested interface. Some fuzzing frameworks integrate user-contributed test modules, interface definitions or other auxiliary components back into the platform, bringing new tests within the reach of other users, but for the most part fuzzing frameworks require tests to be implemented from scratch by each end-user. These factors considerably limit the accessibility, usability and applicability of fuzzing frameworks.
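
As a rough sketch of the work a framework user takes on, consider a hypothetical block-based message model. The declarative style loosely mimics open fuzzing frameworks; every name and value here is illustrative, not taken from a real tool:

```python
# Each field has a name, a valid value, and a list of anomalies to
# substitute for it. Writing such a model (and good anomalies) for a new
# interface is where the framework user's effort and expertise go.
MESSAGE_MODEL = [
    ("method",  b"GET",      [b"", b"G" * 10000, b"\x00GET"]),
    ("sp1",     b" ",        []),                 # fixed delimiter, not fuzzed
    ("path",    b"/index",   [b"/" * 4096, b"/%n%n%n"]),
    ("sp2",     b" ",        []),
    ("version", b"HTTP/1.1", [b"HTTP/9.9", b"HTTP/"]),
]

def generate_cases(model):
    """Fuzz one field at a time while keeping every other field valid."""
    cases = []
    for i, (_name, _valid, anomalies) in enumerate(model):
        for bad in anomalies:
            parts = [bad if j == i else valid
                     for j, (_n, valid, _a) in enumerate(model)]
            cases.append(b"".join(parts))
    return cases
```

Even this toy model shows why ready-made test content matters: without it, each end-user must write both the structure and the anomalies from scratch.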

<top>

Fuzzing, Test Suites

In contrast to fuzzing frameworks, another category of fuzzers consists of test suite-based fuzzers. These package a set of tests that have been pregenerated with fuzzing methods into a test suite or test tool that can be used in actual testing. In this approach, tests are designed in advance and implemented in some kind of modeling language (open or proprietary), and the test definitions are then packaged into an application for running the tests. Usually this type of fuzzing requires only minimal work from the end-user (tester), as the interface model and the test case definitions have already been created beforehand. The tester only needs to configure a few basic settings to start running the tests, and does not even need to fully understand the specifications or other details of the tested protocol or file format. This type of testing is ideally suited for testers, developers, network and system administrators, and security consultants alike.

<top>

Fuzzing, Static vs. Model

Another dimension for comparing fuzzers is whether or not they are model-based. A fully model-based fuzzer is able to test an interface far more completely and thoroughly than a static, non-stateful fuzzer, which may not be able to simulate a protocol any deeper than the initial packet; in practice, this usually makes model-based fuzzers much more effective at discovering flaws. A simplistic fuzzer cannot test any interface very thoroughly, and provides only limited test results and poor coverage.

Static fuzzers may be unable to modify their outputs at runtime, and therefore lack even rudimentary protocol operations such as length calculations, cryptographic operations, copying structures from incoming messages into outgoing traffic, or adapting to the exact capabilities (protocol extensions, used profiles) of a particular system under test. In contrast, model-based fuzzers can emulate a protocol or file format interface almost completely, allowing them to follow the inner workings of the tested interface and perform whatever runtime calculations and other dynamic behavior are needed to achieve full interoperability with the tested system. For this reason, tests executed by a fully model-based fuzzer usually penetrate much deeper into the system under test, exercising the packet parsing and input handling routines extremely thoroughly and reaching all the way into the state machine and even the output generation routines.
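
The length-calculation point can be made concrete. In the hypothetical sketch below, a static fuzzer splices an anomaly into a prebuilt message and leaves the length field stale, while a model-based fuzzer rebuilds the message so the length stays consistent and the anomaly can reach deeper parsing logic (the message format is invented for illustration):

```python
import struct

def build_message(payload: bytes) -> bytes:
    """Toy TLV-style message: 1-byte type, 2-byte big-endian length, payload."""
    return struct.pack(">BH", 0x01, len(payload)) + payload

def static_case(valid_payload: bytes, anomaly: bytes) -> bytes:
    """Static fuzzing: splice an anomaly into a prebuilt message. The length
    field goes stale, so many parsers reject the message immediately."""
    msg = build_message(valid_payload)
    return msg[:3] + anomaly        # header still advertises the old length

def model_based_case(anomaly: bytes) -> bytes:
    """Model-based fuzzing: rebuild the message around the anomaly, keeping
    the length field consistent so the test reaches deeper parsing logic."""
    return build_message(anomaly)
```

The same pattern applies to checksums, session tokens copied from server responses, and negotiated protocol extensions: anything the fuzzer must recompute at runtime to stay interoperable.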

[Figure: DEFENSICS Architecture]

<top>

Fuzzing, The Codenomicon Difference

Codenomicon DEFENSICS fuzzers are industry-leading, highly intelligent, fully model-based fuzzers that feature hundreds of thousands of meticulously designed, accurately targeted, ready-made test definitions for more than 150 supported network protocols, wireless interfaces and digital media formats. Our long-standing research background allows us to create extremely effective tests for any interface. Our unique DEFENSICS platform technology provides the capability to model any protocol or digital media format very efficiently, intelligently and thoroughly to achieve maximum functionality for practical fuzz testing. Our test methodology has been publicly reviewed, verified and proven effective in practice by hundreds of vendors and other organizations worldwide. DEFENSICS fuzzers are extremely easy to install, set up and configure, empowering users to enjoy the benefits of top-of-the-line fuzz testing extremely quickly. DEFENSICS testers have been designed to integrate well with existing test management frameworks, to be fully portable across multiple systems and platforms, to provide automated, repeatable tests for all supported interfaces, and to output extremely detailed reports for all test results.

<top>