teclo/flow-disruptor


Introduction

The flow-disruptor is a deterministic per-flow network condition simulator. To unpack that description a bit, per-flow means that the network conditions are maintained and simulated separately for each TCP connection (and in fact for each direction of the connection).

Deterministic means that we normalize as many network conditions as possible (e.g. RTTs, throughput), and that any changes in conditions happen exactly at preconfigured times rather than randomly. For example, the configuration could specify that the connection experiences a packet loss exactly 5s after it was initiated, and then a packet loss every 1s after that.

Why write yet another network simulator? The use case here is fairly specific: we want to compare the behavior of different TCP implementations under controlled conditions. Doing this with random packet loss, or with different connections having different RTTs, would be quite tricky.

Configuration

The configuration file must be passed in using the --config command line argument, and is in protocol buffer text format. A SIGHUP will cause flow-disruptor to re-read the config file.

Example: The following configuration sets up a traffic profile for connections to 10.0.1.56:45004, with an RTT of 0.5 seconds, a downlink throughput throttle of 20Mbps, and an uplink throttle of 1Mbps. A packet loss is caused after the first 100kB is transferred, and at 200kB intervals after that. Additionally, 5 seconds after the start of the connection the downlink throttle is reduced to 10Mbps for 5 seconds, and then raised back to the original 20Mbps. This is repeated at 10 second intervals.

profile {
    id: "500ms-20Mbps"
    filter: "tcp and host 10.0.1.56 and src port 45004"
    target_rtt: 0.50
    dump_pcap: true

    downlink {
        throughput_kbps: 20000
        volume_event {
            trigger_at_bytes: 100000
            repeat_after_bytes: 200000
            effect {
                drop_bytes: 1000
            }
        }       
    }
    uplink {
        throughput_kbps: 1000
    }

    timed_event {
        trigger_time: 5.0
        duration: 5.0
        repeat_interval: 10.0

        downlink {
            throughput_kbps_change: -10000
        }
    }
}

The configuration consists of a number of profiles, each matching some traffic based on a pcap filter and specifying the network settings for those connections. The filter is only checked for the SYN packet, after that the connection is permanently tied to that profile. The profiles should be specified in the config file in priority order; if a SYN matches multiple profiles, the earliest one that matches will be used.
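
As an illustrative sketch (the IDs, filter, and values here are made up, and most fields are omitted), a configuration with two profiles in priority order could look like the following; the second profile has an empty filter and therefore acts as a catch-all for any SYN the first one does not match:

profile {
    id: "port-45004"
    filter: "tcp and dst port 45004"
    target_rtt: 0.5
}
profile {
    id: "catch-all"
    filter: ""
    target_rtt: 0.1
}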

Profile

string id: Unique ID; used for matching profiles when the configuration is reloaded, and as a component of the trace file names.

string filter: The pcap filter for the traffic that matches this profile. If multiple profiles match, the first one will be used. The filter is only checked for the SYN packet, not for later ones. An empty or unspecified filter matches all traffic.

bool dump_pcap: If true, generate two trace files for each connection matching this profile (one file for each interface).

double target_rtt: The RTT target (uplink RTT + downlink RTT + artificial delay) for this connection. In seconds.

LinkProperties downlink, LinkProperties uplink: Initial link properties for each interface.

repeated TimedEvent timed_event: Events that happen to the connection at specified times.

LinkProperties

uint32 throughput_kbps: Maximum throughput for data sent toward this interface.

uint32 max_queue_bytes: Maximum amount of data that can be queued due to the throughput limit (there’s no limit on the queuing from artificial latency). Any packets in excess of the maximum are dropped.

repeated VolumeTriggeredEvent volume_event: Events that happen based on amount of data transferred.
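
For example (a sketch with made-up values), a downlink throttled to 20Mbps and allowed to queue at most 64kB of backlog before dropping packets could be written as:

downlink {
    throughput_kbps: 20000
    max_queue_bytes: 64000
}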

TimedEvent

double trigger_time: Time (in seconds from the beginning of the connection) at which the event happens.

double duration: How long (in seconds) the event lasts.

double repeat_interval: If non-zero, the event will be repeated at this interval (in seconds).

Effect effect: What actually happens when this event is triggered.

VolumeTriggeredEvent

VolumeTriggeredEvents happen to a connection half, based on the amount of data transferred in that direction.

uint64 trigger_at_bytes: The event is triggered once this many bytes have been sent.

uint64 active_for_bytes: The event stays active until this many bytes have been sent after it was triggered.

uint64 repeat_after_bytes: If non-zero, repeat the event this many bytes after the previous time the event happened.

LinkPropertiesChange effect: What actually happens when this event is triggered.
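
As a sketch (with made-up byte counts), a volume-triggered event that temporarily halves the downlink throughput after the first 500kB, stays active for the next 250kB, and then repeats every 1MB could look like this, following the same structure as the volume_event in the example profile above:

downlink {
    throughput_kbps: 20000
    volume_event {
        trigger_at_bytes: 500000
        active_for_bytes: 250000
        repeat_after_bytes: 1000000
        effect {
            throughput_kbps_change: -10000
        }
    }
}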

Effect

double extra_rtt: Add this amount of extra delay to the connection.

LinkPropertiesChange downlink, LinkPropertiesChange uplink: Changes to the properties of just one of the two interfaces.

LinkPropertiesChange

int32 throughput_kbps_change: Change the link throughput by this amount. When the event is over, undo the change.

int32 drop_bytes: Drop all packets until at least this many bytes have been dropped.
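
As a final sketch, a timed event that temporarily adds extra delay could be written as below. The values are made up, the extra_rtt unit is assumed to be seconds (matching target_rtt), and the effect nesting follows the TimedEvent field list above:

timed_event {
    trigger_time: 10.0
    duration: 2.0

    effect {
        extra_rtt: 0.2
    }
}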

Installation

Install the following dependencies.

  • C++11 compiler
  • CMake
  • GFlags
  • libpcap
  • libev
  • protobuf

Build with cmake . && make; the output will be in bin/flow-disruptor/.

Running

The program functions as a layer 2 bridge between two network interfaces. In a normal setup, the easiest way to pass all traffic from a single machine through it is to set up a pair of veth interfaces.

You’ll probably also want to make sure that all segmentation offload functionality is turned off on both network interfaces (ethtool -k ... to check, ethtool -K ... to turn it off).

See run.sh in the repository for an example of this setup.
