How to Begin Implementing Network Buffer Rate in NS3

Implementing a network buffer rate study in ns-3 means replicating and examining how data is buffered and managed in network devices. The buffer rate denotes how rapidly data arrives at or leaves a buffer (buffer throughput), or how efficiently the buffer is used over time. This metric is essential for video streaming, congestion management, and real-time applications. The following steps show how to implement and measure the network buffer rate in ns-3:

Steps to Begin Implementing Network Buffer Rate in NS3

  1. Understand Network Buffer Rate
  • Buffering in Networks:
    • Routers, switches, and end devices use buffers to temporarily hold packets during congestion or delays.
  • Buffer Metrics (expressed as simple formulas after this list):
    • Buffer Rate: The amount of data entering or leaving the buffer per unit of time.
    • Buffer Utilization: How full the buffer is relative to its maximum capacity.
    • Buffer Overflow: Occurs when incoming data exceeds the buffer's capacity.
  • Use Cases:
    • Estimate buffering inside routers during congestion.
    • Measure buffer performance for streaming or real-time applications.
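For reference, over a sampling interval from t1 to t2 these metrics can be written roughly as follows (counting either packets or bytes, depending on what is measured):

buffer (enqueue) rate = (enqueued at t2 - enqueued at t1) / (t2 - t1)
buffer (dequeue) rate = (dequeued at t2 - dequeued at t1) / (t2 - t1)
buffer utilization = current queue size / maximum queue size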
  2. Set Up the ns-3 Environment
  • Install ns-3:

git clone https://gitlab.com/nsnam/ns-3-dev.git
cd ns-3-dev
./ns3 configure --enable-examples --enable-tests
./ns3 build

  • Confirm the installation by executing:

./ns3 run hello-simulator

  3. Design the Network Simulation
  • Scenario:
    • Replicate a sender-receiver network with an intermediate router that buffers packets.
    • Measure the buffer rate and buffer usage over time.
  • Topology:
    • Sender: Transmits traffic at varying rates.
    • Router: Buffers the traffic and forwards it to the receiver.
    • Receiver: Receives and processes the traffic.
  4. Steps to Implement Network Buffer Rate

(a) Create Nodes

  • Create the sender, router, and receiver nodes:

NodeContainer sender, receiver, router;
sender.Create(1);
receiver.Create(1);
router.Create(1);

(b) Set Up Network Links

  • Connect the nodes with PointToPointHelper to configure the network links (the devices' own transmit queues can also be sized explicitly, as sketched after this snippet):

PointToPointHelper p2p;
p2p.SetDeviceAttribute("DataRate", StringValue("10Mbps"));
p2p.SetChannelAttribute("Delay", StringValue("2ms"));

NetDeviceContainer senderToRouter = p2p.Install(NodeContainer(sender.Get(0), router.Get(0)));
NetDeviceContainer routerToReceiver = p2p.Install(NodeContainer(router.Get(0), receiver.Get(0)));
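Besides the queue disc installed later at the traffic-control layer, each point-to-point NetDevice has its own transmit queue that also buffers packets. If you want that device-level buffer to have a known size, it can be set on the helper before calling p2p.Install; the 50-packet limit below is only an illustrative value:

// Optional: give the device-level DropTail transmit queue an explicit size
p2p.SetQueue("ns3::DropTailQueue<Packet>", "MaxSize", StringValue("50p"));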

(c) Install Internet Stack

  • Install the Internet stack on all nodes (sender, router, and receiver):

InternetStackHelper stack;
stack.Install(sender);
stack.Install(router);
stack.Install(receiver);

(d) Assign IP Addresses

  • Assign IP addresses to each device and populate the routing tables so packets can cross the router:

Ipv4AddressHelper address;

address.SetBase("10.1.1.0", "255.255.255.0");
Ipv4InterfaceContainer senderRouterInterfaces = address.Assign(senderToRouter);

address.SetBase("10.1.2.0", "255.255.255.0");
Ipv4InterfaceContainer routerReceiverInterfaces = address.Assign(routerToReceiver);

// Without this, packets from the sender are not forwarded across the router
Ipv4GlobalRoutingHelper::PopulateRoutingTables();

  5. Simulate Traffic
  • Generate traffic with UDP or TCP applications (a constant-rate source that can actually fill the router buffer is sketched after these snippets).
  • Receiver (echo server):

UdpEchoServerHelper echoServer(9);
ApplicationContainer serverApp = echoServer.Install(receiver.Get(0));
serverApp.Start(Seconds(1.0));
serverApp.Stop(Seconds(10.0));

  • Sender (echo client), addressing the receiver's interface on the 10.1.2.0 network:

UdpEchoClientHelper echoClient(routerReceiverInterfaces.GetAddress(1), 9);
echoClient.SetAttribute("MaxPackets", UintegerValue(1000));
echoClient.SetAttribute("Interval", TimeValue(Seconds(0.01)));
echoClient.SetAttribute("PacketSize", UintegerValue(1024));

ApplicationContainer clientApp = echoClient.Install(sender.Get(0));
clientApp.Start(Seconds(2.0));
clientApp.Stop(Seconds(10.0));
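At 1024-byte packets every 10 ms the echo client offers well under 1 Mbps, so a queue behind a 10 Mbps link will stay nearly empty. To actually build up the router buffer, make the router-to-receiver link the bottleneck (for example, lower its DataRate to 5Mbps) and drive it with a source faster than that. A minimal sketch using an OnOff/PacketSink pair, with illustrative rate and port values, could look like this:

// Constant-rate UDP source on the sender, aimed at the receiver (port 5000 is arbitrary)
uint16_t port = 5000;
OnOffHelper onoff("ns3::UdpSocketFactory",
                  InetSocketAddress(routerReceiverInterfaces.GetAddress(1), port));
onoff.SetConstantRate(DataRate("8Mbps"), 1024); // exceeds a 5 Mbps bottleneck
ApplicationContainer srcApp = onoff.Install(sender.Get(0));
srcApp.Start(Seconds(2.0));
srcApp.Stop(Seconds(10.0));

// Sink on the receiver to absorb the traffic
PacketSinkHelper sink("ns3::UdpSocketFactory", InetSocketAddress(Ipv4Address::GetAny(), port));
ApplicationContainer sinkApp = sink.Install(receiver.Get(0));
sinkApp.Start(Seconds(1.0));
sinkApp.Stop(Seconds(10.0));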

  6. Measure Buffer Rate

(a) Attach Trace to Queue

  • Install a queue disc (the buffer) on the router's outgoing device using TrafficControlHelper, and keep the returned container so the queue can be inspected later (a quick sanity check follows this snippet).
  • Note: in recent ns-3 releases a default queue disc is installed on a device when IP addresses are assigned, so install the custom queue disc before Ipv4AddressHelper::Assign (as done in the complete example below) or remove the default with TrafficControlHelper::Uninstall first.

TrafficControlHelper tch;
tch.SetRootQueueDisc("ns3::FifoQueueDisc", "MaxSize", StringValue("100p"));
QueueDiscContainer qdiscs = tch.Install(router.Get(0)->GetDevice(1)); // device 1 is the link toward the receiver
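As an optional sanity check, the installed queue disc type and limit can be printed from the returned container:

Ptr<QueueDisc> rootQdisc = qdiscs.Get(0);
NS_LOG_UNCOND("Installed queue disc: " << rootQdisc->GetInstanceTypeId().GetName()
              << ", max size: " << rootQdisc->GetMaxSize());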

(b) Measure Buffer Rate and Utilization

  • Attach custom callbacks that sample the queue size and the enqueue/dequeue counters (an event-driven alternative based on trace sources is sketched after this snippet):

void MonitorQueueSize(Ptr<QueueDisc> queue) {
  uint32_t size = queue->GetCurrentSize().GetValue();
  NS_LOG_UNCOND("Time: " << Simulator::Now().GetSeconds() << "s, Buffer Size: " << size << " packets");
}

void MeasureBufferRate(Ptr<QueueDisc> queue) {
  uint32_t enqueued = queue->GetStats().nTotalEnqueuedPackets;
  uint32_t dequeued = queue->GetStats().nTotalDequeuedPackets;
  NS_LOG_UNCOND("Enqueued Packets: " << enqueued << ", Dequeued Packets: " << dequeued);
}

// Use the queue disc returned by tch.Install() in step (a)
Ptr<QueueDisc> queue = qdiscs.Get(0);
Simulator::Schedule(Seconds(1.0), &MonitorQueueSize, queue);
Simulator::Schedule(Seconds(1.0), &MeasureBufferRate, queue);
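Instead of sampling on a timer, the queue disc's built-in trace sources can report every change as it happens. A minimal sketch, reusing the qdiscs container from step (a), hooks the PacketsInQueue traced value:

// Called whenever the number of packets held by the queue disc changes
void QueueSizeTrace(uint32_t oldValue, uint32_t newValue) {
  NS_LOG_UNCOND(Simulator::Now().GetSeconds() << "s, queue length: "
                << oldValue << " -> " << newValue << " packets");
}

// Connect the callback to the queue disc's "PacketsInQueue" trace source
qdiscs.Get(0)->TraceConnectWithoutContext("PacketsInQueue", MakeCallback(&QueueSizeTrace));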

  7. Analyze Buffer Performance

Log Buffer Statistics

  • Make use of the QueueDisc statistics (a derived drop ratio is sketched below):

NS_LOG_UNCOND("Dropped Packets: " << queue->GetStats().nTotalDroppedPackets);
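The same statistics structure supports simple derived figures; for example, one way to compute a drop ratio from the counters is:

QueueDisc::Stats st = queue->GetStats();
double dropRatio = (st.nTotalReceivedPackets > 0)
                   ? static_cast<double>(st.nTotalDroppedPackets) / st.nTotalReceivedPackets
                   : 0.0;
NS_LOG_UNCOND("Drop ratio: " << dropRatio * 100 << " %");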

Visualize Buffer Rate

  • Export the logged samples and use Python's matplotlib to plot buffer size, enqueue rate, and dequeue rate over time (a C++ snippet that writes the samples to a CSV file for plotting is sketched below).
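One straightforward way to obtain plottable data is to write each sample to a CSV file from inside the simulation. The snippet below is a sketch meant to be added to the example program; the file name and the 0.1 s sampling interval are arbitrary choices:

#include <fstream>

std::ofstream g_bufferCsv("buffer-size.csv"); // opened once at program start

// Appends one "time,packets" sample and re-schedules itself every 0.1 s
void SampleQueueToCsv(Ptr<QueueDisc> queue) {
  g_bufferCsv << Simulator::Now().GetSeconds() << ","
              << queue->GetCurrentSize().GetValue() << "\n";
  Simulator::Schedule(Seconds(0.1), &SampleQueueToCsv, queue);
}

// In main(), before Simulator::Run():
// Simulator::Schedule(Seconds(1.0), &SampleQueueToCsv, queue);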
  8. Complete Example Code

Here is a complete example that puts the buffer rate monitoring together:

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/applications-module.h"
#include "ns3/traffic-control-module.h"

using namespace ns3;

// Periodically log the instantaneous queue (buffer) size
void MonitorQueueSize(Ptr<QueueDisc> queue) {
  uint32_t size = queue->GetCurrentSize().GetValue();
  NS_LOG_UNCOND("Time: " << Simulator::Now().GetSeconds() << "s, Buffer Size: " << size << " packets");
  Simulator::Schedule(Seconds(0.5), &MonitorQueueSize, queue); // Re-schedule
}

// Periodically log the cumulative enqueue/dequeue counters and the per-interval change
void MeasureBufferRate(Ptr<QueueDisc> queue) {
  static uint32_t lastEnqueued = 0;
  static uint32_t lastDequeued = 0;
  uint32_t enqueued = queue->GetStats().nTotalEnqueuedPackets;
  uint32_t dequeued = queue->GetStats().nTotalDequeuedPackets;
  NS_LOG_UNCOND("Time: " << Simulator::Now().GetSeconds() << "s, Enqueued: " << enqueued
                << " (+" << (enqueued - lastEnqueued) << " in last 0.5s), Dequeued: " << dequeued
                << " (+" << (dequeued - lastDequeued) << " in last 0.5s)");
  lastEnqueued = enqueued;
  lastDequeued = dequeued;
  Simulator::Schedule(Seconds(0.5), &MeasureBufferRate, queue); // Re-schedule
}

int main(int argc, char *argv[]) {
  CommandLine cmd;
  cmd.Parse(argc, argv);

  // Nodes: sender --- router --- receiver
  NodeContainer sender, router, receiver;
  sender.Create(1);
  router.Create(1);
  receiver.Create(1);

  // Point-to-point links
  PointToPointHelper p2p;
  p2p.SetDeviceAttribute("DataRate", StringValue("10Mbps"));
  p2p.SetChannelAttribute("Delay", StringValue("2ms"));
  NetDeviceContainer senderToRouter = p2p.Install(NodeContainer(sender.Get(0), router.Get(0)));
  NetDeviceContainer routerToReceiver = p2p.Install(NodeContainer(router.Get(0), receiver.Get(0)));

  // Internet stack on all nodes
  InternetStackHelper stack;
  stack.Install(sender);
  stack.Install(router);
  stack.Install(receiver);

  // FIFO queue disc on the router's device toward the receiver. Installed before
  // address assignment so that no default queue disc is placed on this device first.
  TrafficControlHelper tch;
  tch.SetRootQueueDisc("ns3::FifoQueueDisc", "MaxSize", StringValue("100p"));
  Ptr<QueueDisc> queue = tch.Install(router.Get(0)->GetDevice(1)).Get(0);

  // IP addressing and routing
  Ipv4AddressHelper address;
  address.SetBase("10.1.1.0", "255.255.255.0");
  Ipv4InterfaceContainer senderRouterInterfaces = address.Assign(senderToRouter);
  address.SetBase("10.1.2.0", "255.255.255.0");
  Ipv4InterfaceContainer routerReceiverInterfaces = address.Assign(routerToReceiver);
  Ipv4GlobalRoutingHelper::PopulateRoutingTables();

  // Echo server on the receiver
  UdpEchoServerHelper echoServer(9);
  ApplicationContainer serverApp = echoServer.Install(receiver.Get(0));
  serverApp.Start(Seconds(1.0));
  serverApp.Stop(Seconds(10.0));

  // Echo client on the sender, targeting the receiver's address (10.1.2.2)
  UdpEchoClientHelper echoClient(routerReceiverInterfaces.GetAddress(1), 9);
  echoClient.SetAttribute("MaxPackets", UintegerValue(1000));
  echoClient.SetAttribute("Interval", TimeValue(Seconds(0.01)));
  echoClient.SetAttribute("PacketSize", UintegerValue(1024));
  ApplicationContainer clientApp = echoClient.Install(sender.Get(0));
  clientApp.Start(Seconds(2.0));
  clientApp.Stop(Seconds(10.0));

  // Start the periodic buffer monitors
  Simulator::Schedule(Seconds(1.0), &MonitorQueueSize, queue);
  Simulator::Schedule(Seconds(1.0), &MeasureBufferRate, queue);

  Simulator::Stop(Seconds(10.0)); // Required because the monitors keep re-scheduling themselves
  Simulator::Run();
  NS_LOG_UNCOND("Dropped Packets: " << queue->GetStats().nTotalDroppedPackets);
  Simulator::Destroy();
  return 0;
}
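To try it, the file can be saved as, for example, scratch/buffer-rate.cc inside the ns-3 source tree and run with ./ns3 run scratch/buffer-rate; the buffer size and the enqueue/dequeue counters are then printed every 0.5 s between seconds 1 and 10 of simulated time.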

  9. Validate and Extend
  • Validation:
    • Use the logs to confirm the buffer's behaviour under different traffic loads.
  • Extensions:
    • Introduce congestion (e.g., a slower bottleneck link or additional flows) to measure buffer performance under load.
    • Replicate adaptive queue management schemes such as CoDel or RED (a one-line swap is sketched below).
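Swapping the queue discipline only requires changing the type passed to SetRootQueueDisc; in recent ns-3 releases both queue discs expose a MaxSize attribute, so with the same 100-packet limit as before:

// RED instead of FIFO (other RED parameters keep their ns-3 defaults)
tch.SetRootQueueDisc("ns3::RedQueueDisc", "MaxSize", StringValue("100p"));

// or CoDel
tch.SetRootQueueDisc("ns3::CoDelQueueDisc", "MaxSize", StringValue("100p"));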

In this manual, the network buffer rate has been explained through a systematic procedure and supported with ns-3 snippets that were implemented and analysed. We are ready to dive deeper into more advanced concepts and aspects if needed.