SF-TAP: Scalable and Flexible
Traffic Analysis Platform
Running on Commodity
Hardware
1
Yuuki Takano, Ryosuke Miura, Shingo Yasuda
Kunio Akashi, Tomoya Inoue
NICT, JAIST
(Japan)
in USENIX LISA 2015
Table of Contents
1. Motivation
2. Related Work
3. Design of SF-TAP
4. Implementation of SF-TAP
5. Performance Evaluation
6. Conclusion
2
Motivation (1)
• Programmable application-level traffic analyzer
• We want …
• to write traffic analyzers in any language, such as
Python, Ruby, or C++, for many purposes (IDS/
IPS, forensics, machine learning).
• **not** to write code handling TCP stream
reconstruction (quite complex).
• modularity for many application protocols.
3
Motivation (2)
• High-speed application-level traffic analyzer
• We want …
• to handle high-bandwidth traffic.
• to handle a high number of connections per second.
• a horizontally and CPU-core scalable analyzer.
4
Motivation (3)
• Running on Commodity Hardware
• We want …
• open source software.
• not to use expensive appliances.
5
Related Work
6
• low-level traffic capture: pcap, BPF [USENIX ATC 1993], netmap [USENIX ATC 2012], DPDK
• flow-oriented analyzers: libnids, SCAP [IMC 2012], GASPP [USENIX ATC 2014]
• application traffic detectors: l7-filter, nDPI, libprotoident
• SF-TAP: adds modularity and scalability on top of these
High-level Architecture
of SF-TAP
7
[architecture diagram]
• traffic between the Internet and the intra network is mirrored (10GbE) into the Cell Incubator
• the Cell Incubator separates flows across multiple SF-TAP Cells (horizontal scaling)
• each SF-TAP Cell runs a Flow Abstractor over its CPU cores (core scaling)
• each Flow Abstractor feeds multiple Analyzers
Design Principle (1)
• Flow Abstraction
• abstract flows by application-level protocols
• provide flow abstraction interfaces, like /dev, /proc, or BPF,
usable from multiple programming languages (a minimal analyzer sketch follows this slide)
• Modular Architecture
• separate analyzing and capturing logic
• easily replace analyzing logic
8
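To make the "abstraction interface" idea concrete: the flow abstractor exposes per-protocol interfaces that analyzers in any language can read from. The sketch below is a minimal C++ analyzer that attaches to one interface and dumps what it receives. It assumes the interface is exposed as a UNIX domain socket, and the path /tmp/sf-tap/tcp/http is only an illustrative example, not necessarily SF-TAP's default.

```cpp
// Minimal sketch of an analyzer attaching to a flow abstraction interface.
// It assumes the interface is a UNIX domain socket; the path below is an
// example, not necessarily SF-TAP's default.
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main() {
    const char *path = "/tmp/sf-tap/tcp/http"; // hypothetical interface path

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    // The abstractor pushes flow events (headers and reassembled payload)
    // through this descriptor; here we just dump whatever arrives.
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, static_cast<size_t>(n), stdout);

    close(fd);
    return 0;
}
```

The same few lines in Python or Ruby work equally well, which is the point of pushing TCP stream reconstruction into the abstractor.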
Design Principle (2)
• Horizontally Scalable
• analyzing logic tends to require a lot of
computing resources
• scaling out to many commodity machines should solve the problem
• CPU Core Scalable
• both analyzing and capturing logic should
be core scalable for efficiency
9
Design of SF-TAP (1)
10
[architecture diagram: mirrored traffic flows from the Capturer Plane through the Cell Incubator and the Flow Abstractor to the analyzers]
SF-TAP defines 4 planes:
• Capturer Plane: traffic capturing (ordinary tech.); L3/L7 sniffers, SSL proxies, etc. feed mirrored traffic into SF-TAP
• Separator Plane: flow separation (we implemented); the SF-TAP Cell Incubator (L2 bridge, packet forwarder, IP fragment handler, flow separator) sends separated traffic to this and other SF-TAP cells
• Abstractor Plane: flow abstraction (we implemented); the Flow Abstractor (IP packet defragmenter, flow identifier, TCP and UDP handler, flow classifier driven by a filter and classifier rule) reads from the NW I/F and exposes abstraction interfaces: HTTP I/F, TLS I/F, TCP and UDP default I/Fs, and an L7 loopback I/F
• Analyzer Plane: application-level analyzers (users of SF-TAP implement here); HTTP analyzer, HTTP proxy, TLS analyzer, IDS/IPS, forensics, DB, etc.
Design of SF-TAP (2)
SF-TAP Cell Incubator
11
[Cell Incubator diagram: L2 Bridge, Packet Forwarder, IP Fragment Handler, Flow Separator; separated traffic goes to other SF-TAP cells]
• L2 Bridge: layer 2 bridging
• Packet Forwarder: layer 2 frame capture and forwarding
• IP Fragment Handler: handles fragmented packets
• Flow Separator: separates flows to multiple I/Fs (other SF-TAP cells); sketched below
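To illustrate the flow separator's job: both directions of a flow have to reach the same SF-TAP cell, so a direction-independent hash over the flow's endpoints can pick the output interface. This is only a sketch under that assumption, not the cell incubator's actual code.

```cpp
// Sketch of direction-independent flow separation: both directions of a
// flow hash to the same SF-TAP cell. Not the cell incubator's actual code.
#include <cstdint>
#include <cstdio>
#include <functional>

struct FlowKey {
    uint32_t src_ip, dst_ip;     // IPv4 addresses (host byte order)
    uint16_t src_port, dst_port;
    uint8_t  proto;              // e.g., 6 for TCP, 17 for UDP
};

// Pick an output interface (SF-TAP cell) for a packet's flow.
size_t select_cell(const FlowKey &k, size_t num_cells) {
    // Order the two endpoints so A->B and B->A produce the same hash.
    uint64_t a = (uint64_t(k.src_ip) << 16) | k.src_port;
    uint64_t b = (uint64_t(k.dst_ip) << 16) | k.dst_port;
    uint64_t lo = a < b ? a : b;
    uint64_t hi = a < b ? b : a;
    uint64_t h = std::hash<uint64_t>{}(lo ^ (hi * 0x9e3779b97f4a7c15ULL) ^ k.proto);
    return h % num_cells;
}

int main() {
    FlowKey c2s{0xC0A80001, 0x5DB8D822, 51234, 80, 6}; // client -> server
    FlowKey s2c{0x5DB8D822, 0xC0A80001, 80, 51234, 6}; // server -> client
    // Both directions map to the same cell:
    std::printf("%zu %zu\n", select_cell(c2s, 4), select_cell(s2c, 4));
    return 0;
}
```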
Design of SF-TAP (3)
SF-TAP Flow Abstractor
12
[Flow Abstractor diagram as on the previous slide]
• IP Packet Defragmenter: defragments IP packets if needed
• Flow Identifier: identifies flows by IP addresses and ports (sketched below)
• TCP and UDP Handler: reconstructs TCP flows; nothing to do for UDP
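A rough sketch of what flow identification looks like: a direction-independent 5-tuple key groups packets into flows, and the TCP handler keeps per-flow reassembly state behind that key. The data structures below are illustrative, not SF-TAP's own.

```cpp
// Sketch of flow identification: packets are grouped into flows by a
// direction-independent 5-tuple key, which the TCP handler uses to keep
// per-flow reassembly state. Not SF-TAP's actual data structures.
#include <cstdint>
#include <map>
#include <tuple>
#include <utility>
#include <vector>

struct Endpoint { uint32_t ip; uint16_t port; };

// Normalized key: the smaller endpoint always comes first, so both
// directions of a connection map to the same flow.
using FlowKey = std::tuple<uint32_t, uint16_t, uint32_t, uint16_t, uint8_t>;

FlowKey make_key(Endpoint a, Endpoint b, uint8_t proto) {
    if (std::tie(a.ip, a.port) > std::tie(b.ip, b.port)) std::swap(a, b);
    return FlowKey{a.ip, a.port, b.ip, b.port, proto};
}

struct TcpFlowState {
    std::vector<uint8_t> client_stream; // reassembled bytes, client -> server
    std::vector<uint8_t> server_stream; // reassembled bytes, server -> client
};

int main() {
    std::map<FlowKey, TcpFlowState> flows;

    Endpoint cli{0xC0A80001, 51234}, srv{0x5DB8D822, 443};
    // Both directions land in the same flow entry:
    flows[make_key(cli, srv, 6)].client_stream.push_back('x');
    flows[make_key(srv, cli, 6)].server_stream.push_back('y');

    return flows.size() == 1 ? 0 : 1;
}
```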
Design of SF-TAP (4)
SF-TAP Flow Abstractor
13
[Flow Abstractor diagram as on the previous slide]
• Flow Classifier: classifies flows by regular expressions (the filter and classifier rule)
and outputs them to the abstraction I/Fs (sketched below)
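A minimal sketch of regex-based classification: the head of a reassembled flow is matched against per-protocol patterns and routed to the corresponding abstraction interface, with unmatched TCP flows falling through to the default interface. The patterns and interface names here are illustrative, not SF-TAP's shipped rules.

```cpp
// Sketch of regex-based flow classification. The rules and interface
// names are illustrative, not SF-TAP's shipped configuration.
#include <regex>
#include <string>
#include <vector>
#include <cstdio>

struct Rule {
    std::string if_name; // abstraction interface to output to
    std::regex  up;      // pattern for the client -> server head of the flow
};

std::string classify(const std::string &head, const std::vector<Rule> &rules) {
    for (const auto &r : rules)
        if (std::regex_search(head, r.up))
            return r.if_name;
    return "tcp_default"; // unmatched TCP flows go to the default interface
}

int main() {
    std::vector<Rule> rules = {
        {"http", std::regex("^(GET|POST|HEAD|PUT|DELETE|OPTIONS) ")},
        {"tls",  std::regex("^\\x16\\x03")}, // TLS handshake record header
    };

    std::printf("%s\n", classify("GET /index.html HTTP/1.1\r\n", rules).c_str());
    std::printf("%s\n", classify(std::string("\x16\x03\x01\x02\x00", 5), rules).c_str());
    return 0;
}
```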
Implementation
• SF-TAP cell incubator
• C++11
• uses netmap; runs on FreeBSD
• SF-TAP flow abstractor
• C++11
• uses pcap or netmap (a minimal pcap capture loop is sketched below)
• runs on Linux, *BSD, and macOS
• Source Code
• https://github.com/SF-TAP
• License
• 3-clause BSD
14
(updated from the paper)
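For reference, the pcap side of the capture path can be as small as the sketch below. This illustrates the library the flow abstractor can build on when netmap is not available; it is not SF-TAP's actual code, and the device name is just a placeholder.

```cpp
// Minimal pcap capture loop. Illustration only, not SF-TAP's actual code.
// Build with: g++ -std=c++11 cap.cpp -lpcap
#include <pcap/pcap.h>
#include <cstdio>

static void on_packet(u_char *, const struct pcap_pkthdr *h, const u_char *) {
    // A real consumer would hand the frame to the IP defragmenter and
    // flow identifier here; we only report its captured length.
    std::printf("captured %u bytes\n", h->caplen);
}

int main(int argc, char **argv) {
    const char *dev = argc > 1 ? argv[1] : "eth0"; // placeholder device
    char errbuf[PCAP_ERRBUF_SIZE];

    pcap_t *p = pcap_open_live(dev, 65535 /* snaplen */, 1 /* promisc */,
                               100 /* read timeout, ms */, errbuf);
    if (!p) { std::fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

    pcap_loop(p, -1 /* run until error */, on_packet, nullptr);
    pcap_close(p);
    return 0;
}
```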
Performance Evaluation (1)
15
packet drop against connections per second (pcap), measured at 4K, 10K, and 50K CPS
[Figure 8: Total Memory Usage of HTTP Analyzer]
[Figure 9: Packet Drop against CPS]
Performance Evaluation (2)
16
forwarding performance of the SF-TAP cell incubator
[Figure 14: Forwarding Performance of Cell Incubator; Mpps (0-16) versus fragment size in bytes (64, 128, 256, 512, 1024), for the ideal rate and the measured α->β and α->γ cases]
[Figure 13: Experimental Network of Cell Incubator; 1 GbE x 12 and 10 GbE x 2 ports (α, β, γ) attached to the cell incubator]
• at 5.95 Mpps the load of the forwarding CPU was approximately 50%, at 10.42 Mpps close to 100%, and at 14.88 Mpps CPU resources were completely consumed
• this limitation was probably caused by bias due to the flow director of Intel's NIC and its driver, which cannot currently be controlled by user programs on FreeBSD
• the cell incubator required approximately 700 MB of memory in these experiments (memory utilization depends on netmap's allocation strategy)
Other Features
• L7 loopback interface for encapsulated flows
• Load balancing mechanism for application
protocol analyzers
• Separating and mirroring modes of the SF-TAP cell
incubator
• See more details in our paper
17
Conclusion
• We proposed SF-TAP for application-level traffic
analysis.
• SF-TAP has the following features:
• flow abstraction
• running on commodity hardware
• modularity
• scalability
• We showed that SF-TAP achieves high performance
in our experiments.
18