I have been trying to shape download traffic with queueing disciplines to prioritise it properly and avoid a choke point, and even after looking closely at the documentation I can't tell what is wrong. Anyway, I have a tiny script with the following structure:

```
#!/bin/bash

P_CEIL=50
P_DEV=ppp0

modprobe ifb numifbs=1
ip link set ifb0 up

tc qdisc add dev ${P_DEV} ingress
tc filter add dev ${P_DEV} parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0

tc qdisc add dev ifb0 root handle 2: htb default 15
tc class add dev ifb0 parent 2: classid 2:1 htb rate ${P_CEIL}mbit
tc class add dev ifb0 parent 2:1 classid 2:10 htb rate 2mbit ceil 2mbit prio 0
tc class add dev ifb0 parent 2:1 classid 2:11 htb rate 3mbit ceil ${P_CEIL}mbit prio 1
tc class add dev ifb0 parent 2:1 classid 2:12 htb rate 1mbit ceil ${P_CEIL}mbit prio 2
tc class add dev ifb0 parent 2:1 classid 2:13 htb rate 10mbit ceil ${P_CEIL}mbit prio 2
tc class add dev ifb0 parent 2:1 classid 2:14 htb rate 5mbit ceil ${P_CEIL}mbit prio 2
tc class add dev ifb0 parent 2:1 classid 2:15 htb rate 37.5mbit ceil ${P_CEIL}mbit prio 3

tc qdisc add dev ifb0 parent 2:10 handle 200: fq_codel
tc qdisc add dev ifb0 parent 2:11 handle 210: fq_codel
tc qdisc add dev ifb0 parent 2:12 handle 220: fq_codel
tc qdisc add dev ifb0 parent 2:13 handle 230: fq_codel
tc qdisc add dev ifb0 parent 2:14 handle 240: fq_codel
tc qdisc add dev ifb0 parent 2:15 handle 250: fq_codel

tc filter add dev ifb0 parent 2:0 protocol ip prio 1 handle 1 fw classid 2:10
tc filter add dev ifb0 parent 2:0 protocol ip prio 2 handle 2 fw classid 2:11
tc filter add dev ifb0 parent 2:0 protocol ip prio 3 handle 3 fw classid 2:12
tc filter add dev ifb0 parent 2:0 protocol ip prio 4 handle 4 fw classid 2:13
tc filter add dev ifb0 parent 2:0 protocol ip prio 5 handle 5 fw classid 2:14
tc filter add dev ifb0 parent 2:0 protocol ip prio 6 handle 6 fw classid 2:15
...
```
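For completeness, the `fw` classifiers above only match packets that already carry a firewall mark equal to the filter's `handle`. A minimal sketch of the iptables side I have in mind (the port matches are placeholders, not rules from my actual setup):

```shell
# Mark inbound SSH traffic so the fw filter with "handle 1" steers it
# into class 2:10 (--sport 22 is only an example match).
iptables -t mangle -A PREROUTING -i ppp0 -p tcp --sport 22 -j MARK --set-mark 1

# Mark inbound DNS replies for class 2:11 (fw filter "handle 2").
iptables -t mangle -A PREROUTING -i ppp0 -p udp --sport 53 -j MARK --set-mark 2
```

Whether a mark set in mangle/PREROUTING is actually visible on ifb0 depends on where the `mirred` redirect runs relative to netfilter, so treat this as the intended wiring rather than a verified-working configuration.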
The main purpose is to get all incoming traffic onto ifb0 and, from there, use firewall marks (iptables) to steer it into the right priority classes. Checking with:

```
watch -n1 'tc -s qdisc show dev ifb0'
```

I get this:

```
Every 1.0s: tc -s qdisc show dev ifb0                rout: Tue Jul 15 21:31:41 2025

qdisc htb 2: root refcnt 2 r2q 10 default 0x15 direct_packets_stat 2 direct_qlen 32
 Sent 587858267 bytes 455220 pkt (dropped 6, overlimits 468685 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 240: parent 2:14 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 220: parent 2:12 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 250: parent 2:15 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
 Sent 587858155 bytes 455218 pkt (dropped 6, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1492 drop_overlimit 0 new_flow_count 43419 ecn_mark 0
  new_flows_len 0 old_flows_len 2
qdisc fq_codel 200: parent 2:10 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 230: parent 2:13 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 210: parent 2:11 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
...
```

This shows that the virtual interface is indeed seeing the traffic, but all of it lands in the default leaf class; every other class stays empty. Whichever class I make the default, the traffic just follows it, so I can change the overall download speed, but that is all it does. I'd be grateful if someone could point out what might be wrong. Thank you.
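In case it helps diagnose this, here are the standard introspection commands I would use to check whether the `fw` filters are matching at all (nothing here is specific to my setup beyond the device and handle names above):

```shell
# List the fw filters attached to the HTB root; each should show its handle.
tc -s filter show dev ifb0 parent 2:

# Per-class byte/packet counters: the non-default classes (2:10-2:14)
# should only increment if some filter actually matched.
tc -s class show dev ifb0

# If marking in mangle/PREROUTING, the rule hit counters show whether
# packets are being marked at all.
iptables -t mangle -L PREROUTING -v -n
```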