
A deep dive into RTP

For some years I’ve monitored RTP traffic at points in my network using span ports and tcpdump -T rtp. The latter shows the RTP sequence number, and piping the output into some perl allows me to look for missing or out-of-order packets as they traverse the network.
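The idea is simple enough; something like the following does it in Python (a sketch only, and the field position of the sequence number is an assumption about tcpdump’s -T rtp output format on my boxes; adjust for your version):

#!/usr/bin/env python3
# Sketch: flag missing or out-of-order RTP sequence numbers from
# "tcpdump -T rtp" output piped in on stdin.
import sys

last_seq = None
for line in sys.stdin:
    if "rtp" not in line:
        continue
    fields = line.split()
    try:
        seq = int(fields[-2])  # assumed position of the sequence number
    except (ValueError, IndexError):
        continue
    if last_seq is not None:
        expected = (last_seq + 1) % 65536  # 16-bit counter wraps
        if seq != expected:
            print(f"expected {expected}, got {seq}")
    last_seq = seq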

I was quite keen on pulling the MPEG transport stream out of these captures though, to replay in VLC at leisure, convert to thumbnails for monitoring purposes, and so on, so I wrote a quick Python tool to strip the RTP headers from a pcap file and output one (or many) streams, while keeping track of the RTP sequence numbers.
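Stripping the header is straightforward: the RTP fixed header (RFC 3550) is 12 bytes, followed by any CSRC entries and an optional extension, and everything after that is payload. A minimal sketch:

import struct

# Sketch: strip an RFC 3550 RTP header to reach the MPEG-TS payload.
def rtp_payload(packet: bytes) -> bytes:
    flags = packet[0]
    offset = 12 + 4 * (flags & 0x0F)      # fixed header + CSRC list
    if flags & 0x10:                      # X bit: header extension present
        ext_words = struct.unpack("!H", packet[offset + 2:offset + 4])[0]
        offset += 4 + 4 * ext_words
    return packet[offset:]                # typically 7 x 188-byte TS packets

def rtp_seq(packet: bytes) -> int:
    return struct.unpack("!H", packet[2:4])[0]   # 16-bit sequence number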

I then thought “it would be nice to extract the service name from the stream”, which meant a fascinating deep dive into RTP and MPEG-TS packet structures.

MPEG transport streams are a rich container format with all sorts of goodies inside, so I bashed some Python together to investigate.

There are of course programs like TSDuck for investigating streams, but GUI analysers are far harder for normal people to use than a bit of Linux command line, as they are difficult to integrate into a monitoring page.

By running tcpdump -i ens1 udp -w -, the entirety of a UDP sniff on a network tap can be piped into Python and decoded, with all sorts of benefits.
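Decoding that stream is only a couple of struct unpacks per packet. A minimal sketch, assuming the classic little-endian pcap format, Ethernet framing and untagged IPv4:

import struct, sys

# Sketch: iterate UDP payloads from a pcap stream on stdin (tcpdump -w -).
def udp_payloads(f):
    f.read(24)                                   # pcap global header
    while True:
        rec = f.read(16)                         # per-packet record header
        if len(rec) < 16:
            return
        _sec, _usec, incl_len, _orig = struct.unpack("<IIII", rec)
        frame = f.read(incl_len)
        if len(frame) < 28 or frame[23] != 17:   # IPv4 protocol 17 = UDP
            continue
        ihl = (frame[14] & 0x0F) * 4             # IPv4 header length
        yield frame[14 + ihl + 8:]               # skip Ether + IP + UDP headers

for payload in udp_payloads(sys.stdin.buffer):
    pass                                         # feed the RTP/TS decoder here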

Broadly speaking the RTP streams I work with carry DVB MPEG services made up of several component streams: one for video, one for audio, and perhaps things like subtitles.

The RTP packet itself has a sequence number that runs from 0 to 65535 and wraps back to zero, but the (typically 7) 188-byte MPEG-TS packets inside it each have a 4-bit continuity counter, running from 0 to 15 and back to zero. This counter is tracked per PID, so we can follow it, although when a continuity error occurs there’s no confident way of knowing whether you lost 1, 17 or 33 MPEG packets (though you can probably infer it if RTP packets are also missing).
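Tracking the counter takes only a few lines. A sketch (glossing over the fact that, strictly, the counter only increments on packets carrying payload, and that a single duplicate packet is legal):

# Sketch: per-PID continuity counter checking on 188-byte TS packets.
last_cc = {}

def check_cc(ts: bytes):
    pid = ((ts[1] & 0x1F) << 8) | ts[2]   # 13-bit PID
    cc = ts[3] & 0x0F                     # 4-bit continuity counter
    if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
        print(f"CC error on PID {pid}: {last_cc[pid]} -> {cc}")
    last_cc[pid] = cc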

Occasionally an MPEG packet will go through carrying things like the service and provider description, making it easy to identify what’s running through the network, so extracting that is worthwhile. Other bits of information can help diagnose decoder problems too: RTP timestamps, a varying PCR (Program Clock Reference), inter-packet arrival times, and so on.
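The PCR, for example, lives in the adaptation field of a TS packet, and extracting it looks something like this (a sketch of the ISO 13818-1 layout; the result is in 27MHz units):

# Sketch: pull the PCR from a TS packet's adaptation field.
def extract_pcr(ts: bytes):
    if not (ts[3] & 0x20) or ts[4] == 0:  # no adaptation field present
        return None
    if not (ts[5] & 0x10):                # PCR flag not set
        return None
    b = ts[6:12]
    base = (b[0] << 25) | (b[1] << 17) | (b[2] << 9) | (b[3] << 1) | (b[4] >> 7)
    ext = ((b[4] & 0x01) << 8) | b[5]
    return base * 300 + ext               # 33-bit base * 300 + 9-bit ext = 27MHz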

Sometimes you just want to dump the output.

The following is from sudo tcpdump -i ens1d1 -nn port 8100 -w - | ./dumpTS.py -s 1, sat across a passive fibre tap of traffic coming in from our current network provider, before it hits our network equipment.

It shows two RTP streams, one from 192.168.203.5 and one from 172.30.4.129, going to 192.168.247.227 and .228. The content is from an Appear X10, a fairly nice high-density encoder.

The 192.168.203.5:8100>192.168.247.227:8100 stream is arriving with very little jitter (a maximum of 600μs, and an average of 340μs). There are 5 PIDs, of which PID 101 is the highest bitrate. PIDs 0, 17 and 100 are ‘helper’ PIDs in this DVB stream: 0 carries the PAT (Program Association Table), 17 contains the SDT (Service Description Table) to tell you what the service name is, and 100 is the PMT (Program Map Table) to tell you what streams make the program up.
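Extracting the service name means reassembling SDT sections from PID 17 and walking the descriptor loops for the service descriptor (tag 0x48 in ETSI EN 300 468). A sketch, assuming the section is already reassembled from TS payloads and the names use a plain Latin charset:

# Sketch: yield (provider, service name) pairs from a reassembled
# SDT section (PID 17, table_id 0x42).
def sdt_service_names(section: bytes):
    section_length = ((section[1] & 0x0F) << 8) | section[2]
    pos, end = 11, 3 + section_length - 4          # skip header, drop CRC
    while pos < end:
        loop_len = ((section[pos + 3] & 0x0F) << 8) | section[pos + 4]
        dpos, dend = pos + 5, pos + 5 + loop_len
        while dpos < dend:
            tag, dlen = section[dpos], section[dpos + 1]
            if tag == 0x48:                        # service descriptor
                plen = section[dpos + 3]           # provider name length
                provider = section[dpos + 4:dpos + 4 + plen]
                slen = section[dpos + 4 + plen]    # service name length
                name = section[dpos + 5 + plen:dpos + 5 + plen + slen]
                yield provider.decode("latin-1"), name.decode("latin-1")
            dpos += 2 + dlen
        pos = dend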

Generally the errors I tend to see are with the RTP stream: a packet goes missing (often because of Mikrotiks or Juniper SRXs), and things break. Occasionally, though, the error lies between the source of the MPEG transport stream and the RTP wrapper. I’ve seen this with AppearTV, for example, which internally generates the MPEG stream in the encoding process, then ships it off to an IP interface. RTP analysis showed that the IP traffic passed without problem, but “Garbage In, Garbage Out” applies: a CC error in the video PID, jumping from 1 to 3, was detected.

One missing MPEG packet is unlikely to be noticeable in the real world, but it’s a fascinating level of visibility I haven’t seen before.

I’m not a software developer; I bang code together until it works well enough to solve my problem, then I move on. If you’re interested in using the code though, it’s at https://github.com/isostatic/rtp_reader

This level of monitoring is great, but it’s written in Python, and when trying to process 600Mbit/s of streams the CPU starts crying. What I really want is something that monitors all the traffic for potential RTP streams without eating my CPU while doing it, and that’s where I have to dust off my rusty C memories.


Dealing with a hostile network

Last summer I was working at an event with some internet provided by a UK educational establishment, hanging off JANET. It was great, getting about a gigabit of connectivity on the usual speed tests.

Basic web browsing was working, and I went to set up a Node.js application which talks to a server in AWS.

connect error....reason":"Error: websocket error"

Sigh. So I ran curl from the box:

curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 3.x.x.x:443

OK, that’s an unusual error. I could, however, ssh into the AWS machine on port 22, so I ran tcpdump on both ends and compared the captures in Wireshark (which is not the nicest thing to do on a 13″ laptop).

The client was seeing the AWS server send RSTs after the Client Hello: a RST arriving from 3.11.2xxxx at the client’s private 172.24xxx address.

But the server was saying it’s the client sending the RST!

Not only that, the actual SSL traffic seemed to be being changed in transit too.
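A classic way to confirm this sort of forgery is to compare the IP TTL of the RST packets against packets you trust from the same server, since an injecting middlebox is almost always a different hop count away. A sketch of that check using scapy (the capture filename and server address are placeholders, not the real ones):

from scapy.all import rdpcap, IP, TCP

SERVER = "203.0.113.10"                  # placeholder for the AWS address

for pkt in rdpcap("client_side.pcap"):   # placeholder capture file
    if IP in pkt and TCP in pkt and pkt[IP].src == SERVER:
        kind = "RST" if pkt[TCP].flags & 0x04 else "data"
        print(kind, "ttl", pkt[IP].ttl)
# RSTs consistently showing a different TTL to the handshake packets
# means something on-path is forging them.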

It’s extremely frustrating when you hit this type of middlebox. You shouldn’t be breaking communications, and with TLS it’s unlikely a user’s setup would be broken enough for the interception to go unnoticed. Middleboxes cause real problems for ordinary, default traffic, but are generally easy enough to work around if you want to do something nefarious (tunnel traffic via ssh, or WireGuard, or DNS).

Of course this same middlebox was breaking our SSTP VPN, so that was something else to fix. Adding a second SSTP port on a high number was enough to get the VPN up; then routing the 3.11.x.x target down that VPN and NATting at the far end bypassed the middlebox completely.

Once you realise what the problem is these things can be worked around, but these application-level firewalls just mean more time and more workarounds. If you don’t want traffic flowing on a network, send an ICMP reject; don’t spoof traffic.