The mystery of the 502 Kibana errors

We upgraded Kibana from version 6.8 to 7.9, and started having issues with access via our Apache proxy. Our proxy terminates SSL sessions and uses x509 client certificates for authentication, authorisation and accounting, so it's a right pain.

Before the upgrade background traffic was about 720 requests an hour — one permanent monitoring machine with a page looking at Comrex state. Ye olde zgrep " 200 " ssl.pensieve-kibaccess.log.3.gz|sed -e 's/[^:]://' -e 's/:.. .*//'|cut -b -3|uniq -c shows up plenty of HTTP 200s in the log. grep for " 50. " instead and there are no errors at all. That was before the upgrade.

720 200s an hour, that's 1 every 5 seconds. We don't have many users.
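For anyone who doesn't enjoy sed golf, here's a rough Python equivalent of that one-liner. It's only a sketch: it assumes the usual Apache combined log format, pulls the hour out of the timestamp and the status code from just after the request, and reuses the filename from above.

#!/usr/bin/env python3
# Count HTTP status codes per hour from a gzipped Apache access log.
# Expects combined-format lines such as:
#   1.2.3.4 - user [06/Aug/2020:14:23:45 +0100] "GET / HTTP/1.1" 200 1234 ...
import gzip
import re
from collections import Counter

LOGFILE = "ssl.pensieve-kibaccess.log.3.gz"
pattern = re.compile(r'\[\d+/\w+/\d+:(\d+):.*?" (\d{3}) ')

counts = Counter()
with gzip.open(LOGFILE, "rt", errors="replace") as log:
    for line in log:
        m = pattern.search(line)
        if m:
            hour, status = m.groups()
            counts[(hour, status)] += 1

for (hour, status), n in sorted(counts.items()):
    print(f"{hour}:00  {status}  {n}")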

After the upgrade though we started seeing issues – it would load, sometimes, but then we'd get errors. These could be seen in the Apache proxy access and error logs, but not in the Kibana log.

The numbers were high: while there were still 1200-1800 200 codes an hour, the 50x errors were coming in at 400-900 an hour.

So why was it breaking? Looking at the traffic in Wireshark pointed a big red stick at the problem — the Kibana server was resetting TCP connections.

502 actually, but this is cute

Kibana uses nodejs to host itself, and nodejs is resetting the connection. It really shouldn't do that; there's a reason TCP RST shows up red in Wireshark by default.

I could see these RSTs in real time on the Kibana server with sudo tcpdump -i any port 5601 |grep Flags..R

Some DDGing eventually returned the following blog, which sounded very familiar: https://adamcrowder.net/posts/node-express-api-and-aws-alb-502/.

While running an Express server using (NodeJS 8+) behind an ALB under moderate load (at least ~15 requests per minute, although more apparent under higher loads), you can start to observe occasional 500-range errors on the ALB

Yup, I was sending about 20 requests a minute, I'm using a proxy which isn't getting anything back from the server other than resets, and it's intermittent (about 25% of the time). Adam had done similar debugging, and seen the same TCP resets.

The reason this can occur is because the NodeJS http server is destroying its TCP socket instead of gracefully closing it (waiting for an acknowledgement of the connection closure while still accepting data from the socket).

So to test this hypothesis, disabling keepalives from Apache seemed a reasonable step. The following line in the VirtualHost does the job: a quick apache2 reload later, put some dots in the tcpdump window, and see if the errors stop. And they did, instantly.

SetEnv proxy-nokeepalive 1

So there we have it: moving to a recent Kibana/NodeJS broke keepalives, disabling them means our users are happy they can use Kibana to see what our Comrexes are up to, and we can move on to other more important things, like an OBE with massive jitter in its RTP emissions.

All fixed

OK, that's working, and everyone's happy, but a new TCP connection for each request feels a bit 1990s-ish. Now the 90s were clearly the best decade in history – Spice Girls, Independence Day, and the Golden Age of Star Trek – however, as our proxy has a connection-tracking firewall between it and the Kibana server, it does feel like a needless waste of resources. I don't want to break the service too much though, so I create a new virtual host on the proxy to experiment with, without causing everyone else to grumble, and check the error still occurs.

Oh yes, Kibana really doesn’t like the Bee Gees.

In Kibana, installed via Ubuntu package management, grep -r keepAliveTimeout /usr/share/kibana shows up the most likely location. /usr/share/kibana/src/core/server/http/http_tools.js configures server.listener.keepAliveTimeout – and sets it (via some layers) to the value set in server.keepaliveTimeout in kibana.yml. That’s undefined in my install, but looks like it should have a default value of 2 minutes.

With the likely cause being the NodeJS headersTimeout regression mentioned in Adam Crowder's blog, adding a line there to push the listener's headersTimeout above the keepalive timeout might solve everything.

I can do more hotfixing in prod on my laptop, sitting in my pajamas, before my first cup of Earl Grey, than you can do in a year in the field. What Q really meant.

Restart the Kibana service and check it still works. Hammering refresh on the "-keepalive" proxy did cause the problem reliably before, even on a nice static resource like /ui/favicons/apple-touch-icon.png, and doing so post-edit no longer does, so that's a good sign. Flip back the main proxy to use keepalives, monitor for RSTs, and we're good to go.
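For the record, this is roughly how I'd script the refresh-hammering rather than leaning on F5. It's a sketch only: the hostname is hypothetical, and it talks plain HTTP (pointed at the test vhost or straight at port 5601; going via the real proxy would need HTTPS and a client certificate).

#!/usr/bin/env python3
# Reuse one keep-alive connection and report any resets or dropped connections.
import http.client
import time

HOST = "kibana-test.example.org"              # hypothetical test vhost
PATH = "/ui/favicons/apple-touch-icon.png"    # a nice static resource

conn = http.client.HTTPConnection(HOST, timeout=10)
for i in range(200):
    try:
        conn.request("GET", PATH)
        resp = conn.getresponse()
        resp.read()               # drain the body so the connection can be reused
        print(f"{i}: HTTP {resp.status}")
    except (ConnectionResetError, http.client.BadStatusLine,
            http.client.RemoteDisconnected) as exc:
        print(f"{i}: connection dropped: {exc!r}")
        conn.close()
        conn = http.client.HTTPConnection(HOST, timeout=10)
    time.sleep(1)                 # spread requests out so keep-alive timers get a look-in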

The issue seems to be tracked in Kibana at https://github.com/elastic/kibana/issues/44062; maybe it only affects Apache, and all the cool kids use nginx and AWS ELB at webscale levels.

A first look at AWS Media Connect and RIST

Previously on random mutterings… as well as streaming with a deliberate 20% loss to an OBD, I'm using RIST to stream to AWS Media Connect, with the same loss profile and the same 1 second buffer.

The OBD can cope with 10 million packets at that loss rate without an unrecovered packet. Can AWS Media Connect?

AWS statistics are lacking – they are a graph, hiding numbers away, and they only update every minute! I prefer working at a millisecond resolution in streams of text, but we should be able to salvage something from this.

There’s something seriously wrong here, hundreds, if not thousands, of packets being dropped every minute. That’s 10-20 a second.

Am I really dropping packets that much over the internet to AWS? Time to turn off the failures. Let's just leave the ISP circuit to AWS as the only output, and monitor the RTP stream leaving the encoder.

OK, there are a few retransmits then, with packet 13959 being resent between 13964 and 65, then 15941 slotting in between 15948 and 49. AWS is far happier, according to its graph.

So time to introduce packet loss. 20% as before. There is no reason for AWS to perform worse than the OBD — it's only a few milliseconds away rather than 83, retransmits tend to arrive 3 milliseconds after the initial packet, and the natural circuit loss is minimal. Looking at plain simple tcpdump output you can see packet 32565 was sent at 344.5ms, never made it to AWS, and AWS asked for a retransmit, with the request arriving at 347.6ms. The very next packet was a resend of 32565. In theory AWS should be able to cope with 250 resends, meaning that with 20% random packet loss you would never have loss in the entire lifespan of the universe.
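Just to put a number on that claim, a back-of-the-envelope calculation, assuming each send attempt is an independent 20% coin toss and roughly 2,000 packets per second:

loss = 0.2
attempts = 250                            # resends that fit in the 1 second buffer (the claim above)
pps = 2000

p_unrecovered = loss ** attempts          # every single attempt has to fail
seconds_per_loss = 1 / (p_unrecovered * pps)
universe_age = 13.8e9 * 365.25 * 86400    # age of the universe in seconds, give or take

print(f"P(packet unrecovered)         = {p_unrecovered:.1e}")
print(f"mean time between such losses = {seconds_per_loss:.1e} seconds")
print(f"                              = {seconds_per_loss / universe_age:.1e} universe lifetimes")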

So, throwing the 20% packet loss back on, we see the retransmits. Remember we're looking at a capture taken post-loss here.

13202 and 13204 are sent, a retransmit is requested (arriving 18:58:36.375312) for 13203, which is successfully sent between packets 13209 and 13210. 13205 and 13206 are missing too; they are requested at 18:58:36.376396 and delivered. Clearly AWS is sending requests for retransmits, and the OBE is sending those retransmits.

The tricky bit now is to find a request for a retransmit that isn't sent (in theory 1 in 5 retransmits will fail to be sent due to the 20% packet loss). I'll capture about 20 seconds of this, then prune the log files down to RTP packet numbers 10,000 through 50,000.

Here we see we dropped packet 10145 at about .2936, the retransmit request arrives at .2962 – 2.6ms later – and the packet is dispatched within 40us; job done.

The retransmit request is 2.6ms after the expected time. Let's look at packet 10148. That should have been sent at .2951, so I'd expect a retransmit request at about .2977. A retransmit request came in at .2983, 600us later than that, which seems reasonable what with the internet, router queues, etc. I suspect the packet at 19:02:26.298336 was a request for a retransmit of packet 10148. That request was ignored (or rather the retransmitted packet was dropped by the kernel before our monitoring point). Another request arrived a few us later for a retransmit of packet 10150, and that was dealt with.

But what about packet 10148? The next retransmit request (after 10150 had been dealt with) was also for 10148. After all, at this stage we'd sent 10149, 50 (on a retransmit), 51, 52, 53, 54, 55, 56, 57, 59, 60, 63, 64 and 65. We were missing 10148, 58, 61 and 62. The retransmit request for packet 10158 would be due to arrive 2.6ms after .3001 – at about .3027. 1ms later than that, at .3037, we had a request for retransmits, and then sent packets 10148 and 10158.

So it looks like AWS is asking for retransmits of missing retransmits, that’s good. But that doesn’t explain this:

Why aren’t these packets being recovered!?

Taking the c. 40,000 RTP packets that should have been sent between 19:02:26 and 19:02:46, we can see how many were actually sent. grep c33 /tmp/rtp.log|sed -e 's/.*c33 *//' -e 's/ .*//' does the job for that, producing a list of the sequence numbers of the 40,023 RTP packets transmitted. Piping that into sort -n|uniq produces 39,998 unique packet numbers (wc -l to count them), including packet 9997 (a retransmit), 50,000, and almost every packet from 10000 through 50000. The missing packets were all at the end – 49986, 49989, 49995 and 49996 – all of which would have been retransmitted shortly after the capture stopped.

While we're at it we can look at retransmits caused by loss on the internet, not the artificial loss being introduced by tc. grep c33 /tmp/rtp.log|sed -e 's/.*c33 *//' -e 's/ .*//'|sort -n|uniq -c|sort -n|grep -v " 1 " does that for us, showing up 25 packets in the 10,000 that were transmitted by us twice. That's a reasonable number, 0.25% packet loss on this hop.

Linux plumbing, it involves pipes
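The same analysis in Python, for anyone allergic to pipes. A sketch that assumes, as in my captures, that the RTP sequence number is the field straight after "c33" in the tcpdump -T rtp output:

import re
from collections import Counter

seqs = []
with open("/tmp/rtp.log") as log:
    for line in log:
        m = re.search(r"c33\s+(\d+)", line)
        if m:
            seqs.append(int(m.group(1)))

counts = Counter(seqs)
dupes = sum(1 for n in counts.values() if n > 1)
print(f"{len(seqs)} packets captured, {len(counts)} unique sequence numbers")
print(f"{dupes} sequence numbers seen more than once (i.e. retransmitted)")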

So we know every RTP packet in this 20 second period was sent to AWS, we know AWS was asking for retransmits of packets that failed twice (and perhaps more), and we know that any downstream losses were being retransmitted. We know AWS is happy if we turn off the forced 20% loss. Why does AWS have such bouncy "Source not recovered packets" graphs? It's not constant (despite pretty much constant 20% loss) – the number reported at 18:02 GMT was 153, at 18:03 GMT it was 30, but at 18:10 it was 1505, and at 18:30 it was zero!

A wider view

Now we only had a 20 second capture in a “low loss” time. I wish AWS had a half-decent resolution on its logs, as 1 minute is awful. 10 seconds maybe, but ideally I’d be after 1 second.

sudo tcpdump -i any -nn host 54.147.109.224 -T rtp | head -50000 > /tmp/rtp.log; head -1 /tmp/rtp.log; grep c33 /tmp/rtp.log|sed -e 's/.*c33 *//' -e 's/ .*//'|sort -n|uniq |sed -e 's/...$/xxx/'|uniq -c; tail -1 /tmp/rtp.log;

Generates an interesting output. You should see 1000 for most sequence number blocks above 1000. One block at the start will be <1000 as it starts partway through, another at the end will be <1000 for the same reason, stuff under 1000 will be different, and 65xxx will only go up to 536 as RTP sequence numbers top out at 65535.
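Or, reusing the seqs list from the snippet earlier, the same binning trick in Python:

from collections import Counter

# Bin the unique sequence numbers into blocks of 1,000, like the sed 's/...$/xxx/' above.
blocks = Counter(seq // 1000 for seq in set(seqs))
for block in sorted(blocks):
    print(f"{block}xxx: {blocks[block]}")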

But you should see a solid list of about 40 blocks of 1,000. In the view below the packets started just after 30,200, ran to 65535, then wrapped round to about 5200. But what on earth happened between 42,000 and 44,000? There should have been 2,000 RTP packets sent, even if it took 5 or 6 attempts – there was plenty of time for retransmits to appear in the capture.

Zooming in on 42,xxx (grep c33 /tmp/rtp.log|sed -e 's/.*c33 *//' -e 's/ .*//'|sort -n|uniq |sed -e 's/.$/x/'|grep ^42|uniq -c) shows up some major flakiness. What happened?

All went Pete Tong about 42260

The problem started at 4226x, where 2 of the 10 packets were never sent even after seconds' worth of retransmits, but at, say, 4269x only 5 packets got through. Looking at the capture, we can see a fairly standard set of packets going through: 0, 1, 3, 5, 6, 7, 8, 9. This is post 20% loss remember, so there's nothing unusual in losing 2 of the 10 on the first go. What we should be seeing though are retransmit requests, and we do. What we don't see is the packets actually being retransmitted.

Remember that OBE into OBD runs absolutely fine, with no losses and plenty of retransmits. The order here is 42260, 42254 (retransmit), 42261, 42256 (r), 42263, 65. But look at the source port. Retransmit requests normally come in from source port 2001 to the ephemeral 33040.

The packets coming back from AWS in this instance are from port 2000 (the RIST media port, not the RIST control port). No wonder the OBE is ignoring them.

From 20:19:45, when this capture starts, until 20:19:51, all traffic from AWS to the encoder is from port 2001 to port 33020. That's good. The final message arrives just after we send packet 42261, and it's answered with the retransmit of 42256. It's then radio silence on that port until 20:19:53, when the requests for retransmits come back in and start being handled.

So why does AWS suddenly switch to sending retransmit requests from the wrong port? Is it even the wrong port? What does the RIST spec say?

Well the spec is at https://vsf.tv/download/technical_recommendations/VSF_TR-06-1_2018_10_17.pdf and is an easy read.

1. RIST senders shall transmit the RTP media packets to the configured IP address of the RIST receiver and a user-selected UDP destination port P, where P is an even number between 2 and 65534. RIST receivers shall listen on UDP port P for media. This transmission is unidirectional, from sender to receiver. The sender may choose any arbitrary source port M for the RTP flow.

2. RIST senders shall periodically transmit the compound RTCP packets specified in section 5.2.1 to the configured IP address of the RIST receiver and UDP port P+1. The sender may choose any arbitrary source port R for the RTCP packets. RIST senders shall listen on port R for RTCP packets from the RIST receiver.

3. RIST receivers shall listen on UDP port P+1 for RTCP packets from the sender. These packets will have a source IP address S and a source port R' (in the absence of NAT devices between the sender and receiver, R and R' will be the same). RIST receivers shall send the RTCP packets they generate to IP address S and destination UDP port R', with a source UDP port of P+1. A receiver shall use S and R' from the last valid RTCP packet it has received from the sender.

4. RIST senders may offer the user the ability to manually configure source ports M and R.

Section 4 is fairly meaningless, as many source-NAT devices scramble the source port.

That suggests to me that the receiver should not be sending any packets back to the sender from the even-numbered port.
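A quick way to spot the misbehaviour in a capture. This is only a rough sketch: it reads plain tcpdump -nn output and flags anything from the AWS side whose source port is the media port P rather than P+1, using the address and ports from this particular test.

import re

AWS_IP = "54.147.109.224"   # the Media Connect flow in this test
MEDIA_PORT = 2000           # P   : RTP media
RTCP_PORT = 2001            # P+1 : RTCP / retransmit requests

pat = re.compile(r"IP (\S+)\.(\d+) > \S+\.(\d+):")

with open("/tmp/rtp.log") as log:
    for line in log:
        m = pat.search(line)
        if not m:
            continue
        src_ip, src_port = m.group(1), int(m.group(2))
        if src_ip == AWS_IP and src_port == MEDIA_PORT:
            print(f"reply from port {MEDIA_PORT} (the spec says it should be {RTCP_PORT}):")
            print("  " + line.rstrip())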

Update Aug 6th

AWS have confirmed there’s a bug there, so at least I’m not going mad.

A quick look at RIST retransmits

In an effort to understand RIST retransmits a little better, I set up a single RIST stream of 20mbit (2,000 packets per second) with a 1 second RIST buffer – about 10 times the round trip time – from an OBE into an OBD, and looked at the RTP traffic on the decoder. The ping time between the encoder and the decoder is about 83ms.

I then applied a massive 20% packet loss on the encoder with sudo tc qdisc del dev eno1 root; sudo tc qdisc add dev eno1 root netem loss 20%

On the decoder, run up "tcpdump -nn -T rtp" and look at the sequence numbers coming in. Take 65,000 lines (about 30 seconds) of output.

These are the packets coming in. The majority are in the right order: packets 0, 2, 4, 6, 7, 8, 9, 10, 12, 13, 14, 15, 17 etc. But some are missing. Manually counting the first 50 packets shows 9 missing, which is about what you'd expect from 20% random loss.

The decoder asks for these packets again, and here are some of the retransmits, arriving around 183ms for packets that were expected around 76ms — roughly 108ms later.
That suggests a timeline along these lines:

  • 75ms (offsets within second 14:31:11): expected arrival of packets 1 and 2
  • 76ms: expected arrival of packets 3 and 4
  • 92ms: packet 1 is MIA, ask for a retransmit
  • 100ms: packets 3, 5, 11 and 16 are MIA, ask for retransmits. Expected arrival of packets 49 and 50 at this stage.
  • 134ms: retransmit request for packet 1 arrives at the encoder, packet sent again
  • 142ms: retransmit requests for packets 3, 5, 11 and 16 arrive at the encoder, packets sent again
  • 175ms: packet 1 arrives at the decoder
  • 183ms: packets 3, 5, 11 and 16 arrive
  • 191ms: packet 31 resend arrives
  • 199ms: packet 49 resend arrives
  • 297ms: packet 33 resend arrives
  • 413ms: packet 41 resend arrives

So it takes about 25ms to realise a packet is missing and ask for the retransmit. Now this might only be 20ms, with a 5ms delay at the far end before resending.
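As a rough model of that timeline (my numbers, not anything from the OBE documentation): the first retransmit should land about one detection delay plus one round trip after the packet was originally due.

RTT = 83       # ms, encoder to decoder and back
DETECT = 25    # ms from expected arrival until the retransmit request goes out (17-25ms observed)

def first_retransmit_due(expected_ms):
    # the request goes out DETECT ms after the gap appears, then needs a full round trip
    return expected_ms + DETECT + RTT

for pkt, expected in ((1, 75), (3, 76)):
    print(f"packet {pkt}: expected {expected}ms, first retransmit due around {first_retransmit_due(expected)}ms")

That predicts arrivals around 183-184ms, within ten milliseconds or so of the observed 175ms and 183ms.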

So do we get all our retransmits? In this block we’re missing packets 1,3,5,11,16,31,33,41,49.

Packet 1 arrived at 175ms. Packet 3 arrives at 183ms, followed by 5, 11 and 16 all in the same millisecond. Packet 31 arrives at 191ms, packet 49 at 199ms. But where are packets 33 and 41?

Well, of those 9 retransmits I'd expect about 20% to be lost – that's packets 33 and 41 this time.

Packet 49 turned up at 199ms, on its first retransmit, with the retransmit request sent at 116ms, 16ms after its expected arrival.

Packet 33 turned up at 297ms. I would guess that the first retransmit failed.

Packet 33 story:

  • Expected at 91.5ms, between packets 32 and 34, 1ms after packet 31
  • Both packets were re-requested at about 110ms, with the request arriving at the encoder at about 152ms.
  • The encoder sent both packets out, but packet 33 was dropped. Packet 31 arrived on the first resend attempt at 191.7ms – 100ms after it should have arrived (with an 83ms rtt and 15-20ms of overhead/buffer)
  • Packet 33 arrived at t=297ms, another 105ms after packet 31. That must have been a second retransmit attempt.

As such, with RIST (on OBEs) it seems you need a minimum of RTT+20ms of buffer to recover from a single packet loss, 2xRTT+40ms if the retransmit is lost too, and so on.

With 20% loss how many retransmits are we expecting? Well at 20mbit that’s 2000 packets a second, or 7,200,000 packets an hour.

1.44 million will be lost on first transmit and need a retransmit (20% of 7.2M), and 288k will be lost on first and second transmit (20% of 1.44M).

57,600 will fail even with 3 transmits, 2,304 with 5, and about 18 with 8, so you need 10 transmits of a packet (each extra attempt needing roughly RTT+15ms more buffer) to average a completely clean line over an hour, 12 over the course of a day, and 16 over the course of a year.
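The arithmetic behind those numbers, counting the original send as one of the transmissions and assuming every attempt is an independent 20% loss:

loss = 0.2
pps = 2000
periods = {"hour": 3600, "day": 86400, "year": 365 * 86400}

for name, seconds in periods.items():
    total = pps * seconds
    # smallest number of transmissions for which we expect fewer than 1 unrecovered packet
    n = 1
    while total * loss ** n >= 1:
        n += 1
    print(f"per {name}: {total:,} packets, clean on average with {n} transmissions each")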

On this particular experiment I should theoretically be able to cope with about 10 transmits of each packet. That would mean either about 1 or about 5 unrecovered packets in 10 million (for 10 or 9 attempts respectively). That's on average, of course.

3 unrecovered in 28 million is about 1 in 10 million, so it seems like I'm getting 10 send attempts.

NB: some of these numbers look very familiar to those of us from the 90s (1.44M floppy, 28.8k baud).

When FEC can’t cope

When you are streaming low latency video (<1 second glass-glass), FEC is a great way of coping with a little packet loss along the way.

In theory Pro-MPEG FEC can cope with up to 25% packet loss, but in practice it can struggle with 0.001% loss.

FEC works in a similar way to RAID: it sends parity data along with the real data, increasing the amount of data sent, but meaning you can recover if a packet goes AWOL.

We've found from much observation over the years that we sometimes suffer internet problems where a circuit stops transmitting for 10, 20, even 50ms – perhaps as routes or interfaces change in an ISP you have no control over. A 10mbit RTP stream sends about 1 packet every millisecond, so this can mean 50 lost packets in a row, and the standard FEC matrix sizes can't cope with a burst of more than 20 (even assuming all the parity packets get through).
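Putting rough numbers on that. This is my reading of the standard matrix limits (a burst longer than the number of columns can't be rebuilt from column parity, and the column count tops out at 20), so treat it as a sketch:

bitrate = 10_000_000          # 10 Mbit/s stream
payload_bits = 1316 * 8       # 7 x 188-byte TS packets per RTP packet
pps = bitrate / payload_bits  # roughly 950 packets per second
max_burst = 20                # widest standard FEC matrix is 20 columns

for outage_ms in (10, 20, 50):
    lost = pps * outage_ms / 1000
    verdict = "recoverable" if lost <= max_burst else "unrecoverable"
    print(f"{outage_ms}ms outage: about {lost:.0f} packets in a row, {verdict}")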

If your encoder and decoder are relatively close to each other, nowadays you’d use RIST or SRT to cope with packet loss – you lose 100 packets, but you then ask for retransmits and get them back, assuming you have a high enough receive buffer for your round trip.

If you're trying to do an interview down the line from Australia and your studio is 350ms away in Europe, these protocols introduce unacceptable delay though — 5xrtt on top of the normal encode process would be nearly a 2 second delay, far beyond what's acceptable.

OBEs have a "Duplicate Stream" option. It transmits every packet again, from the same source IP, to the same destination IP, just a little later. By setting a duplicate stream of 100ms, and a receive buffer of say 120ms, the stream will be immune to outages of up to 100ms. The nice thing about this is that it also works into other RTP-based decoders which have a configurable receive buffer, as they toss the duplicate packets and just treat the delayed copies as out-of-order arrivals.

Cisco Firepower, Traceroute, and “Unsupported CLI”

We have a couple of Cisco Firepower firewalls inside our network, protecting certain circuits that arrive from buildings we don't control.

One of the many major annoyances with them is their invisibility to traceroute. Paul Stewart's blog post about making them visible (forcing them to decrement the TTL as packets travel through them) seems just what the doctor ordered: https://packetu.com/2018/08/12/traceroute-through-firepower-threat-defense/

Sadly our Firepower management server has other opinions on the matter.

While perusing some of the many, many posts of complaint about Firepowers, I came across a similar problem with HTTP redirects. It seems that the Cisco Firepower GUI blacklists certain commands, but the workaround of using htt redirect outside 80 rather than http redirect outside 80 was mentioned.

Brilliant. Sure enough, adding the FlexConfig commands of

policy-map global_policy
class class-default
set connectio decrement-ttl

does the same job, but avoids the GUI blocking the "set connection" line (the deliberately truncated connectio is the same trick as htt).

There are many problems with these terrible devices, but at least now my traceroutes are working.

Latency comparisons on RTP, SMPTE FEC, and RIST

It’s been a while since the last time I did some latency measurements of different codecs, and I had some time and resources recently to do some tests on some new encoders I haven’t officially measured before.

The encoder and decoder I had available were the latest version of the OBE C200 encoder and decoder, with a Blackmagic card in both.

These tests were all done with 1080i25 inputs. Measuring latency down to milliseconds is possible with specialist equipment; however, I'm not very interested in whether a given encode/decode is 426ms or 432ms – within a frame or two is fine.

Method

Play out a looped video which counts down, with frame and field numbers. The following 625i25 DV mov (scaled up to 1080i25 on playout) does just that. The age of the mov shows how long it's been since I last did this measuring!

I then wired up the following setup, with no genlock, and started playing the loop. Measurements were not taken at the top of the loop.

The countdown appears on monitor A, then a few seconds later on monitor B.

To measure the latency, take a photo of both frames.

Due to local wiring in my test environment, Monitor A (the input) was on the right of Monitor B. The monitors themselves were identical — Blackmagic SmartView Duos. As the monitors are the same, any delay they add is the same on both sides and cancels out.

Taking a photo of the monitor shows the encoder frame, and the decoder frame. You then subtract one from the other and you get a latency in frames. The display (showing both fields at once), coupled with the exposure time of the camera, will mean it’s slightly blurred, but it will give you a latency plus-or-minus a frame.

Above we can see the input was at 2 seconds and 19 frames, and the output was still at 3 seconds and 16 frames. This means the output was 22 frames behind the input. There’s no network delay (both encoder and decoder were on the same vlan on the same switch)
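The sum from that photo, for completeness. 25 frames per second, and because it's a countdown the lagging output shows the higher number:

FPS = 25

def to_frames(seconds, frames):
    return seconds * FPS + frames

latency = to_frames(3, 16) - to_frames(2, 19)   # output reading minus input reading
print(f"{latency} frames, about {latency * 1000 // FPS}ms")   # 22 frames, about 880ms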

Results

I tested various different error corrections to ensure that the reality matched up with the theory. By and large it did.

The base result for an OBE-OBD chain is 9 frames when set to lowest mode. Not all decoders support lowest mode – which I believe is a 1 frame PPP mode.

Our general field target bitrate is 20mbit; this gives enough bandwidth to not see obvious artefacts on our payloads. Where possible we will aim for 45mbit of video, which is the DPP standard.

20mbit – FEC

Set up a stream with 20mbit of video, in a 20,884kbit MUX, containing 422 10bit video, with no error correction. The decoder was in ‘RTP/FEC’ mode, but no latency dialed in. Encoder latency set to lowest, Decoder latency set to lowest. This came out with the expected 9 frame end to end delay.

Adding in an FEC 20×5 matrix (20 columns, 5 rows) adds 4 frames. The same applies with a 5×20 matrix, and block-aligned vs non-block-aligned makes no difference.

Does this make sense? At 20,884kbit, via an RTP stream of 1,316 bytes per packet, it's about 2,000 packets per second, or 80 packets per frame. A 5×20 matrix will create a 100 packet matrix size, or just over a frame. To decode that matrix then requires another 100 packets of delay, which takes it up to 2.5 frames, so the measured 4 frames is a little higher than I'd expect.
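The same sum as a scribble, assuming the 1,316-byte payloads mentioned above:

mux_bps = 20_884_000
payload_bits = 1316 * 8
fps = 25

pps = mux_bps / payload_bits   # roughly 1,984 packets per second
per_frame = pps / fps          # roughly 79 packets per frame
matrix = 5 * 20                # 100-packet FEC matrix

print(f"{pps:.0f} pps, {per_frame:.0f} packets per frame")
print(f"matrix depth {matrix / per_frame:.2f} frames, decode window {2 * matrix / per_frame:.2f} frames")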

FEC with a 5×5 matrix only added 3 frames, which makes sense as it’s a smaller matrix size.

20mbit – Dual Streaming

OBE/Ds have the ability to dual stream, via two separate networks, or indeed by timeshifting a repeated packet on a single stream on a single network. This is really helpful in places where bandwidth is cheap, like the Far East, but the international links are not necessarily reliable. Timeshifting the packets helps when your only connectivity is via a single provider, who may have core network problems which cause streams to drop in sub-second outages – something FEC can't cope with, but timeshifting can.

They do this by adding a receive delay, during which they de-duplicate packets.

The receive delay adds exactly as much latency as you would expect. On top of the 9 frames, adding a 100ms buffer adds 2.5 frames, a 300ms buffer adds 5 frames.

Currently my two links from the US to the UK are running at 78ms and 125ms rtt, so a dual stream buffer would need to be 23.5ms — assuming that the round trip time is symmetrical. The actual delay in packets is 110, which at 4,407 packets a second is 24.96ms. With network failures on the routing though, packets may well be sent via different paths, so it's a judgement call on how much delay to put in to cope with dual streaming. Despite not using FEC or RIST, the decoder must be set to FEC or ARQ mode to ensure the stream works.
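The buffer sums from that paragraph, assuming both paths are symmetrical:

rtt_a, rtt_b = 78, 125                 # ms round trip on the two paths
skew = (rtt_b - rtt_a) / 2             # one-way difference between the paths
print(f"minimum de-duplication buffer: {skew}ms")

pps = 4407                             # packets per second in this mux
delay_packets = 110                    # measured offset between the two copies
print(f"{delay_packets} packets at {pps}pps = {delay_packets / pps * 1000:.2f}ms")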

20mbit – RIST

OBE has two RIST buffers, one on the encoder, one on the decoder. The encoder buffer has no effect on latency – it's just how many already-sent packets are kept in memory in case a retransmission is asked for. This may need to be higher than a given decoder's latency in the case of multicast RIST (I haven't tested that yet).

With no RIST, it’s a 9 frame delay. A 100ms decoder buffer should add 2.5 frames, a 600ms buffer should add 15 frames, an 1100ms should add 27.5 frames. The actual measured differences are 3, 15, and 28 frames, which matches the theory well.

How much delay you need is a much more complex question. I believe the theoretical minimum would be enough time for the retransmit request to occur, which would be just over 1x rtt; however, the general feeling I get is that 3x rtt is a good number to aim for. Behaviour in various situations will be another investigation.

5mbit and 45mbit

45Mbit makes no difference to latency in RIST, Dual Streaming, or normal streaming. With FEC though, it does make a difference. Again with 20×5 FEC, at 45mbit of video it’s 3 frames of delay.

5Mbit does make a difference – even in no-fec mode. I only have 2 results though so I’m unwilling to believe those figures without retesting.

OBE and OBD video settings

I thought it would be useful to measure how the OBE and OBD video settings differ.

Profile: 422 10bit vs main. Made no difference

Decoder latency: Normal adds 4 frames over Lowest

Encoder latency:

  • Lowest = 0 frames
  • Low(PPP), VBV of 1.2 frames (1/20th bitrate) = 1 frame
  • Normal, VBV of 1.2 frames = 16 frames
  • Normal, VBV of 1 second = 61 frames

Summary

Even assuming 3x rtt, at 20mbit RIST adds less delay than 20×5 FEC for any link up to a 30ms rtt, which is pretty much any fixed link in the UK; it's only once you go transcontinental that FEC may win out.

It's early days in our use, but being able to assign a different DSCP tag to the retransmits may be helpful in future – that way we can shift them through a queue quickly, but drop them if they start building up too much, to ensure they don't impact the main body of the stream. I need to think, model and measure more on that.

Next steps

My next investigation, as time allows, will be remeasuring latencies – both normal and FEC – on NTT, Ateme, and Evertz encoders and decoders. The FEC delta should be the same as under an OBE/OBD. I ran measurements on NTTs in 2015, and while they shouldn't have changed, I don't have the original figures from those days. The summary put the typical end-to-end latency at 11-12 frames.

References

DV Mov with frame/field number https://newweaver.com/vid/clock10tone.mov

Fixing Vim 8.0

Debian Stretch and Ubuntu 18.04 (bionic) have come with vim 8.0, which breaks all sorts of things.

The following vimrc file seems to fix things

source /usr/share/vim/vim80/defaults.vim
set noincsearch
set scrolloff=0
set mouse=

And then a couple of things I like

set background=dark
set tabstop=8 softtabstop=0 expandtab shiftwidth=4 smarttab

The first line has to source the defaults to keep things like syntax highlighting (which vanish if you just have a normal .vimrc).

Hello world!

20 years ago, I ran a website, mainly about a computer game, a fairly novel thing for a teenager to be doing at the time. University and life followed, and the rise of social media meant I didn’t bother, but for the last few years I’ve been thinking of relaunching my own space as somewhere to collect my rants and raves.

I lost control of botf.com in 1999, and all that’s left of weaver.demon.co.uk which predated that site is a redirect. Nevertheless times change, and while I still love Star Trek (and can’t wait for the new Picard series to debut later this year), I thought this time I’d write something a bit more varied.

I work as a broadcast engineer for a large broadcaster, but clearly these pages are all my own views. There may be occasional thoughts on politics – from tuition fees to land value tax, there may be more technical thoughts on analysing RTP tcpdump output. I might not even write anything, but it’s nice to have my own personal server on the internet to throw some things on.