I am getting what I suppose are errors and I really don't have a clue what to do about them.... Latest image, set up just last week. Getting this from mlat.service.log:

Mar 08 15:19:54 pi3 adsbexchange-feed[502]: BeastReduce TCP output: High latency, reducing data usage temporarily.
Mar 08 15:21:53 pi3 adsbexchange-feed[502]: BeastReduce TCP output: Couldn't flush data for 1.60s (Insufficient bandwidth?): disconnecting: feed.adsbexchange.com port 64004 (fd 5, SendQ 8642)
Mar 08 15:21:54 pi3 adsbexchange-feed[502]: BeastReduce TCP output: Connection established: feed.adsbexchange.com (216.48.109.64) port 30004 (sending UUID: <sender-ID>)

Guess I should add stats-service.log as well:

Mar 08 15:20:00 pi3 adsbexchange-stats[383]: curl: (28) Operation timed out after 10004 milliseconds with 0 bytes received
Mar 08 15:20:00 pi3 adsbexchange-stats[383]: WARNING: curl process returned non-zero (28): []; Sleeping a little extra.
Mar 08 15:26:16 pi3 adsbexchange-stats[383]: curl: (7) Failed to connect to adsbexchange.com port 443: Network is unreachable
Mar 08 15:26:16 pi3 adsbexchange-stats[383]: WARNING: curl process returned non-zero (7): []; Sleeping a little extra.
Mar 08 15:26:44 pi3 adsbexchange-stats[383]: curl: (28) Connection timed out after 10003 milliseconds

It's a cabled (PoE) Pi 3B+ on a DSL line with 5 Mb upload and 50 Mb download, with no major traffic running up or down. The errors occur every few minutes on average. Is this something I can do anything about, or is it just a "warning" message? Thanks in advance for any enlightenment....
Cloudflare issues for the stats. No, that's nothing you can improve. I'm not sure where you are located ... it could just be network issues at your ISP. DSL connection speed isn't the only bottleneck that can exist. So ... maybe changing ISP could fix it, but it's nothing to be concerned about.
Thanks. I am in NL, feeding region 4A. The provider does not seem to be the problem; ping to adsbexchange.com:

64 bytes from 104.26.5.191 (104.26.5.191): icmp_seq=1 ttl=59 time=13.7 ms
64 bytes from 104.26.5.191 (104.26.5.191): icmp_seq=2 ttl=59 time=9.58 ms

However, feed.adsbexchange.com is a different story:

64 bytes from 216.48.109.64: icmp_seq=1 ttl=55 time=145 ms
64 bytes from 216.48.109.64: icmp_seq=2 ttl=55 time=143 ms

A traceroute to the adsbexchange.com main site takes 6 hops, but to feed.adsbexchange.com it takes 30 (!) hops through zayo.com. Here are hops 5 and 6:

5 er1.ams1.nl.above.net (80.249.208.122) 21.846 ms 21.622 ms 21.586 ms
6 * ae11.cs3.ams10.nl.zip.zayo.com (64.125.31.104) 151.015 ms *

with that hop being a major factor. Hmm, abovenet and zayo show as the same NSP on AMS-IX. There is definitely a delay there, and not something I can influence. I guess switching ISPs will not make a difference, as (almost?) all of them connect through the AMS-IX. So, it is what it is....
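A quick way to quantify that gap is to pull the average RTT out of ping's summary line instead of eyeballing individual replies. A minimal sketch, assuming Linux (iputils) ping's `rtt min/avg/max/mdev` summary format; the sample lines below only approximate the numbers quoted above:

```shell
#!/bin/sh
# Extract the average RTT from a Linux ping summary line such as:
#   rtt min/avg/max/mdev = 9.580/11.640/13.700/2.060 ms
# Splitting on '/' puts the average in field 5.
avg_rtt() { awk -F'/' '/^rtt/ {print $5}'; }

# Live usage would be, e.g.:
#   ping -c 10 -q feed.adsbexchange.com | avg_rtt

# Sample summaries approximating the replies quoted above:
echo 'rtt min/avg/max/mdev = 9.580/11.640/13.700/2.060 ms' | avg_rtt
echo 'rtt min/avg/max/mdev = 143.000/144.000/145.000/1.000 ms' | avg_rtt
```

Running both hosts through the same filter a few times makes the ~130 ms difference easy to track over a day.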
Something up in the UK yesterday... on the mlat map I had a single yellow line to someone miles away that I don't normally have a line to, and other feeders were also suffering with just a few yellow lines. Seems to have "fixed itself" today.
There can't be sync without ADS-B aircraft being received; most likely there are few or no ADS-B aircraft to sync on. Please open your own thread if you would like to continue talking about this.
Usually 150 ms shouldn't be an issue; the data is traveling all the way to the US west coast anyway. Changing ISP might mean the new ISP is better connected to AMS-IX. Some ISPs like to skimp on their connection to the big exchanges, meaning their customers suffer when lots of customers have traffic going via that exchange; then the bottleneck is on the ISP's side. Anyhow ... that's guesswork. It could just as well be the peering between AMS-IX and the networks that adsbexchange's ISP peers with. adsbexchange.com is behind Cloudflare, so pinging it isn't really useful; feed.adsbexchange.com is what matters.
The adsbexchange stats script makes POST requests via Cloudflare ... that's just fragile and often throws errors ... it's due for elimination and integration into the feed client. What you call mlat.service.log is actually the adsbexchange-feed log ... so that's confusing. Can you show more of the adsbexchange-feed log so I can get an idea of the frequency?
Pretty sure there is packet loss on the route ... nothing much you can do about that except wait until it's fixed. You can try mtr to find out which hop the packet loss occurs at. I suppose it could also be that you're doing something strange like creating a data loop by accident, meaning a lot of data volume gets sent. Can you private message me your MLAT station name? Then I can take a look at the data coming in on the adsbexchange side.
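For anyone following along, a minimal sketch of the mtr approach suggested here: run mtr in report mode, then flag hops whose Loss% column exceeds a threshold. The 2% threshold is my own choice, and the sample report lines are illustrative, not real measurements:

```shell
#!/bin/sh
# Live usage (needs mtr installed, e.g. apt install mtr-tiny):
#   mtr --report --report-cycles 100 feed.adsbexchange.com
#
# mtr's report columns are: Host  Loss%  Snt  Last  Avg  Best  Wrst  StDev,
# so with whitespace splitting the loss percentage is field 3.
flag_lossy_hops() { awk '$3 ~ /%$/ { sub(/%$/, "", $3); if ($3 + 0 > 2) print }'; }

# Illustrative report lines (not real data):
printf '%s\n' \
  ' 5.|-- er1.ams1.nl.above.net    0.0%   100   21.8  21.7  21.5  22.1   0.2' \
  ' 6.|-- ae11.cs3.ams10.nl.zayo  12.0%   100  151.0 150.2 149.8 152.3   0.6' \
  | flag_lossy_hops
```

Only the hop showing 12% loss is printed; a clean route prints nothing. Note that loss at an intermediate hop can be ICMP rate limiting rather than real loss, so what matters most is loss that persists through to the final hop.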
Update: switched to wifi again, no packet loss. Replaced the PoE hat (2018) with a PoE+ hat (newest, 2021 model): no packet loss, but the number of planes and the number of messages halved. Back to the original PoE hat and wifi. Further investigation required. Most interesting are the CPU usage and the number of single messages while the PoE+ hat was in use. See graph 2 and graph 1:
@James, my thoughts as well. I will see whether a friend with a spectrum analyser will help investigate further and whether a solution is possible, but this will take several weeks to arrange. Right now I have no idea if the RTL stick gets direct overload/desensitisation from an HF switching power supply; the distance between the Pi and the RTL is almost zero. It shouldn't, because the RTL-SDR V3 is built in an aluminium case. Alternatives: the signal gets into the USB cable between the stick and the Pi (unlikely, because the effect is the same with the stick plugged directly into the Pi), or it comes in through the antenna 60 cm / 2 ft away. Running 230 V to the final mounting location is not a good idea ... Things to figure out now that summer is on its way.
Update: replaced the Ubiquiti PoE injector with a proper but very old HP PoE switch and turned wifi off again. Surprise: no more packet loss (0 out of 67,000 MTR pings) and no more spurious emissions from the (original 2018) Raspberry Pi PoE hat. The difference: the Ubiquiti PoE injector is not IEEE 802.3af/at compliant, while the switch is. Even though there are users for whom these injectors work well, in this specific use case they do not. This was tested with a Pi 3B+. A Pi 4 might behave differently, but I don't currently have one available.