If you are using base FA image 3.6.3 and NOT the one on this website, then:
Code:
sudo apt-get install git
git clone https://github.com/adsbxchange/adsb-exchange.git
cd adsb-exchange
chmod +x setup.sh
sudo ./setup.sh
Reboot and check if mlat-client is running in htop. You can check its status in:
Code:
nano /var/log/daemon.log
For that graph I just scrape stats from http://www.adsbexchange.com/coverage-(1-6)/ | grep num_positions every 20 min. I set it up to help visualize a theory I had about why the region 1 mlat was not working the other day (I sure do love making graphs). I can stop the data collectors if they're being intrusive, just let me know. I do not scrape any stats from the adsbx mlat servers, but I have been connecting to the adsbx mlat for about a year, since I got into this hobby; only region 1, since that's where I have Pis set up. I only feed to adsbx and only participate in the adsbx mlat, because I also think that censorship is lame and I like the way you guys do things. I always have at least my best setup Pi, 000pberryiL, connected to the adsbx mlat, if not more. I take the hobby pretty seriously and spend almost all my free time on it. I think the mlat server is one of the coolest parts of it all, and I know how fragile it is, so I wouldn't connect with anything but mlat-client and precise coordinates.
Just rechecked the sync stats matrix this morning and my feed is now showing https://www.adsbexchange.com/sync-3/
Scraping them is fine; the map is only regenerated once an hour. It should be served by Apache, but I'd have to look. I thought it might have been an open MLAT port sending real-time data or something; that would put load on the MLAT servers. If you'd share, I'd actually like that code for ADSBx purposes, to send out alerts and build a chart of the average number of MLAT positions over time. I assume you are just counting the JSON entries that make the MLAT map pins?
I would love to contribute! Right now I run a script via systemd timers (they make cron feel like an old windup egg timer) to create data for node_exporter's textfile collector. I then scoop it up into Prometheus to visualize in Grafana. If you have an existing monitoring framework such as Nagios, I could modify the script to exit based on the numbers it sees; for example, if mlat_users is less than X, exit critical to trigger an email notification. I could also modify it to be a standalone that just runs in the background, just let me know what works. Keep in mind it's basically a one-liner made in a couple of minutes, so it's kinda ghetto. I'm planning on sexying it up a bit this weekend, adding a timeout to the w3m call (w3m is like links or lynx, but not from the dinosaur years) and some better error handling.

Both of these scripts require moreutils (for sponge) and w3m. They are set up to write .prom data files into /opt/collectors/data (the dir needs to exist and be writable). I can give you the systemd integration I use for scheduling if you'd like too, just let me know.

MLAT Members

We grab the member count from sync.json for each region, do a little shell magic to format things, then spit out prom data:
Code:
#!/bin/bash
set -e
BASE_DIR='/opt/collectors'
DACO_NAME='adsbx_sync'

# Count unique usernames in each region's sync.json
DATA=$(for bar in $(seq 1 6); do
    echo -n "adsbx_sync${bar}_members "
    w3m -dump "https://www.adsbexchange.com/sync-$bar/sync.json" \
        | grep -v 'peers' | grep '\": {' | cut -d '"' -f 2 | sort -u | wc -l
done)

# Write atomically: soak into a temp file, then rename over the live one
printf '%s\n' "$DATA" | sponge "$BASE_DIR/data/$DACO_NAME.prom$$" \
    && mv "$BASE_DIR/data/$DACO_NAME.prom$$" "$BASE_DIR/data/$DACO_NAME.prom"

You can get a list of mlat usernames by running just:
Code:
w3m -dump https://www.adsbexchange.com/sync-1/sync.json | grep -v 'peers' | grep '\": {' | cut -d '"' -f 2 | sort -u

which can then be piped to hexdump or other shell utilities to find encoded characters, non-UTF-8, etc. Above we pipe to wc so that only the count of the users gets graphed.
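The Nagios-style exit I mentioned would look roughly like this (just a sketch; the function name, file path, and threshold are my own illustrative choices):

```shell
#!/bin/bash
# Sketch of the Nagios-style check: scan a .prom file of "metric value"
# lines and return 2 (Nagios CRITICAL) when any member count falls
# below a threshold. Names and paths here are illustrative.
check_members() {
    local prom=$1 min=$2 status=0 metric value
    while read -r metric value; do
        # flag any region whose member count is below the floor
        if [ "$value" -lt "$min" ]; then
            echo "CRITICAL: $metric = $value (< $min)"
            status=2
        fi
    done < "$prom"
    [ "$status" -eq 0 ] && echo "OK: all member counts >= $min"
    return "$status"
}
```

Called as something like check_members /opt/collectors/data/adsbx_sync.prom 5, the exit code (0 = OK, 2 = CRITICAL) is what the monitoring framework keys off to fire the email.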
Right now I'm testing collection rates to find a sweet spot; this data seems to change quite often, more than once a minute.

MLAT Positions (last hour)

We hit data.js, which returns JSON; at the very top of it there is num_positions, which I believe has the last hour's positions found as its value. We grep for that line, do some shell magic to format things, then spit out a .prom file with the data:
Code:
#!/bin/bash
set -e
BASE_DIR='/opt/collectors'
DACO_NAME='adsbx_coverage'

# Pull num_positions for each region and rewrite it as a prom metric
DATA=$(for bar in $(seq 1 6); do
    w3m -dump "http://www.adsbexchange.com/coverage-$bar/data.js" \
        | grep num_positions \
        | sed "s/var /adsbx_coverage${bar}_/g" | sed 's/ =//g' | sed 's/;//g'
done)

# Write atomically: soak into a temp file, then rename over the live one
printf '%s\n' "$DATA" | sponge "$BASE_DIR/data/$DACO_NAME.prom$$" \
    && mv "$BASE_DIR/data/$DACO_NAME.prom$$" "$BASE_DIR/data/$DACO_NAME.prom"

I run it every 20 min rather than hourly because it looks like not all regions update at the same time, and I thought I saw one update more than once in an hour. I think it's safe to say that if a region other than 6 has an abnormally low number, or the exact same number hour over hour, we got a problem. Previously I would check the positions values on the site when I was trying to figure out if mlat issues were on my end or region-wide. I only started graphing the values the other day, but it seems like it's going to make some cool looking graphs; I can't wait to see what a week looks like. I wonder what those long dips in regions 3 and 4 are? I think the large drops to zero that recover quickly are either a restart of the mlat-server or an issue with my data collector (it needs that timeout on the data grab and better error handling). Shout out to Region 2, the top dogs!
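The timeout and error handling I keep mentioning would look something like this (a sketch; grab and write_prom are names I made up, and the 10 second cap is arbitrary):

```shell
#!/bin/bash
# Rough sketch of the planned hardening: cap how long a grab can run,
# and never clobber the last good .prom file with an empty one.
# Function names and the 10s cap are illustrative.
set -u

FETCH_SECS=10

# Run any command under a time cap; non-zero on timeout or error.
grab() {
    timeout "$FETCH_SECS" "$@"
}

# Atomically replace $1 with stdin, but only if stdin was non-empty,
# so a failed scrape leaves the previous data file in place.
write_prom() {
    local dest=$1 tmp="$1.$$"
    cat > "$tmp"
    if [ -s "$tmp" ]; then
        mv "$tmp" "$dest"
    else
        rm -f "$tmp"
        return 1
    fi
}
```

The loop body in the coverage script would then become something like grab w3m -dump "$url" piped through the same grep/sed chain into write_prom "$file".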