Trouble feeding to adsbexchange

Discussion in 'Feeding' started by Wizard, Sep 14, 2018.

  1. gavin

    gavin New Member

    That code gives me this
    [attached screenshot]
     
  2. James

    James Guest

    Last edited by a moderator: Jan 26, 2019
  3. MDA

    MDA Administrator Staff Member

    Now try to find something like
    [attached screenshot]
     
  4. gavin

    gavin New Member

    Yes. I'm using 3.6.3

    I'll give that code a go

    Thanks
     
  5. MDA

    MDA Administrator Staff Member

    Check that mlat-client isn't already running for 360radar.
     
  6. gavin

    gavin New Member

    Set up the code, so hopefully all is working now
     
  7. MDA

    MDA Administrator Staff Member

    Can you see your feed on sync matrix?
     
  8. gavin

    gavin New Member

  9. MDA

    MDA Administrator Staff Member

    Did you reboot?
     
  10. gavin

    gavin New Member

    I didn't
     
  11. pil

    pil New Member

    I think region 3 might be having issues right now:

    [attached graph]
     
  12. MDA

    MDA Administrator Staff Member

    Reboot and check if mlat-client is running in htop.
    You can check status in:
    Code:
    nano /var/log/daemon.log
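    Both checks can also be done non-interactively; a small sketch (the log path is the Debian/Raspbian default, and the function name is made up here):

```shell
#!/bin/bash
# Sketch: report whether mlat-client is running, then show its most
# recent entries from the daemon log (path is the Debian default).
mlat_status() {
    local logfile="${1:-/var/log/daemon.log}"
    if pgrep -f mlat-client > /dev/null; then
        echo "mlat-client: running"
    else
        echo "mlat-client: not running"
    fi
    # last few mlat-client lines from the daemon log, if readable
    [ -r "$logfile" ] && grep mlat-client "$logfile" | tail -n 5
}
```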
     
  13. James

    James Guest

    I need to block access to this ... You are connecting to ADSBX MLAT, I assume? ...

     
  14. pil

    pil New Member

    For that graph I just scrape stats from http://www.adsbexchange.com/coverage-(1-6)/ | grep num_positions every 20 min. I set it up to help visualize a theory I had about why the region 1 mlat was not working the other day (I sure do love making graphs). I can stop the data collectors if it's being intrusive, just let me know.

    I do not scrape any stats from the adsbx mlat servers. But I have been connecting to the adsbx mlat for about a year, since I got into this hobby; only region 1, since that's where I have my Pis set up. I only feed to adsbx and only participate in the adsbx mlat. Because I also think that censorship is lame and I like the way you guys do things, I always have at least my best-set-up Pi, 000pberryiL, connected to the adsbx mlat, if not more. I take the hobby pretty seriously and spend almost all my free time on it. I think the mlat server is one of the coolest parts of it all and I know how fragile it is, so I wouldn't connect with anything but the mlat-client and precise coordinates ;)
     
  15. gavin

    gavin New Member

  16. MDA

    MDA Administrator Staff Member

    Is the 360radar feed still working? If yes, then you don't need to do anything more.
    Thanks for feeding.
     
  17. gavin

    gavin New Member

    Yes, 360radar is working perfectly as well, so all looks splendid
     
  18. James

    James Guest


    Scraping them is fine; the map is only made once an hour. It should be served by Apache - I'll have to look. I thought it might have been an open MLAT port sending real-time data or something - that would put load on the MLAT servers.

    If you're willing to share, I'd actually like that code for ADSBx purposes, to send out alerts and build a chart of the average MLAT count over time.

    I assume you are just counting the JSON that makes the MLAT map pins?
     
  19. James

    James Guest

    AWESOME!

    Are you sending to a custom port with that Pi?
     
  20. pil

    pil New Member

    I would love to contribute!

    Right now I run a script via systemd timers (they make cron feel like an old windup egg timer) to create data for node_exporter's textfile collector. I then scoop it up into Prometheus to visualize in Grafana. If you have an existing monitoring framework such as Nagios, I could modify the script to exit based on the numbers it sees; for example, if mlat_users is less than X, exit critical to trigger an email notification. I could also modify it to be a standalone that just runs in the background, just let me know what works.
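
    The Nagios-style variant mentioned above could be a threshold check mapped to the standard plugin exit codes; a rough sketch (the function name and default threshold are made up for illustration):

```shell
#!/bin/bash
# Hypothetical Nagios-style check: return 0 (OK) or 2 (CRITICAL)
# depending on whether the mlat user count is below a threshold.
check_mlat_users() {
    local count="$1" threshold="${2:-50}"
    if [ "$count" -lt "$threshold" ]; then
        echo "CRITICAL: mlat_users=$count (threshold $threshold)"
        return 2
    fi
    echo "OK: mlat_users=$count"
}
```

    Nagios would then trigger the email notification off the CRITICAL exit code.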

    Keep in mind it's basically a one-liner made in a couple of minutes, so it's kinda ghetto. I'm planning on sexying it up a bit this weekend, adding a timeout to the w3m call (w3m is like links or lynx, but not from the dinosaur years) and some better error handling. Both of these scripts require moreutils (for sponge) and w3m. They are set up to write prom data files into /opt/collectors/data (the dir needs to exist and be writable). I can give you the systemd integration I use for scheduling if you'd like, too, just let me know.
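
    The systemd scheduling is roughly a oneshot service plus a timer like the pair below (unit names, the script path, and the 20-minute interval are placeholders; adjust to your setup):

```ini
# /etc/systemd/system/adsbx-collector.service (name is a placeholder)
[Unit]
Description=ADSBx stats collector

[Service]
Type=oneshot
ExecStart=/opt/collectors/adsbx_sync.sh

# /etc/systemd/system/adsbx-collector.timer
[Unit]
Description=Run the ADSBx stats collector every 20 minutes

[Timer]
OnBootSec=2min
OnUnitActiveSec=20min

[Install]
WantedBy=timers.target
```

    Enable it with `systemctl enable --now adsbx-collector.timer`.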

    MLAT Members
    We grab the member count from sync.json for each region, do a little shell magic to format things, then spit out prom data:
    Code:
    #!/bin/bash
    set -e
    BASE_DIR='/opt/collectors'
    DACO_NAME='adsbx_sync'
    # Count the unique mlat usernames in each region's sync.json
    DATA=$(for bar in $(seq 1 6); do
        echo -n "adsbx_sync${bar}_members "
        w3m -dump "https://www.adsbexchange.com/sync-$bar/sync.json" | grep -v 'peers' | grep '": {' | cut -d '"' -f 2 | sort -u | wc -l
    done)
    # Write to a temp file first, then rename, so node_exporter never reads a partial file
    printf '%s\n' "$DATA" | sponge "$BASE_DIR/data/$DACO_NAME.prom$$" && mv "$BASE_DIR/data/$DACO_NAME.prom$$" "$BASE_DIR/data/$DACO_NAME.prom"
    
    you can get a list of the mlat usernames by running just:
    Code:
    w3m -dump https://www.adsbexchange.com/sync-1/sync.json | grep -v 'peers' | grep '": {' | cut -d '"' -f 2 | sort -u
    which can then be piped to hexdump or other shell utilities to find encoded characters, non-UTF-8 bytes, etc. Above we pipe to wc -l to graph only the count of users.
    Right now I'm testing collection rates to find a sweet spot. This data seems to change quite often, more than once a minute.

    MLAT Positions (last hour)
    We hit data.js, which returns JSON; at the very top of it all there is num_positions, which I believe holds the last hour's position count as its value. We grep for that line, do some shell magic to format things, then spit out a .prom file with the data.
    Code:
    #!/bin/bash
    set -e
    BASE_DIR='/opt/collectors'
    DACO_NAME='adsbx_coverage'
    # Pull num_positions out of each region's data.js and rewrite the js
    # variable declaration into a prometheus-friendly metric line
    DATA=$(for bar in $(seq 1 6); do
        w3m -dump "http://www.adsbexchange.com/coverage-$bar/data.js" | grep num_positions | sed "s/var /adsbx_coverage${bar}_/g" | sed 's/ =//g' | sed 's/;//g'
    done)
    # Write to a temp file first, then rename, so node_exporter never reads a partial file
    printf '%s\n' "$DATA" | sponge "$BASE_DIR/data/$DACO_NAME.prom$$" && mv "$BASE_DIR/data/$DACO_NAME.prom$$" "$BASE_DIR/data/$DACO_NAME.prom"
    
    I run it every 20 min rather than hourly because it looks like not all regions update at the same time, and I thought I saw one update more than once in an hour. I think it's safe to say that if a region other than 6 has an abnormally low number, or the exact same number hour over hour, we got a problem.
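
    That "exact same number hour over hour" test is easy to automate; a sketch (the function name and its inputs are made up - in practice the two values would come from consecutive scrapes):

```shell
#!/bin/bash
# Sketch: flag a region whose num_positions value is identical to the
# previous sample, which per the above usually means a stalled region.
check_region() {
    local region="$1" current="$2" previous="$3"
    if [ "$current" = "$previous" ]; then
        echo "WARN: region $region unchanged at $current positions"
        return 1
    fi
    echo "OK: region $region moved $previous -> $current"
}
```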

    Previously I would check the positions values on the site when I was trying to figure out if mlat issues were on my end or region wide. I only started graphing the values the other day, but it seems like it's going to make some cool looking graphs, I can't wait to see what a week looks like:

    [attached graph]

    I wonder what those long dips in regions 3 and 4 are? I think the large drops to zero that recover quickly are either a restart of the mlat-server or an issue with my data collector (it needs that timeout on the data grab and better error handling).

    Shout out to Region 2, the top dogs!
     