Thanks, set up with 51301 and it's working. Haven't had time to digest the architecture yet. How do the 10 ingest servers merge data from each other before sending to the distribution servers, and do the ingest servers send the same stream of data to both distribution servers? I agree the current client design couldn't handle a large number of entries.
Well, it was running for about 7+ hours; the socket closed at 8:54:46 AM Pacific. Now it's just in a loop of connecting and closing. Do I need to reserve a port before using it?
No, just send to the port. I'm working on it, so the feed server will occasionally reboot all day today. If it's in a reconnect loop, you're likely fighting someone else for the port. This is what happens when people don't read the instructions.
They are merged from Beast to a JSON output, then that is sent to the distribution server and merged again, then sent to the web UI servers. I started working on replacing a lot of the JSON VRS generates with an Apache rewrite rule, and even serving what static files I could, like images, from Apache. There are some questionable practices that don't scale, such as: VRS plane images are rotated server-side for the heading; VRS uses .NET's built-in HTTP server as the webserver; VRS generates a large amount of JSON on request - not just the plane list, but also the configs for the UI (why not serve these as static files based on the config?); and VRS uses a lot of CPU to decode and merge. I was thinking that we need to write a feeder client that sends compressed JSON, and a server that accepts the compressed JSON and merges it into one JSON feed in VRS format. That's basically how all the other feeder sites work. We have to write a client to decode the 30005 feed - we can't look at dump1090's JSON and send that. If someone is feeding FlightAware or any other site, that JSON includes the MLAT results, and if you send FA MLAT anywhere, they send the lawyers.
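The compressed-JSON feeder idea could be sketched roughly like this: a client-side compressor and a server-side merge step that keeps the freshest report per aircraft. This is only an illustration, not the real ADSBx code; the field names (Icao, Lat, Long, PosTime) are loosely modeled on VRS's AircraftList.json, and the single-blob wire format is entirely an assumption:

```python
import gzip
import json

def compress_feed(aircraft_list):
    """Client side: gzip a minimal VRS-style aircraft list for transmission.
    Field names here are illustrative, borrowed from VRS's AircraftList.json."""
    payload = json.dumps({"acList": aircraft_list}).encode("utf-8")
    return gzip.compress(payload)

def merge_feeds(compressed_blobs):
    """Server side: decompress several per-feeder blobs and merge them into
    one feed, keeping the most recent report per ICAO hex code."""
    best = {}
    for blob in compressed_blobs:
        doc = json.loads(gzip.decompress(blob))
        for ac in doc["acList"]:
            icao = ac["Icao"]
            # Keep whichever report has the newer position timestamp.
            if icao not in best or ac.get("PosTime", 0) > best[icao].get("PosTime", 0):
                best[icao] = ac
    return {"acList": sorted(best.values(), key=lambda a: a["Icao"])}
```

The merge is idempotent per aircraft, so overlapping coverage from multiple feeders collapses naturally to one entry each.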
Do you have an architecture diagram? Reviewing the other feed services / cloud DB etc., there are more pieces to everything than the simple feeder -> VRS -> clients.
Not really that complex ... incoming feeds -> HAProxy -> feeder feeds balanced between 12 consolidators -> data streams from the 12 consolidators merged into a single feed and sent to 2 mergers -> merger output -> 30 connections to the globals -> HAProxy <- incoming requests for data balanced between the 30 globals.
Okay, so it appears API requests simply use VRS along with the TCP port. Zip files are generated and served for history; what about the AWS Redshift option - how is data fed to that?
In addition, the TCP feed is now served with a Go-based TCP relay. Not the most recent code, but I'll get the update pushed once testing is done. https://github.com/adsbxchange/tcp-relay-pub-vrs