
How would you design a background C# application to accept external commands?


    #1

    I want to write an application which will run in the background, gathering data from multiple sources (remote devices) and aggregating into a DB; my preference is C# and that's how we do most things these days.

    However, sometimes I would like to change which sources are consumed. This is not a polling application (otherwise I'd just have it re-read its config on each poll); it maintains open connections, etc. It could still re-read the config in a separate thread and kill/open connections, I guess. Or I could even go low-tech and make users kill and re-launch the process each time... it's a server application, so that would be a sysadmin task performed very rarely. Typically we need to release a connection so someone else can connect to the external source, which only allows one connection at a time.

    But this is all a bit clunky. When I did these things in C++ we used COM, so a little utility GUI application could connect to the server application and interact with it. C# supports COM, but I really dislike it; I'd prefer to avoid registration scripts, both as a matter of preference and for portability.
    Is there an easy way to do the same thing in C#/.NET? The utility app would run on the same server - or rather, restricting it to the same server would be such a minimal inconvenience that we can accept the limitation if it allows a simpler implementation.

    How might you design this, .NET experts?
    Originally posted by MaryPoppins
    I'd still not breastfeed a nazi
    Originally posted by vetran
    Urine is quite nourishing

    #2
    Would you not want to try and get the .NET code running as Azure Functions?

    Then you can simply start and stop individual processes to your heart's content through the portal. No server to deal with.



      #3
      We're not using Azure, so I guess the short answer is "no".



        #4
        My first thought would be to write a Windows service and have it spawn multiple threads that connect to each source. I would then have a config table with the source details and whether each source is disabled or not.

        Each thread could connect to the remote source and, after updating the DB with data from it, check the config table; if the source is disabled, disconnect. The thread would then have some kind of timer to connect to the DB and see if it's enabled again, and if so, initiate the connection again. That way you could disable sources via a config table.

        Not sure how your remote connections work: do you poll the remote source for data via the open connection and then update the DB, or does the remote source initiate the call via the open connection?
        Last edited by woohoo; 24 September 2019, 19:37.
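A minimal sketch of that per-thread loop, with an `isEnabled` delegate standing in for the config-table lookup and the actual connect/disconnect left as comments (all names here are invented for illustration):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// One worker per source: connect while the config table says the source is
// enabled, disconnect when it is disabled, and re-check on a timer.
public class SourceWorker
{
    readonly Func<bool> isEnabled;      // stand-in for the config-table query
    readonly TimeSpan pollInterval;

    public SourceWorker(Func<bool> isEnabled, TimeSpan pollInterval)
    {
        this.isEnabled = isEnabled;
        this.pollInterval = pollInterval;
    }

    public bool Connected { get; private set; }

    public async Task RunAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            bool shouldRun = isEnabled();
            if (shouldRun && !Connected)
                Connected = true;        // open the remote connection here
            else if (!shouldRun && Connected)
                Connected = false;       // close the remote connection here

            try { await Task.Delay(pollInterval, ct); }
            catch (TaskCanceledException) { /* shutting down */ }
        }
    }
}
```

The same token that stops the timer doubles as the service-wide shutdown signal, so a Windows service's OnStop only has to cancel it.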



          #5
          Originally posted by d000hg View Post
          We're not using Azure, so I guess the short answer is "no"
          You don't need to be in the cloud - Azure Functions can be used on premises.

          See

          Azure Functions 2.0: Create, debug and deploy to Azure Kubernetes Service (AKS)

          and

          Azure Functions on Kubernetes - Asavari Tayal - Medium

          for 2 quick overviews.

          Bringing serverless convenience to containers has more details on how to set up the on-premise side of things.
          merely at clientco for the entertainment



            #6
            Originally posted by woohoo View Post
            My first thought would be to write a Windows service and have it spawn multiple threads that connect to each source. I would then have a config table with the source details and whether each source is disabled or not.

            Each thread could connect to the remote source and, after updating the DB with data from it, check the config table; if the source is disabled, disconnect. The thread would then have some kind of timer to connect to the DB and see if it's enabled again, and if so, initiate the connection again. That way you could disable sources via a config table.

            Not sure how your remote connections work: do you poll the remote source for data via the open connection and then update the DB, or does the remote source initiate the call via the open connection?
            Thanks. I am still deciphering the communication documentation, but I think we maintain an open port to each remote source and it pushes updates to us, which probably means a worker thread waiting on each open connection anyway. So if we take your approach, I guess either a single thread could keep checking the config data and create/destroy connection threads as needed, or we still have one thread per source and that thread has a separate worker.

            Now there's also the case where we add a new source rather than just temporarily disabling existing configured ones, but TBH I think we'd be happier just restarting the system in such cases anyway!

            It'd be neater if the thing were event-driven rather than polling the config file/DB, but I don't particularly want to design the entire architecture around that when, in reality, it makes no difference to anyone if it takes a minute for config changes to take effect.
            Last edited by d000hg; 25 September 2019, 10:15.
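The first variant above (a single thread checking config and creating/destroying connection threads) can be sketched as a reconcile loop. `readEnabledSources` stands in for whatever the config table returns, and `ConsumeSource` is a placeholder for the real per-source connection handling; all names are invented:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// A single controller periodically reconciles the set of running per-source
// tasks against the set of sources the config currently enables.
public class SourceController
{
    readonly Func<ISet<string>> readEnabledSources;   // stand-in for config query
    readonly Dictionary<string, CancellationTokenSource> running = new();

    public SourceController(Func<ISet<string>> readEnabledSources) =>
        this.readEnabledSources = readEnabledSources;

    public IReadOnlyCollection<string> Running => running.Keys;

    public void Reconcile()
    {
        var desired = readEnabledSources();

        // Start a task for each newly enabled (or newly added) source.
        foreach (var id in desired.Except(running.Keys).ToList())
        {
            var cts = new CancellationTokenSource();
            running[id] = cts;
            _ = Task.Run(() => ConsumeSource(id, cts.Token));
        }

        // Cancel tasks whose source was disabled; the worker closes its
        // connection, freeing the single slot for someone else.
        foreach (var id in running.Keys.Except(desired).ToList())
        {
            running[id].Cancel();
            running.Remove(id);
        }
    }

    static async Task ConsumeSource(string id, CancellationToken ct)
    {
        try
        {
            while (!ct.IsCancellationRequested)
                await Task.Delay(100, ct);   // placeholder for reading pushed data
        }
        catch (OperationCanceledException) { /* connection closed */ }
    }
}
```

A side benefit of reconciling against the full desired set is that a brand-new source also gets picked up without a restart: it simply appears in the set.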



              #7
              Originally posted by d000hg View Post
              I am still deciphering the communication documentation but I think we maintain an open port to each remote source and it pushes updates to us
              The last time I did anything similar to this (and I really don't know if it helps) was for a profiler, which would inject a static class into C# code at the IL level before it was compiled. I then had an app that acted as a WCF server. I used NetNamedPipeBinding for fast communication on the same machine (probably not applicable to you).

              The reason I mention this is the communication documentation you mentioned, and that you're not sure yet how it works. It would be much more common for the remote source to hit your WCF server (for example) and pass in the information for you to then log. I'm not entirely sure why a remote source would keep an open connection, or why you would have several of these open connections at the same time.

              But I'm genuinely interested, so whichever approach you take, let me know more.
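For the original question, the same-machine IPC that NetNamedPipeBinding provides can also be had on modern .NET without WCF (and without COM registration) via System.IO.Pipes. A minimal sketch, with the pipe name and command text invented for illustration:

```csharp
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

public static class PipeCommandDemo
{
    // Server side: the background app waits for one line-oriented command
    // from a local utility app (e.g. "disable-source:3").
    public static async Task<string> ListenOnce(string pipeName)
    {
        using var server = new NamedPipeServerStream(pipeName, PipeDirection.In);
        await server.WaitForConnectionAsync();
        using var reader = new StreamReader(server);
        return await reader.ReadLineAsync() ?? "";
    }

    // Client side: the little utility GUI/CLI sends a command and exits.
    public static async Task Send(string pipeName, string command)
    {
        using var client = new NamedPipeClientStream(".", pipeName, PipeDirection.Out);
        await client.ConnectAsync();
        using var writer = new StreamWriter(client) { AutoFlush = true };
        await writer.WriteLineAsync(command);
    }

    static async Task Main()
    {
        // Both ends live in one process here purely to show the round trip.
        var pending = ListenOnce("demo-command-pipe");
        await Send("demo-command-pipe", "disable-source:3");
        Console.WriteLine(await pending);   // prints: disable-source:3
    }
}
```

A real server would loop, accepting one command connection at a time on a dedicated thread, while the data-gathering threads carry on undisturbed.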



                #8
                The open-connection thing may very well be my rustiness at low-level network coding; possibly the device just sends stuff to our monitored port, which does sound a bit more sensible.

                Why there are multiple connections is simpler - we're communicating with external specialist devices in several locations. There is no central server, so we're communicating independently with each one. Now I suppose that does mean we could launch a whole load of processes, one per external device, instead of a multi-threaded application. But that seems messy.



                  #9
                  Do the machines push their data on an event, and can you control the protocol the event is triggered on? If you can, I would have each device push to a queue (MSMQ or RabbitMQ) and have a listener on a server pull each event from the queue and process it.

                  In the cloud, IoT Hub is perfect for this. I would then have an Azure Function process the data from the IoT Hub queue and store it in a database.

                  If you are pulling the data, a web service (an Azure Function on a timer ;-) could run every n seconds, scan a config database, and connect to each device to pull the data via whatever communication protocol (named pipes, WCF, HTTP). Or do the devices call back with a message saying "hey, I have data to process, please connect"? You could then push the data to a queue on the machine for further processing, to ensure the receive-data and process-data services are de-coupled.
                  Last edited by BlueSharp; 27 September 2019, 08:12.
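The receive/process decoupling described above can be sketched in-process with a BlockingCollection standing in for the queue (MSMQ/RabbitMQ in a real deployment); the message strings are invented:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class QueueDecouplingDemo
{
    // The receive side and the process side share only the queue, so either
    // can be changed, restarted, or scaled independently of the other.
    public static IReadOnlyList<string> Run(IEnumerable<string> incoming)
    {
        using var queue = new BlockingCollection<string>(boundedCapacity: 1000);
        var processed = new List<string>();

        var consumer = Task.Run(() =>
        {
            foreach (var msg in queue.GetConsumingEnumerable())
                processed.Add(msg);          // e.g. aggregate into the DB here
        });

        foreach (var msg in incoming)        // e.g. pushed by device listeners
            queue.Add(msg);
        queue.CompleteAdding();              // signal "no more messages"
        consumer.Wait();
        return processed;
    }
}
```

The bounded capacity also gives natural back-pressure: if processing falls behind, the device listeners block on Add rather than exhausting memory.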
                  Make Mercia Great Again!



                    #10
                    My current understanding is that, first of all, this is all raw TCP/IP: you build a message byte-by-byte in a very exact way and then send it over Ethernet. This message says "I want to receive a datastream". The remote device will then start tossing data at you each time something of interest happens; as mentioned above, it's not clear if the connection remains open, and I haven't done this low-level stuff for a long time - but it IS pretty low level. I believe the underlying hardware is really working over a serial interface, etc., with an Ethernet adapter (can't go into any more detail for NDA reasons).
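Assuming the push model described here, the byte-exact subscribe-then-receive exchange might look like the sketch below; a loopback TcpListener plays the remote device, and every byte value is made up for illustration:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

public static class TcpPushDemo
{
    // A stand-in "device": reads the 3-byte subscribe request, then pushes
    // one 2-byte update, as the real devices are described as doing.
    public static async Task<byte[]> SubscribeAndReadOnce()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        var device = Task.Run(async () =>
        {
            using var peer = await listener.AcceptTcpClientAsync();
            var ns = peer.GetStream();
            var req = new byte[3];
            int got = 0;
            while (got < 3) got += await ns.ReadAsync(req, got, 3 - got);
            await ns.WriteAsync(new byte[] { 0x01, 0x2A }, 0, 2);  // a pushed update
        });

        using var client = new TcpClient();
        await client.ConnectAsync(IPAddress.Loopback, port);
        var s = client.GetStream();
        // Hand-built, byte-exact subscribe message (values invented here).
        await s.WriteAsync(new byte[] { 0xAA, 0x01, 0x00 }, 0, 3);

        var update = new byte[2];
        int read = 0;
        while (read < 2) read += await s.ReadAsync(update, read, 2 - read);
        await device;
        listener.Stop();
        return update;
    }

    static async Task Main() =>
        Console.WriteLine(BitConverter.ToString(await SubscribeAndReadOnce())); // prints: 01-2A
}
```

The read loops matter: TCP is a byte stream, so a single ReadAsync is not guaranteed to return a whole message; framing has to come from the device's protocol.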

                    Comment
