
So who did BA outsource their IT to?


    #91
    Originally posted by AtW
    FTFY
    The flight was cancelled.



      #92
      Originally posted by Troll
      Yes



      Yes again
      Well plagiarised, Mr Troll.
      His heart is in the right place - shame we can't say the same about his brain...



        #93
        Originally posted by AtW
        FTFY
        Indeed.
        Every contractor should have a taxi company on speed-dial. You never know when you might need a fast car to the airport.



          #94
          Originally posted by AtW
          Even if it was a meteorite hitting the whole datacenter and destroying everything - they should have had another one hundreds of miles away as a backup, ready to go at a moment's notice.
          Certainly BA had that 25 years ago and even 40 years ago. If the main DC blew up, the second site or DR would take over seamlessly.

          The above strategy is still used by major banks and corporates and works fine. I think what BA is saying about their outage is 100% BS. Someone has screwed the database badly and it has been made worse by a bad fix.

          I have tested bank systems where you can pull out the plug on a DB server and on restart the TTM will recover without loss.
          "A people that elect corrupt politicians, imposters, thieves and traitors are not victims, but accomplices," George Orwell



            #95
            Originally posted by Paddy
            Certainly BA had that 25 years ago and even 40 years ago. If the main DC blew up, the second site or DR would take over seamlessly.

            The above strategy is still used by major banks and corporates and works fine. I think what BA is saying about their outage is 100% BS. Someone has screwed the database badly and it has been made worse by a bad fix.

            I have tested bank systems where you can pull out the plug on a DB server and on restart the TTM will recover without loss.
            Certainly true now. However, in 1991 I was working on the UBS trading floor. Someone decided to test the UPS at about 10am one day. It was about 3pm before trading started to resume. Thankfully things have come a long way.



              #96
              Telegraph looks like it's just the Home page
              Telegraph that, for months, tried to block non-subscribers by using cookies.
              bloggoth

              If everything isn't black and white, I say, 'Why the hell not?'
              John Wayne (My guru, not to be confused with my beloved prophet Jeremy Clarkson)



                #97
                ToryGraph has turned into a poor cousin of The Daily Mail - click-baiting tulip behind a paywall: they keep praising May as if those Brothers want some massive favour from her...



                  #98
                  Originally posted by xoggoth
                  Telegraph that, for months, tried to block non-subscribers by using cookies.
                  I had already given up before the paywall.



                    #99
                    Originally posted by original PM
                    Are we really supposed to believe that a company as large as TCS do not have this level of basic protection against outages?

                    Either TCS are a complete and utter bunch of incompetent bollock juggling idiots .....
                    Gets my vote



                      #100
                      Originally posted by Paddy
                      Certainly BA had that 25 years ago and even 40 years ago. If the main DC blew up, the second site or DR would take over seamlessly.

                      The above strategy is still used by major banks and corporates and works fine. I think what BA is saying about their outage is 100% BS. Someone has screwed the database badly and it has been made worse by a bad fix.

                      I have tested bank systems where you can pull out the plug on a DB server and on restart the TTM will recover without loss.
                      I'm 5 years out of date with bank DR, but you used to be lucky if the switch happened within 4 hrs of the decision being made, and the decision itself took 3-4 hrs. Given the chances of it actually being needed and the cost of regular testing, I doubt it has improved significantly. I did work for one of the largest banks, and we were pushing the limits of the technology even when everything worked; a worst-case DR would have seriously stressed it.

                      DR testing often has a period of checking that everything is in place and working correctly before commencement, such as confirming all the backups actually worked before DR is invoked.

                      I do recall one incident where standalone testing of the UPS generators blew everything because they weren't isolated.
                      Last edited by BigRed; 29 May 2017, 22:29.
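
                      On the "check the backups actually worked before DR is invoked" point, a pre-failover check might look something like the sketch below. This is an assumption of how such a check could be scripted - the paths, digests and the 24-hour threshold are invented for illustration: each expected backup must exist, be recent enough, and match the checksum recorded when it was taken, otherwise DR is not invoked.
                      Code:
                      import hashlib, os, time

                      # Expected backups and their checksums (these would come from the backup job's manifest).
                      BACKUPS = {
                          "/dr/backups/bookings.dmp": "placeholder-sha256",
                          "/dr/backups/crew.dmp": "placeholder-sha256",
                      }
                      MAX_AGE_HOURS = 24

                      def sha256_of(path):
                          h = hashlib.sha256()
                          with open(path, "rb") as f:
                              for chunk in iter(lambda: f.read(1 << 20), b""):
                                  h.update(chunk)
                          return h.hexdigest()

                      def backups_ok():
                          ok = True
                          for path, expected in BACKUPS.items():
                              if not os.path.exists(path):
                                  print(f"MISSING  {path}")
                                  ok = False
                                  continue
                              age_h = (time.time() - os.path.getmtime(path)) / 3600
                              if age_h > MAX_AGE_HOURS:
                                  print(f"STALE    {path} ({age_h:.0f}h old)")
                                  ok = False
                              if sha256_of(path) != expected:
                                  print(f"BAD SUM  {path}")
                                  ok = False
                          return ok

                      if __name__ == "__main__":
                          print("Pre-checks passed - OK to invoke DR" if backups_ok() else "Pre-checks failed - do NOT invoke DR")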

