
GPU, GPGPU & Parallel Programming


    I'm looking for a new area of development to get into. I've recently been doing the usual C# ASP.NET MVC web dev stuff that seems pretty mainstream nowadays, but I find it tiresome having to worry about trivialities such as the fine positioning of a text box on a web page, and I'm getting bored of firing off simple SQL queries at a database back end. I'm not sure I want to continue down that route.

    I think I'd rather be writing back-end number-crunching code and keep away from front-end UI stuff. I was thinking that getting into GPU/GPGPU/parallel programming would be quite fascinating, and possibly a growth area over the next few years. Apparently there is huge processing power to be exploited in graphics card GPUs due to their multi-core architecture. I also read that they can potentially crunch numbers not only faster than the CPU but also with less power consumption (an important consideration, particularly for mobile devices).

    This is all quite reminiscent of transputers in the 1980s. That type of parallel architecture finally seems to be catching on in a big way now that we have multi-core CPUs and GPUs.

    I must admit I've only just hit on this idea and haven't really researched it yet, so I cannot speak from any great knowledge or experience. I've just started reading "C++ AMP" (by Kate Gregory & Ade Miller). I'm not necessarily limiting myself to C++ at this stage, though.
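
    For anyone wondering what this style of code actually looks like, here's a minimal vector-add sketch in CUDA (my own toy example rather than anything from the book, so treat the names and details as illustrative only):

    #include <cuda_runtime.h>
    #include <vector>

    // Each GPU thread handles exactly one array element: c[i] = a[i] + b[i].
    __global__ void vecAdd(const float* a, const float* b, float* c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                       // the last block may overshoot n
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;           // ~1M elements
        const size_t bytes = n * sizeof(float);
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

        // Device allocations plus explicit copies across the PCIe bus.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes);
        cudaMalloc(&db, bytes);
        cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha.data(), bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb.data(), bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        // Copying the result back also synchronises with the kernel.
        cudaMemcpy(hc.data(), dc, bytes, cudaMemcpyDeviceToHost);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        return 0;
    }

    The interesting bit is that the million additions are never looped over on the CPU: each one is handed to its own lightweight GPU thread.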

    Have any of you guys dabbled in this area?

    If so, what business domain and languages/tech did you work with?

    What's your opinion of the future prospects for this tech?

    #2
    Dabbled only, so I can't offer much substance, but I didn't find it to be sufficiently broadly applicable for the range of parallel problems I was looking to solve. There's quite a large overhead in learning things like CUDA (for NVIDIA hardware). There are higher-level interfaces (e.g. for Java), and OpenCL is more broadly applicable, but you need to understand the problem quite deeply to know how to map it efficiently to the hardware (i.e. domain knowledge) - see the sketch below. I think GP-GPUs are very useful indeed for certain problems, particularly on a low budget, but less so for problems that involve a mixture of constraints, such as I/O, memory and processing. That was my impression, anyway.
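
    To make the "mapping it efficiently to the hardware" point concrete, here's a toy CUDA illustration (my own example, with made-up names): both kernels below do the same arithmetic over a row-major w x h matrix, but in the first one the threads of a warp read adjacent addresses (coalesced loads), while in the second they read addresses w floats apart, which typically runs many times slower:

    #include <cuda_runtime.h>

    // Sum each column of a row-major w x h matrix. At every loop step the
    // threads of a warp read consecutive addresses, so loads are coalesced.
    __global__ void colSumCoalesced(const float* m, float* out, int w, int h)
    {
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (col >= w) return;
        float s = 0.0f;
        for (int row = 0; row < h; ++row)
            s += m[row * w + col];
        out[col] = s;
    }

    // Same arithmetic, but each thread walks one row, so neighbouring
    // threads are w floats apart and each load is a separate transaction.
    __global__ void rowSumStrided(const float* m, float* out, int w, int h)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row >= h) return;
        float s = 0.0f;
        for (int col = 0; col < w; ++col)
            s += m[row * w + col];
        out[row] = s;
    }

    int main()
    {
        const int w = 4096, h = 4096;    // square, so one output buffer fits both
        float *m, *out;
        cudaMalloc(&m, (size_t)w * h * sizeof(float));
        cudaMalloc(&out, w * sizeof(float));

        // Contents are uninitialised - run under a profiler and compare times.
        colSumCoalesced<<<(w + 255) / 256, 256>>>(m, out, w, h);
        rowSumStrided<<<(h + 255) / 256, 256>>>(m, out, w, h);
        cudaDeviceSynchronize();

        cudaFree(m); cudaFree(out);
        return 0;
    }

    Nothing in the source shouts about the difference; you have to know how the memory system works, which is exactly the learning overhead I mean.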

    TBH, the old adage of keeping things as simple as possible and avoiding optimization unless absolutely necessary is a good one. Whether there's contracting demand for this level of specialism, I don't really know. I sell domain knowledge and use computing as a means to an end, and I'd think that, in many instances, you'd need that domain knowledge as much as any computational knowledge. I could be wrong, though. FWIW, I find myself using software parallelism through distributed computing with frameworks like Apache Spark, as it's very straightforward to learn and deploy on diverse hardware, and easy to re-use for multiple problems.

      #3
      ASP.NET MVC is going to pay the bills, pay the mortgage off, buy you a top-of-the-range executive German car, and take the family on nice foreign holidays for the next 10 years. What's not to like? Money for old rope.

      Personally, I'd forget the other stuff, stick with the good ole MS gravy train. Toot toot!

        #4
        GPU programming has pretty limited scope and is a very niche skill - unless you have a particular interest in that problem domain, I would not bet the farm on it. HOWEVER, what can be useful is learning how to solve problems in a parallel way, especially with multiple "cloud" servers involved - that may or may not include GPUs.

          #5
          Originally posted by TTheTTTTT View Post
          What's your opinion of the future prospects for this tech?
          It seems to me it's a very similar area to assembler language programming: it's far and away more efficient than normal high-level language programming, but the ratio of jobs available for HLL programmers versus GPU programmers must be something like 1000:1 or worse.

          Basically, almost no applications need GPU-level programming (nor assembler language programming, for that matter). Those application areas that do need it (e.g. video codecs and the like) have their own special experience requirements, which means that a) you won't easily find work just through having the GPU experience, and b) the people with the domain experience have already got the GPU experience.

          So it doesn't seem to me to be particularly promising unless you already have experience in a domain that can benefit from GPU-level programming. And I speak as someone who has video codec exposure in assembler language on several different hardware platforms.

          Boo

            #6
            I experimented with AMP and OpenMP a bit. I'm writing multithreaded code all the time, so it's a natural progression and something I would like to get into. I'm not convinced the applications exist beyond a few specialist areas; as with threads, running things in parallel sounds like a great idea, but sometimes when you analyse it, it turns out to be less useful than you think, or just plain slower.

            With a GPU you've also got to get the data in and out of it, so I would think it's only really going to fly if you have to carry out a big, complicated, massively parallel calculation on a small amount of data, or if the data stays in video memory, as with video decoding or a game. At a previous job they told me they'd implemented something with CUDA (and the sort of thing they do sounds exactly like something that would benefit), but found it was no faster than using the CPU and added a lot of code complexity.
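
            If anyone wants to see that transfer cost for themselves, here's a rough CUDA timing sketch (my own toy example; the exact numbers will vary by machine) that times the copy in, the kernel and the copy out separately. For trivial per-element work like this, the two copies usually dwarf the compute:

            #include <cstdio>
            #include <cuda_runtime.h>
            #include <vector>

            // Deliberately trivial per-element work.
            __global__ void scale(float* x, int n)
            {
                int i = blockIdx.x * blockDim.x + threadIdx.x;
                if (i < n) x[i] *= 2.0f;
            }

            static float ms(cudaEvent_t a, cudaEvent_t b)
            {
                float t = 0.0f;
                cudaEventElapsedTime(&t, a, b);
                return t;
            }

            int main()
            {
                const int n = 1 << 24;               // 16M floats, ~64 MB
                const size_t bytes = n * sizeof(float);
                std::vector<float> host(n, 1.0f);

                float* dev;
                cudaMalloc(&dev, bytes);

                cudaEvent_t t0, t1, t2, t3;
                cudaEventCreate(&t0); cudaEventCreate(&t1);
                cudaEventCreate(&t2); cudaEventCreate(&t3);

                cudaEventRecord(t0);
                cudaMemcpy(dev, host.data(), bytes, cudaMemcpyHostToDevice);
                cudaEventRecord(t1);
                scale<<<(n + 255) / 256, 256>>>(dev, n);
                cudaEventRecord(t2);
                cudaMemcpy(host.data(), dev, bytes, cudaMemcpyDeviceToHost);
                cudaEventRecord(t3);
                cudaEventSynchronize(t3);

                printf("copy in : %.2f ms\n", ms(t0, t1));
                printf("kernel  : %.2f ms\n", ms(t1, t2));
                printf("copy out: %.2f ms\n", ms(t2, t3));

                cudaEventDestroy(t0); cudaEventDestroy(t1);
                cudaEventDestroy(t2); cudaEventDestroy(t3);
                cudaFree(dev);
                return 0;
            }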

            Interesting though. I've seen more jobs asking for OpenGL.
            Will work inside IR35. Or for food.

              #7
              GPGPUs will get more interesting next year once they get more or less unified memory (without crazy access penalties) and more general CPU-like capabilities. Otherwise they are limited in the amount of data they can keep in RAM, and you need to be able to feed lots of GPU cores with it, which only some tasks can benefit from.
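
              For what it's worth, CUDA already exposes a software flavour of this through cudaMallocManaged ("managed" or unified memory): one allocation is visible to both the CPU and the GPU, with the driver migrating pages behind the scenes - though the access penalties I mean still apply today. A minimal sketch (my own example):

              #include <cuda_runtime.h>

              __global__ void inc(int* x, int n)
              {
                  int i = blockIdx.x * blockDim.x + threadIdx.x;
                  if (i < n) x[i] += 1;
              }

              int main()
              {
                  const int n = 1 << 20;
                  int* x;
                  // One allocation visible to both CPU and GPU; no cudaMemcpy
                  // calls, the driver migrates pages on demand instead.
                  cudaMallocManaged(&x, n * sizeof(int));

                  for (int i = 0; i < n; ++i) x[i] = i;   // touched on the CPU

                  inc<<<(n + 255) / 256, 256>>>(x, n);    // touched on the GPU
                  cudaDeviceSynchronize();                // wait before CPU reads

                  const bool ok = (x[0] == 1 && x[n - 1] == n);
                  cudaFree(x);
                  return ok ? 0 : 1;
              }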

                #8
                Originally posted by AtW View Post
                GPGPUs will get more interesting next year once they get more or less unified memory (without crazy access penalties) and more general CPU-like capabilities. Otherwise they are limited in the amount of data they can keep in RAM, and you need to be able to feed lots of GPU cores with it, which only some tasks can benefit from.
                Am I being thick, or is all this GPU stuff just using the GFX card's processing power to do what the general CPU does?

                What's the difference? Surely a 4-core POWER8 or a SPARC CPU, for example, has more oomph than what is effectively a gaming card?

                PS I know I'm missing something....

                  #9
                  Originally posted by stek View Post
                  Am I being thick, or is all this GPU stuff just using the GFX card's processing power to do what the general CPU does?

                  What's the difference? Surely a 4-core POWER8 or a SPARC CPU, for example, has more oomph than what is effectively a gaming card?

                  PS I know I'm missing something....
                  The point about a GPU is that you can get vastly more cores (as in thousands) for a fraction of the price that you would pay for CPU cores. They are, therefore, very well suited to massively parallel tasks, but not all such tasks (e.g. not tasks that require a lot of I/O and memory).

                    #10
                    Originally posted by jamesbrown View Post
                    The point about a GPU is that you can get vastly more cores (as in thousands) for a fraction of the price that you would pay for CPU cores. They are, therefore, very well suited to massively parallel tasks, but not all such tasks (e.g. not tasks that require a lot of I/O and memory).
                    OK, I get that, but if I had access to a SPARC proc with, what, 1024 effective vCPUs - is that better? Or SGI's old NUBUS tech, where all memory was shared between the CPU, I/O, GPU, the lot?
