Companies today are facing a perfect storm in information technology.
As the world becomes more technological, a company's success is increasingly driven – or limited – by its ability to innovate in IT. And IT innovation, in practice, means making software do new things.
But companies still developing with 1990s- or even 1970s-era software technology are bogged down in software change management, bug repair, and painful limits on what their systems can do.
There is a global shortage of skilled software developers, so most teams are spread thin. Even the best developers spend an average of 50% of their time just dealing with bugs and defects, and much of the remainder on repetitive tasks rather than building new capabilities. As a result, crucial software cannot be deployed – it's not done when needed, or not reliable when updated.
Some companies have tried giving up and outsourcing their IT, only to find that buying commodity software means having the same capabilities as one's competitors – a recipe for business failure that keeps the CIO from ever adding real value to the company. Many have decided that's a dead end. Companies are demanding more, not less, power from their IT departments – just when IT leaders are facing both a capacity shortage and a quality shortage.
And it's no longer possible to solve problems by just buying a faster computer: processor hardware has "hit the wall" with respect to raw speed. It simply takes too much electricity (think battery life and heat) to push silicon much past 4 GHz. So hardware makers switched their focus several years ago to multicore and cluster/cloud architectures instead. Today racks of 16-core servers and even 4-core mobile phones are normal, and chip makers are excited to tell us about their 80-core and even 512-core machines. "Thanks for nothing," say many app developers, because traditional programming languages assume that computers do just one thing at a time. The way people are used to writing software simply doesn't work on modern hardware, so a powerhouse machine is often reduced to using just one core – or crashing when the developer tries to use the other cores. (HTTP error messages, anyone?) This limitation couldn't have shown up at a worse time for IT departments. Perfect storm, indeed.
These are the sorts of opportunities commercial Haskell users are tackling. Some are turning to Haskell to build new software; more are using Haskell to add to existing systems that need to do more. Companies that believe their competitive advantage lies in IT – being able to do things their competitors cannot match – are choosing Haskell for high-productivity, reliable innovation.
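To make the multicore point concrete: because Haskell functions are pure by default, the runtime can evaluate independent computations on separate cores without locks or race conditions. Here is a minimal sketch using `parMap` from the standard `parallel` package (the choice of workload, naive Fibonacci, is just an illustration); compile with `ghc -threaded` and run with `+RTS -N` to spread the work across all available cores:

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- A deliberately CPU-heavy pure function: naive Fibonacci.
fib :: Int -> Integer
fib n | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main = do
  -- parMap applies fib to each element, evaluating the
  -- results in parallel; the code is identical whether the
  -- machine has one core or sixteen.
  let results = parMap rdeepseq fib [28 .. 33]
  print (sum results)
```

Because `fib` has no side effects, the compiler and runtime know the six evaluations cannot interfere with one another – the parallelism is safe by construction, not by careful locking.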
We are hiring salespeople to spread the word about Haskell; gathering software engineers to deliver commercial tools and components that open-source projects aren't quite covering; underwriting focused contributions to open-source Haskell projects; and working on new ways to release and deliver reliable Haskell distributions. Through sales of tools, training, and focused consulting, our work will be financially self-sustaining, and we'll keep contributing to the Haskell ecosystem for many years to come. You'll be seeing work from us in each of these areas in the coming months.
Companies, researchers, and open-source contributors: we all are committed to building on the remarkable success Haskell is already achieving – and to using this great FP system to help solve the global crisis in software productivity, scale, and quality.
Do you like this blog post and need help with industrial Haskell, Rust, or DevOps? Contact us.