Companies today are facing a perfect storm in information technology.
As the world becomes more technological, more and more of a company’s success is driven – or limited – by its ability to innovate in IT. And IT innovation, in practice, means making software do new things.
But companies still developing with 1990s and even 1970s era software technology are bogged down in software change management, bug repair, and painful limitations to what their systems can do.
There is a global shortage of skilled software developers, so most teams are spread thin. Even the best developers spend an average of 50% of their time just dealing with bugs and defects, and much of the remainder on repetitive tasks rather than building new capabilities. As a result, crucial software cannot be deployed – it isn’t done when needed, or isn’t reliable when updated.
Some companies have tried just giving up and outsourcing their IT, but found that buying commodity software means having only the same capabilities as one’s competitors – a recipe for business failure, keeping the CIO from ever adding real value to the company. Many have decided that’s a dead end. Companies are demanding more, not less, power from their IT departments – and just when IT leaders are facing both a capacity shortage and a quality shortage.
And it’s no longer possible to solve problems by just buying a faster computer: processor hardware has “hit the wall” with respect to raw speed. It simply takes too much electricity (think battery life and heat) to push silicon much past 4 GHz. So hardware makers switched their focus several years ago to multicore and cluster/cloud architectures instead. Today racks of 16-core servers and even 4-core mobile phones are normal, and chip makers are excited to tell us about their 80-core and even 512-core machines. “Thanks for nothing,” say many app developers, because traditional programming languages assume that computers do just one thing at a time. This means the way people are used to writing software just doesn’t work on modern hardware, and a powerhouse machine is often reduced to using just one core – or crashing when the developer tries to use the other cores. (HTTP error messages, anyone?) This limitation couldn’t have shown up at a worse time for IT departments. Perfect storm, indeed.
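To make the contrast concrete, here is a minimal sketch of how Haskell lets a program use extra cores without threads, locks, or callbacks. It uses `par` and `pseq` from `GHC.Conc` (part of GHC’s base library); the `sumRange` function and the ranges chosen are illustrative, not from any particular product.

```haskell
import GHC.Conc (par, pseq)

-- An illustrative "expensive" pure computation.
sumRange :: Int -> Int -> Int
sumRange lo hi = sum [lo .. hi]

main :: IO ()
main = do
  let a = sumRange 1 5000000
      b = sumRange 5000001 10000000
      -- 'par' sparks 'a' for evaluation on another core, while
      -- 'pseq' forces 'b' on this one; the result is unchanged.
      total = a `par` (b `pseq` (a + b))
  print total
```

Because `sumRange` is a pure function, evaluating its two calls in parallel cannot change the answer – the compiler and runtime (with `-threaded` and `+RTS -N`) handle the scheduling.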
A solution 21 years in the making

Fortunately, there is a new way to make software: Functional Programming with Haskell. Haskell, as you may know, was created over 21 years by a consortium of top universities plus a broad community of other bright innovators. In extensive interviews we’ve conducted at FP Complete, companies say Haskell brings them:
- Productive innovation, needing less time and fewer people per project
- Quality software that reliably works and can be reliably maintained
- Scale to handle any required size and speed of data processing
- High-level maintainable code
- High performance comparable to the fastest languages
- Extraordinary bug-proofing, with powerful analysis built into the compiler
- Portable cross-platform code
- Ability to take advantage of the most powerful modern hardware including multicores, GPUs, clusters, and clouds
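A small sketch of the “bug-proofing” point above: Haskell’s type system makes missing values explicit, so code that forgets the “not found” case is flagged at compile time rather than crashing in production. The names here (`lookupPrice`, `describe`) are illustrative; only `lookup` and `Maybe` come from the standard library.

```haskell
-- A missing price is a 'Nothing', not a null pointer or an exception.
lookupPrice :: String -> [(String, Double)] -> Maybe Double
lookupPrice = lookup

-- Pattern matching must cover both cases; dropping the 'Nothing'
-- equation makes the compiler warn about an incomplete match.
describe :: Maybe Double -> String
describe (Just p) = "price: " ++ show p
describe Nothing  = "no price on file"

main :: IO ()
main = putStrLn (describe (lookupPrice "widget" [("widget", 9.99)]))
```

An entire class of null-reference bugs is ruled out before the program ever runs.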
Beyond just fixing the crisis

Haskell arrives just in time to jump on a number of new opportunities. As computing systems become more powerful, Big Data has provided more and more opportunities to innovate. Imagine tracking every condition that changes demand for your products, and adjusting pricing, production, and inputs continuously. Imagine being able to identify complex financial patterns, and to turn the dial consciously between maximum profit and minimum risk. Imagine understanding online user behavior and continuously tuning what offers and articles users see – and being able to change these formulas at any time without breaking anything. Imagine analyzing everything a mobile device senses and proactively giving relevant information to the person carrying it. And imagine doing all this with no ceiling on the power of software or hardware that can be applied.
These are the sorts of opportunities commercial Haskell users are tackling. Some are turning to Haskell to build new software; more are using Haskell to add to existing systems that need to do more. Companies that believe their competitive advantage lies in IT – being able to do things their competitors cannot match – are choosing Haskell for high-productivity, reliable innovation.
Our next steps

Over the next three months we will be publishing several detailed case studies to share what we’ve learned from specific companies using Haskell. We hope you’ll find these useful in your own companies. (Feel free to send me your own success stories and suggestions by emailing aaron /at/ fpcomplete /dot/ com.) We will also be releasing a major new online facility for teaching and learning Haskell, as a free service to the community.
We are hiring salespeople to spread the word about Haskell; gathering software engineers to deliver commercial tools and components that open-source projects aren’t quite covering; underwriting some focused contributions to open-source Haskell projects; and working on new ways to release and deliver reliable Haskell distributions. Through sales of tools, training, and focused consulting, our work will be financially self-sustaining, and we’ll keep contributing to the Haskell ecosystem for many years to come. You’ll be seeing work from us in every one of these areas in the coming months.
Companies, researchers, and open-source contributors: we all are committed to building on the remarkable success Haskell is already achieving – and to using this great FP system to help solve the global crisis in software productivity, scale, and quality.