[
{
"name": "IT Compliance",
"slug": "it-compliance",
"permalink": "https://www.fpcomplete.com/categories/it-compliance/",
"pages": [
{
"relative_path": "blog/pathway-to-information-security-management-and-certification.md",
"content": "<p><strong>The Pathway to Information Security Management and Certification</strong></p>\n<p>Information security is a complex area to handle well. The possible risks to information assets and reputation, including computer systems and countless filing cabinets full of valuable proprietary information, are difficult to determine and bring under control. Plus, this needs to be done in ways that don't unduly interfere with the legitimate use of information by authorized users. </p>\n<p>The most practical and cost-effective way to handle information security and governance obligations, and to be seen to be doing so, is to adopt an Information Security Management System (ISMS) that complies with the international standard such as SOC-2 or ISO 27001. An ISMS is a framework of policies, processes and controls used to manage information security in a structured, systematic manner.</p>\n<p><strong>Why implement an ISMS and pursue an Information Security Certification?</strong></p>\n<ul>\n<li>Improve policies and procedures by addressing critical security related processes and controls </li>\n<li>Minimizes the actual and perceived impact of data breaches </li>\n<li>Objective verification that there are controls on the security risks related to Information Assets </li>\n</ul>\n<p>At a high level, the ISMS will help minimize the costs of security incidents and enhance your brand. 
In more detail, the ISMS will be used to: </p>\n<ul>\n<li>systematically assess the organization's information risks in order to establish and prioritize its security requirements, primarily in terms of the need to protect the confidentiality, integrity and availability of information </li>\n<li>design a suite of security controls, both technical and non-technical in nature, to address any risks deemed unacceptable by management </li>\n<li>ensure that security controls satisfy compliance obligations under applicable laws, regulations and contracts (such as privacy laws, PCI and HIPAA) </li>\n<li>operate, manage and maintain the security controls </li>\n<li>monitor and continuously improve the protection of valuable information assets, for example updating the controls when the risks change (e.g. responding to novel hacker attacks or frauds, ideally in advance, thereby preventing actual incidents) </li>\n</ul>\n<p><strong>Information Security Focus Areas</strong></p>\n<ul>\n<li>What is the proper scope for the organization? </li>\n<li>What are the applicable areas and controls? </li>\n<li>Are the proper policies &amp; procedures documented? </li>\n<li>Is the organization living these values? </li>\n</ul>\n<p><strong>What are the Outcomes?</strong></p>\n<ul>\n<li>Improved InfoSec policies and procedures </li>\n<li>Confirmation of the implementation of Incident and Risk Management </li>\n<li>Completion of the Asset and Risk Register </li>\n<li>Implementation of an Information Security Management System (ISMS) for your scope </li>\n<li>Preparation for the independent certification auditor </li>\n<li>Increased trust from customers and partners. 
</li>\n</ul>\n<p><strong>Information Security Certification Preparation Project</strong></p>\n<p><img src=\"/images/blog/info-sec-cert-prep-prep-project.png\" alt=\"Information Security Certification Preparation Project\" /></p>\n<p><strong>Key Project Activities</strong></p>\n<ul>\n<li>Define Certification Scope </li>\n<li>Perform Gap Assessment against the relevant standard (SOC-2, ISO 27001) </li>\n<li>Identify Documentation Requirements </li>\n<li>Identify Evidence Requirements </li>\n<li>Develop New Documents required for certification </li>\n<li>Perform Impact Assessment </li>\n<li>Maintain Data Flow Diagrams </li>\n<li>Maintain Risk Register </li>\n<li>Prepare for Pre-Certification Audit </li>\n<li>Remediate findings from Pre-Certification Audit </li>\n<li>Prepare for Stage 1 and Stage 2 Audits </li>\n<li>Obtain Standards Body Certification or Audited Report </li>\n</ul>\n<p>FP Complete has extensive experience in the preparation of SOC-2 and ISO 27001 certifications, as well as many other security certifications. Contact us if we can help your organization. </p>\n",
"permalink": "https://www.fpcomplete.com/blog/pathway-to-information-security-management-and-certification/",
"slug": "pathway-to-information-security-management-and-certification",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "The Pathway to Information Security Management and Certification ",
"description": "Information security is a complex area to handle well.",
"updated": null,
"date": "2021-06-10",
"year": 2021,
"month": 6,
"day": 10,
"taxonomies": {
"categories": [
"IT Compliance"
],
"tags": [
"compliance"
]
},
"extra": {
"author": "Jeffrey Silver",
"blogimage": "/images/blog-listing/distributed-ledger.png",
"image": "images/blog/thumbs/intermediate-training-courses.png"
},
"path": "blog/pathway-to-information-security-management-and-certification/",
"components": [
"blog",
"pathway-to-information-security-management-and-certification"
],
"summary": null,
"toc": [],
"word_count": 505,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
}
]
},
{
"name": "blockchain",
"slug": "blockchain",
"permalink": "https://www.fpcomplete.com/categories/blockchain/",
"pages": [
{
"relative_path": "blog/blockchain-technology-smart-contracts-save-money.md",
"content": "<p>With the cost of goods only going up and the increased scarcity of quality workers and resources, saving money and time in your day-to-day business operations is paramount. Therefore, adopting blockchain technology into your traditional day-to-day business operations is key to giving you back valuable time, saving you money, creating less dependency on workers, and modernizing your business operations for good. There are many ways blockchain technology can help you and your business save money and resources, but one profound way is through the use of smart contracts.</p>\n<p>Smart contracts are software contracts that execute predefined logic based on the parameters coded into the system. Smart contracts are digital agreements that automatically run transactions between parties, increasing speed, accuracy, and integrity in payment and performance. In addition, smart contracts are legally enforceable if they comply with contract law. </p>\n<p>The smart contract aims to provide transactional security while reducing surplus transaction costs. In addition, smart contracts can automate the execution of an agreement so that all parties are immediately sure of the outcome without the need for intermediary involvement. For example, instead of hiring a department to handle contract review and purchasing, your business can run smart contracts that enforce the same procedures more effectively at substantial cost savings. In addition, your business can use smart contracts to manage your corporate documents, regulatory compliance procedures, cross-border financial transactions, real property ownership, supply management, and the chronology of ownership of your business IP, materials, and licenses. </p>\n<p>Finance and banking are prime examples of industries that have benefited from smart contract applications. Smart contracts track corporate spending, stock trading, investing, lending, and borrowing. 
Smart contracts are also used in corporate mergers and acquisitions, and frequently to configure or reconfigure entire corporate structures. </p>\n<p>Below is an illustration of how smart contracts work:</p>\n<p><img src=\"/images/blog/how-smart-contracts-work.png\" alt=\"How smart contracts work\" /></p>\n<p>As you can imagine, blockchain technology and smart contracts are still developing. They have some roadblocks and implementation challenges. Still, these pitfalls do not take away from the many benefits blockchain technology offers to businesses needing to save money and resources.</p>\n<p>FP Complete Corporation has direct experience working with blockchain technologies <a href=\"https://www.fpcomplete.com/blockchain/\">(learn more here)</a>, most recently the <a href=\"https://www.fpcomplete.com/blog/levana-nft-launch/\">Levana NFT launch</a>, which relied on blockchain technology written by one of our engineers. Previously, one of our senior engineers released a video titled “<a href=\"https://www.youtube.com/watch?v=jngHo0Gzk6s\">How to be Successful at Blockchain Development</a>,” highlighting our expertise in this area in detail. If you want to learn more about how we can help you with blockchain technology, please <a href=\"https://www.fpcomplete.com/contact-us/\">contact us today</a>.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/blockchain-technology-smart-contracts-save-money/",
"slug": "blockchain-technology-smart-contracts-save-money",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Blockchain Technology, Smart Contracts, and Your Company",
"description": "How Blockchain Technology and Smart Contracts Can Help You and Your Company Save Money and Resources Now!",
"updated": null,
"date": "2022-01-16",
"year": 2022,
"month": 1,
"day": 16,
"taxonomies": {
"tags": [
"blockchain",
"smart contracts"
],
"categories": [
"blockchain",
"smart contracts"
]
},
"extra": {
"author": "FP Complete",
"keywords": "blockchain, NFT, cryptocurrency, smart contracts",
"blogimage": "/images/blog-listing/blockchain.png"
},
"path": "blog/blockchain-technology-smart-contracts-save-money/",
"components": [
"blog",
"blockchain-technology-smart-contracts-save-money"
],
"summary": null,
"toc": [],
"word_count": 442,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/hedera-platform-audit.md",
"content": "<p><strong>FP Complete Publishes Results of Independent 3rd Party Audits of Hedera Platform and New Hedera Token Service</strong></p>\n<p><em>FP Complete Corporation development specialists conducted a comprehensive review of Hedera's code and technical documentation</em></p>\n<p><strong>Zug, Switzerland – February 9, 2021 –</strong> As part of its goal to deliver\ntransparency to the development community,\n<a href=\"http://www.hedera.com/\">Hedera Hashgraph</a>, the enterprise-grade public\ndistributed ledger, engaged FP Complete, an IT engineering specialist,\nto perform an independent audit of the engineering work by Hedera's\ndevelopment team on the Hedera platform, including the new Hedera Token\nService. The full completed audit reports can be found at:</p>\n<ul>\n<li><a href=\"https://hedera.com/fp-complete-hedera\">Hedera Platform</a></li>\n<li><a href=\"https://hedera.com/fp-complete-hts\">Hedera Token Service</a></li>\n</ul>\n<p>Founded by the former head of Microsoft's own in-house engineering\ntools, Aaron Contorer, FP Complete Corporation is the world's leading\nsupplier of commercial-grade tools and engineering for advanced\nprogramming languages, distributed systems, blockchain, and DevOps\ntechnologies. FP Complete performed an in-depth code review to examine\nthe Hedera software quality, focusing on robustness, security, and\naudibility.</p>\n<p>FP Complete also completed a review of Hedera's code and technical\ndocumentation, enabling the development team to use this ongoing work to\noptimize the engineering methods, tools, and coding standards used to\nimplement the Hedera network. The publication of these results\ndemonstrates the Company's commitment to technical rigor and\ntransparency.</p>\n<p>Dr. 
Leemon Baird, co-founder and Chief Scientist of Hedera Hashgraph,\ncomments: "These third-party audits by FP Complete illustrate our\ncommitment to high-quality engineering, project transparency, and a\nrigorous and independent auditing process. We are pleased to be able to\npublish these audit results today and look forward to sharing additional\naudit findings as they are completed in the future."</p>\n<p>Wesley Crook, CEO of FP Complete, comments: "We have worked with the\nHedera team to conduct a third-party audit of their codebase to assess\nsecurity, stability, and correctness. Our team of software, blockchain,\nand network architecture experts has provided feedback throughout the\ndevelopment process."</p>\n<hr />\n<h2 id=\"about-hedera\">About Hedera</h2>\n<p>Hedera is a decentralized enterprise-grade public network on which\nanyone can build secure, fair applications with near real-time finality.\nThe platform is owned and governed by a council of the world's leading\norganizations including Avery Dennison, Boeing, Dentons, Deutsche\nTelekom, DLA Piper, eftpos, FIS (WorldPay), Google, IBM, LG Electronics,\nMagalu, Nomura, Swirlds, Tata Communications, University College London\n(UCL), Wipro, and Zain Group.</p>\n<p>For more information, visit\nhttps://www.hedera.com, or follow us on Twitter\nat <a href=\"https://twitter.com/hedera\">@hedera</a>, Telegram at\n<a href=\"https://t.me/hederahashgraph\">t.me/hederahashgraph</a>, or Discord at\n<a href=\"https://www.hedera.com/discord\">www.hedera.com/discord</a>. The Hedera\nwhitepaper can be found at\n<a href=\"https://hedera.com/papers\">www.hedera.com/papers</a>.</p>\n<h2 id=\"about-fp-complete\">About FP Complete</h2>\n<p>FP Complete is an advanced server-side software development and DevOps\nconsulting Company. 
We specialize in helping FinTech companies solve\ntheir unique set of problems related to data and information integrity,\ndata security, architectural design, systems integration, and regulatory\ncompliance. We are recognized worldwide for our contributions to the\nfunctional programming community using the Haskell programming language.\nOur people and processes have helped countless companies increase the\nvelocity and quality of their delivered software products. From Fortune\n500 biotech companies to small blockchain FinTech software companies, we\nhave solved unique and complicated problems with expert results.</p>\n<p><a href=\"https://www.fpcomplete.com/\">https://www.fpcomplete.com/</a></p>\n",
"permalink": "https://www.fpcomplete.com/blog/hedera-platform-audit/",
"slug": "hedera-platform-audit",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Hedera Platform Audit",
"description": "FP Complete has conducted a third party audit of the Hedera Platform and New Hedera Token Service. Check out the press release for more information.",
"updated": null,
"date": "2021-02-09",
"year": 2021,
"month": 2,
"day": 9,
"taxonomies": {
"categories": [
"blockchain"
],
"tags": [
"blockchain"
]
},
"extra": {
"author": "FP Complete Staff",
"blogimage": "/images/blog-listing/distributed-ledger.png",
"image": "images/blog/hedera-platform-audit.png"
},
"path": "blog/hedera-platform-audit/",
"components": [
"blog",
"hedera-platform-audit"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "about-hedera",
"permalink": "https://www.fpcomplete.com/blog/hedera-platform-audit/#about-hedera",
"title": "About Hedera",
"children": []
},
{
"level": 2,
"id": "about-fp-complete",
"permalink": "https://www.fpcomplete.com/blog/hedera-platform-audit/#about-fp-complete",
"title": "About FP Complete",
"children": []
}
],
"word_count": 538,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
}
]
},
{
"name": "devops",
"slug": "devops",
"permalink": "https://www.fpcomplete.com/categories/devops/",
"pages": [
{
"relative_path": "blog/partnership-portworx-pure-storage.md",
"content": "<p><strong>FP Complete Corporation Announces Partnership with Portworx by Pure Storage to Streamline World-Class DevOps Consulting Services with State-of-the-Art, End-To-End Storage and Data Management Solution for Kubernetes Projects.</strong></p>\n<p><strong>Charlotte, North Carolina (August 31, 2022)</strong> – FP Complete Corporation, a global technology partner that specializes in DevSecOps, Cloud Native Computing, and Advanced Server-Side Programming Languages today announced that it has partnered with Portworx by Pure Storage to bring an integrated solution to customers seeking DevSecOps consulting services for the management of persistent storage, data protection, disaster recovery, data security, and hybrid data migrations.</p>\n<p>The partnership between FP Complete Corporation and Portworx will be integral in providing FP Complete's DevSecOps and Cloud Enablement clients with a data storage platform designed to run in a container that supports any cloud physical storage on any Kubernetes distribution.</p>\n<p>Portworx Enterprise gets right to the heart of what developers and Kubernetes admins want: data to behave like a cloud service. Developers and Admins wish to request Storage based on their requirements (capacity, performance level, resiliency level, security level, access, protection level, and more) and let the data management layer figure out all the details. Portworx PX-Backup adds enterprise-grade point-and-click backup and recovery for all applications running on Kubernetes, even if they are stateless.</p>\n<p>Portworx shortens development timelines and headaches for companies moving from on-prem to cloud. 
In addition, the integration between FP Complete Corporation and Portworx allows the easy exchange of best practices information, so design and storage run in parallel.</p>\n<p>Gartner predicts that by 2025, more than 85% of global organizations will be running containerized applications in production, up from less than 35% in 2019<sup>1</sup>. As container adoption increases and more applications are being deployed in the enterprise, these organizations want more options to manage stateful and persistent data associated with these modern applications.</p>\n<p>"It is my pleasure to announce that Pure Storage can now be utilized by our world-class engineers needing a fully integrated, end-to-end storage and data management solution for our DevSecOps clients with complicated Kubernetes projects. Pure Storage is known globally for its strength in the storage industry, and this partnership offers strong support for our business," said Wes Crook, CEO of FP Complete Corporation.</p>\n<p>“There can be zero doubt that most new cloud-native apps are built on containers and orchestrated by Kubernetes. Unfortunately, the early development on containers resulted in lots of data access and availability issues due to a lack of enterprise-grade persistent storage data management and low data visibility. 
With Portworx and the aid of Kubernetes experts like FP Complete, we can offer customers a rock-solid, enterprise-class, cloud-native development platform that delivers end-to-end application and data lifecycle management that significantly lowers the risks and costs of operating cloud-native application infrastructure,” said Venkat Ramakrishnan, VP, Engineering, Cloud Native Business Unit, Pure Storage.</p>\n<div><u><strong>About FP Complete Corporation</strong></u></div>\n<p>Founded in 2012 by Aaron Contorer, former Microsoft executive, FP Complete Corporation is known globally as the one-stop, full-stack technology shop that delivers agile, reliable, repeatable, and highly secure software. In 2019, we launched our flagship platform, Kube360®, which is a fully managed enterprise Kubernetes-based DevOps ecosystem. With Kube360, FP Complete is now well positioned to provide a complete suite of products and solutions to our clients on their journey towards cloudification, containerization, and DevOps best practices. The Company's mission is to deliver superior software engineering to build great software for our clients. FP Complete Corporation serves over 200 global clients and employs over 70 people worldwide. It has won many awards and made the Inc. 5000 list in 2020 as one of the fastest-growing private companies in America. For more information about FP Complete Corporation, visit its website at <a href=\"https://www.fpcomplete.com/\">www.fpcomplete.com</a>.</p>\n<p><sup>1</sup> <small>Arun Chandrasekaran, <a href=\"https://www.gartner.com/en/documents/3988395\">Best Practices for Running Containers and Kubernetes in Production</a>, Gartner, August 2020</small></p>\n",
"permalink": "https://www.fpcomplete.com/blog/partnership-portworx-pure-storage/",
"slug": "partnership-portworx-pure-storage",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "FP Complete Corporation Announces Partnership with Portworx by Pure Storage",
"description": "FP Complete Corporation Announces Partnership with Portworx by Pure Storage to Streamline World-Class DevOps Consulting Services with State-of-the-Art, End-To-End Storage and Data Management Solution for Kubernetes Projects.",
"updated": null,
"date": "2022-08-29",
"year": 2022,
"month": 8,
"day": 29,
"taxonomies": {
"tags": [
"devops",
"insights"
],
"categories": [
"devops"
]
},
"extra": {
"author": "FP Complete Staff",
"keywords": "Portworx Pure Storage",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "blog/partnership-portworx-pure-storage/",
"components": [
"blog",
"partnership-portworx-pure-storage"
],
"summary": null,
"toc": [],
"word_count": 669,
"reading_time": 4,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/canary-deployment-istio.md",
"content": "<p>Istio is a service mesh that transparently adds various capabilities\nlike observability, traffic management and security to your\ndistributed collection of microservices. It comes with various\nfunctionalities like circuit breaking, granular traffic routing, mTLS\nmanagement, authentication and authorization polices, ability to do\nchaos testing etc.</p>\n<p>In this post, we will explore on how to do canary deployments of our\napplication using Istio.</p>\n<h2 id=\"what-is-canary-deployment\">What is Canary Deployment</h2>\n<p>Using Canary deployment strategy, you release a new version of your\napplication to a small percentage of the production traffic. And then\nyou monitor your application and gradually expand its percentage of\nthe production traffic.</p>\n<p>For a canary deployment to be shipped successfully, you need good\nmonitoring in place. Based on your exact use case, you might want to\ncheck various metrics like performance, user experience or <a href=\"https://en.wikipedia.org/wiki/Bounce_rate\">bounce\nrate</a>.</p>\n<h2 id=\"pre-requisites\">Pre requisites</h2>\n<p>This post assumes that following components are already provisioned or\ninstalled:</p>\n<ul>\n<li>Kubernetes cluster</li>\n<li>Istio</li>\n<li>cert-manager: (Optional, required if you want to provision TLS\ncertificates)</li>\n<li>Kiali (Optional)</li>\n</ul>\n<h2 id=\"istio-concepts\">Istio Concepts</h2>\n<p>For this specific deployment, we will be using three specific features\nof Istio's traffic management capabilities:</p>\n<ul>\n<li><a href=\"https://istio.io/latest/docs/concepts/traffic-management/#virtual-services\">Virtual Service</a>: Virtual Service describes how traffic flows to\na set of destinations. Using Virtual Service you can configure how\nto route the requests to a service within the mesh. 
It contains a\nset of routing rules that are evaluated, and then a decision is\nmade on where to route the incoming request (or even reject it if no\nroutes match).</li>\n<li><a href=\"https://istio.io/latest/docs/concepts/traffic-management/#gateways\">Gateway</a>: Gateways are used to manage your inbound and outbound\ntraffic. They allow you to specify the virtual hosts and their\nassociated ports that need to be opened to allow traffic\ninto the cluster.</li>\n<li><a href=\"https://istio.io/latest/docs/reference/config/networking/destination-rule/\">Destination Rule</a>: This is used to configure how a client in\nthe mesh interacts with your service. It's used for configuring the TLS\nsettings of <a href=\"https://istio.io/latest/docs/reference/config/networking/sidecar/\">your sidecar</a>, splitting your service into subsets,\nsetting the load-balancing strategy for your clients, and so on.</li>\n</ul>\n<p>The Destination Rule plays a major role in a canary deployment, as\nit's what we will use to split the service into subsets and\nroute traffic accordingly.</p>\n<h2 id=\"application-deployment\">Application deployment</h2>\n<p>For our canary deployment, we will be using the following two versions of\nthe application:</p>\n<ul>\n<li><a href=\"https://httpbin.org/\">httpbin.org</a>: This will be version one (v1) of our\napplication. This is the application that's already deployed, and\nour aim is to partially replace it with a newer version of the\napplication.</li>\n<li><a href=\"https://github.com/psibi/tornado-websocket-example\">websocket app</a>: This will be version two (v2) of the\napplication, which has to be gradually introduced.</li>\n</ul>\n<p>Note that in the real world, both versions would share\nthe same code. For our example, we are just taking two arbitrary\napplications to make testing easier.</p>\n<p>Our assumption is that we already have version one of our application\ndeployed. So let's deploy that initially. 
We will write our usual\nKubernetes resources for it. The deployment manifest for the version\none application:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">apiVersion</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">apps/v1\nkind</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">Deployment\nmetadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n namespace</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">canary\nspec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">replicas</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">1\n </span><span style=\"color:#268bd2;\">selector</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">matchLabels</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">app</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n version</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">v1\n template</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">metadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">labels</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">app</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n version</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">v1\n spec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">containers</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">image</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">docker.io/kennethreitz/httpbin\n imagePullPolicy</span><span 
style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">IfNotPresent\n name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n ports</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">containerPort</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">80\n</span></code></pre>\n<p>And let's create a corresponding service for it:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">apiVersion</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">v1\nkind</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">Service\nmetadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">labels</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">app</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n namespace</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">canary\nspec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">ports</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n port</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">8000\n </span><span style=\"color:#268bd2;\">targetPort</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">80\n </span><span style=\"color:#657b83;\">- </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">tornado\n port</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">8001\n </span><span style=\"color:#268bd2;\">targetPort</span><span style=\"color:#657b83;\">: </span><span 
style=\"color:#6c71c4;\">8888\n </span><span style=\"color:#268bd2;\">selector</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">app</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n type</span><span style=\"color:#657b83;\">: </span><span style=\"color:#2aa198;\">ClusterIP\n</span></code></pre>\n<p>SSL certificate for the application which will use cert-manager:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">apiVersion</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">cert-manager.io/v1\nkind</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">Certificate\nmetadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin-ingress-cert\n namespace</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">istio-system\nspec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">secretName</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin-ingress-cert\n issuerRef</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">letsencrypt-dns-prod\n kind</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">ClusterIssuer\n dnsNames</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#2aa198;\">canary.33test.dev-sandbox.fpcomplete.com\n</span></code></pre>\n<p>And the Istio resources for the application:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">apiVersion</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">networking.istio.io/v1alpha3\nkind</span><span style=\"color:#657b83;\">: </span><span 
style=\"color:#268bd2;\">Gateway\nmetadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin-gateway\n namespace</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">canary\nspec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">selector</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">istio</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">ingressgateway\n servers</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">hosts</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">canary.33test.dev-sandbox.fpcomplete.com\n port</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">https-httpbin\n number</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">443\n </span><span style=\"color:#268bd2;\">protocol</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">HTTPS\n tls</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">credentialName</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin-ingress-cert\n mode</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">SIMPLE\n </span><span style=\"color:#657b83;\">- </span><span style=\"color:#268bd2;\">hosts</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">canary.33test.dev-sandbox.fpcomplete.com\n port</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">http-httpbin\n number</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">80\n </span><span 
style=\"color:#268bd2;\">protocol</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">HTTP\n tls</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">httpsRedirect</span><span style=\"color:#657b83;\">: </span><span style=\"color:#b58900;\">true\n</span><span style=\"color:#657b83;\">---\n</span><span style=\"color:#268bd2;\">apiVersion</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">networking.istio.io/v1alpha3\nkind</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">VirtualService\nmetadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n namespace</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">canary\nspec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">gateways</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">httpbin-gateway\n hosts</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">canary.33test.dev-sandbox.fpcomplete.com\n http</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">route</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">destination</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">host</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin.canary.svc.cluster.local\n port</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">number</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">8000\n</span></code></pre>\n<p>The above resources define the gateway and virtual service. 
You can see\nthat we are using TLS here and redirecting HTTP to HTTPS.</p>\n<p>We also have to make sure that the namespace has Istio injection enabled:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">apiVersion</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">v1\nkind</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">Namespace\nmetadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">labels</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">app.kubernetes.io/component</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n istio-injection</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">enabled\n name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#2aa198;\">canary\n</span></code></pre>\n<p>I have the above set of k8s resources managed via\n<a href=\"https://kustomize.io/\">kustomize</a>. 
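</p>\n<p>For reference, the kustomize overlay's <code>kustomization.yaml</code> tying these resources together might look roughly like this (a sketch only; the individual file names are illustrative, not taken from the actual repository):</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>apiVersion: kustomize.config.k8s.io/v1beta1\nkind: Kustomization\nresources:\n  - namespace.yaml\n  - service.yaml\n  - deployment.yaml\n  - gateway.yaml\n  - virtualservice.yaml\n</code></pre>\n<p>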
Let's deploy them to get the initial environment which\nconsists of only the v1 (httpbin) application:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#b58900;\">❯</span><span style=\"color:#657b83;\"> kustomize build overlays/istio_canary > istio.yaml\n</span><span style=\"color:#b58900;\">❯</span><span style=\"color:#657b83;\"> kubectl apply</span><span style=\"color:#268bd2;\"> -f</span><span style=\"color:#657b83;\"> istio.yaml\n</span><span style=\"color:#b58900;\">namespace/canary</span><span style=\"color:#657b83;\"> created\n</span><span style=\"color:#b58900;\">service/httpbin</span><span style=\"color:#657b83;\"> created\n</span><span style=\"color:#b58900;\">deployment.apps/httpbin</span><span style=\"color:#657b83;\"> created\n</span><span style=\"color:#b58900;\">gateway.networking.istio.io/httpbin-gateway</span><span style=\"color:#657b83;\"> created\n</span><span style=\"color:#b58900;\">virtualservice.networking.istio.io/httpbin</span><span style=\"color:#657b83;\"> created\n</span><span style=\"color:#b58900;\">❯</span><span style=\"color:#657b83;\"> kubectl apply</span><span style=\"color:#268bd2;\"> -f</span><span style=\"color:#657b83;\"> overlays/istio_canary/certificate.yaml\n</span><span style=\"color:#b58900;\">certificate.cert-manager.io/httpbin-ingress-cert</span><span style=\"color:#657b83;\"> created\n</span></code></pre>\n<p>Now I can go and verify in my browser that my application is actually\nup and running:</p>\n<p><img src=\"/images/istio_httpbin_application.png\" alt=\"httpbin: Version 1 application\" /></p>\n<p>Now comes the interesting part. We have to deploy version two of\nour application and make sure around 20% of our traffic goes to\nit. 
Let's write the deployment manifest for it:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">apiVersion</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">apps/v1\nkind</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">Deployment\nmetadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin-v2\n namespace</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">canary\nspec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">replicas</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">1\n </span><span style=\"color:#268bd2;\">selector</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">matchLabels</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">app</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n version</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">v2\n template</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">metadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">labels</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">app</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n version</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">v2\n spec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">containers</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">image</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">psibi/tornado-websocket:v0.3\n imagePullPolicy</span><span style=\"color:#657b83;\">: </span><span 
style=\"color:#268bd2;\">IfNotPresent\n name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">tornado\n ports</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">containerPort</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">8888\n</span></code></pre>\n<p>And now the destination rule to split the service:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">apiVersion</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">networking.istio.io/v1alpha3\nkind</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">DestinationRule\nmetadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n namespace</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">canary\nspec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">host</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin.canary.svc.cluster.local\n subsets</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">labels</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">version</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">v1\n name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">v1\n </span><span style=\"color:#657b83;\">- </span><span style=\"color:#268bd2;\">labels</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">version</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">v2\n name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#2aa198;\">v2\n</span></code></pre>\n<p>And finally let's modify the virtual service to split 20% of the\ntraffic to the newer version:</p>\n<pre 
style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">apiVersion</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">networking.istio.io/v1alpha3\nkind</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">VirtualService\nmetadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin\n namespace</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">canary\nspec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">gateways</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">httpbin-gateway\n hosts</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">canary.33test.dev-sandbox.fpcomplete.com\n http</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">route</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">destination</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">host</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin.canary.svc.cluster.local\n port</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">number</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">8000\n </span><span style=\"color:#268bd2;\">subset</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">v1\n weight</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">80\n </span><span style=\"color:#657b83;\">- </span><span style=\"color:#268bd2;\">destination</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">host</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">httpbin.canary.svc.cluster.local\n port</span><span style=\"color:#657b83;\">:\n 
</span><span style=\"color:#268bd2;\">number</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">8001\n </span><span style=\"color:#268bd2;\">subset</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">v2\n weight</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">20\n</span></code></pre>\n<p>And now if you go again to the browser and refresh it a number of\ntimes (note that we route only 20% of the traffic to the new\ndeployment), you will see the new application eventually:</p>\n<p><img src=\"/images/istio_tornado_application.png\" alt=\"websocket: Version 2 application\" /></p>\n<h2 id=\"testing-deployment\">Testing deployment</h2>\n<p>Let's do around 10 curl requests to our endpoint to see how the\ntraffic is getting routed:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#b58900;\">❯</span><span style=\"color:#657b83;\"> seq 10 </span><span style=\"color:#859900;\">| </span><span style=\"color:#b58900;\">xargs</span><span style=\"color:#268bd2;\"> -Iz</span><span style=\"color:#657b83;\"> curl</span><span style=\"color:#268bd2;\"> -s</span><span style=\"color:#657b83;\"> https://canary.33test.dev-sandbox.fpcomplete.com </span><span style=\"color:#859900;\">| </span><span style=\"color:#b58900;\">rg </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\"><title></span><span style=\"color:#839496;\">"\n </span><span style=\"color:#657b83;\"><title>httpbin.org</title>\n <title>httpbin.org</title>\n <title>httpbin.org</title>\n<title>tornado </span><span style=\"color:#b58900;\">WebSocket</span><span style=\"color:#657b83;\"> example</title>\n <title>httpbin.org</title>\n <title>httpbin.org</title>\n <title>httpbin.org</title>\n <title>httpbin.org</title>\n <title>httpbin.org</title>\n<title>tornado WebSocket example</title>\n</span></code></pre>\n<p>And you can confirm that out of the 10 requests, 2 are routed\nto the websocket (v2) 
application. If you have <a href=\"https://kiali.io/\">Kiali</a> deployed,\nyou can even visualize the above traffic flow:</p>\n<p><img src=\"/images/istio_kiali.png\" alt=\"Kiali visualization\" /></p>\n<p>And that summarizes our post on how to achieve canary deployment using\nIstio. While this post shows a basic example, traffic steering and\nrouting is one of the core features of Istio, and it offers various\nways to configure its routing decisions. You can find\nfurther details in the <a href=\"https://istio.io/latest/docs/concepts/traffic-management/#virtual-services\">official docs</a>. You can also use a\ncontroller like <a href=\"https://argoproj.github.io/argo-rollouts/features/traffic-management/istio/\">Argo Rollouts with Istio</a> to perform canary\ndeployments and use additional features like <a href=\"https://argoproj.github.io/argo-rollouts/features/analysis/\">analysis</a> and\n<a href=\"https://argoproj.github.io/argo-rollouts/features/experiment/\">experiment</a>.</p>\n<hr />\n<p>If you're looking for a solid Kubernetes platform, batteries included,\nwith first-class support for Istio, <a href=\"https://www.fpcomplete.com/products/kube360/\">check out Kube360</a>.</p>\n<p>If you liked this article, you may also like:</p>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/blog/istio-mtls-debugging-story/\">An Istio/mutual TLS debugging story</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/rust-kubernetes-windows/\">Deploying Rust with Windows Containers on Kubernetes</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/cloud-vendor-neutrality/\">Cloud Vendor Neutrality</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/\">DevOps for (Skeptical) Developers</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/kube360s-kubernetes-security-focus/\">Secure defaults with Kubernetes Security with Kube360</a></li>\n</ul>\n<div class=\"blog-cta\">\n<p><a 
href=\"https://www.fpcomplete.com/signups/request-a-demo/\"><img src=\"/images/cta/kube360.png\" alt=\"See what Kube360 can do for you\" /></a></p>\n</div>\n",
"permalink": "https://www.fpcomplete.com/blog/canary-deployment-istio/",
"slug": "canary-deployment-istio",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Canary Deployment with Kubernetes and Istio",
"description": "Want to do canary deployments in your Kubernetes cluster? Read up on our recommended step-by-step process",
"updated": null,
"date": "2022-03-24",
"year": 2022,
"month": 3,
"day": 24,
"taxonomies": {
"tags": [
"DevOps",
"istio",
"Kubernetes"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Sibi Prabakaran",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "blog/canary-deployment-istio/",
"components": [
"blog",
"canary-deployment-istio"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "what-is-canary-deployment",
"permalink": "https://www.fpcomplete.com/blog/canary-deployment-istio/#what-is-canary-deployment",
"title": "What is Canary Deployment",
"children": []
},
{
"level": 2,
"id": "pre-requisites",
"permalink": "https://www.fpcomplete.com/blog/canary-deployment-istio/#pre-requisites",
"title": "Pre requisites",
"children": []
},
{
"level": 2,
"id": "istio-concepts",
"permalink": "https://www.fpcomplete.com/blog/canary-deployment-istio/#istio-concepts",
"title": "Istio Concepts",
"children": []
},
{
"level": 2,
"id": "application-deployment",
"permalink": "https://www.fpcomplete.com/blog/canary-deployment-istio/#application-deployment",
"title": "Application deployment",
"children": []
},
{
"level": 2,
"id": "testing-deployment",
"permalink": "https://www.fpcomplete.com/blog/canary-deployment-istio/#testing-deployment",
"title": "Testing deployment",
"children": []
}
],
"word_count": 1384,
"reading_time": 7,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/cloud-native.md",
"content": "<p>You hear "go Cloud-Native," but if you're like many, you wonder, "what does that mean, and how can applying a Cloud-Native strategy help my company's Dev Team be more productive?"\nAt a high level, Cloud-Native architecture means adapting to the many new possibilities—but a very different set of architectural constraints—offered by the cloud compared to traditional on-premises infrastructure.</p>\n<p>Cloud-Native architecture optimizes systems and software for the cloud. This optimization creates an efficient way to utilize the platform by streamlining processes and workflows. This is accomplished by harnessing the cloud's inherent strengths: </p>\n<ul>\n<li>its flexibility; </li>\n<li>on-demand infrastructure; and </li>\n<li>robust managed services. </li>\n</ul>\n<p>Cloud-Native computing couples these strengths with cloud-optimized technologies such as microservices, containers, and continuous delivery. Cloud-Native takes advantage of the cloud's distributed, scalable, and adaptable nature. By doing this, Cloud-Native will maximize your dev team's focus on writing code, reducing operational tasks, creating business value, and keeping your customers happy by building high-impact applications faster, without compromising on quality. You might even think you can't do Cloud-Native without using one of the big cloud providers. This simply isn't true; many of the benefits of Cloud-Native come from its approaches and its emphasis on better tooling around automation.</p>\n<h2 id=\"why-move-to-cloud-native-now\">Why Move to Cloud-Native Now?</h2>\n<p><em>#1 - High-Frequency Software Release</em></p>\n<p>Faster and more frequent updates and new feature releases allow your organization to respond to user needs in near real-time, increasing user retention. For example, new software versions with novel features can be released incrementally and more often as they become available. 
In addition, Cloud-Native makes high-frequency software releases possible via continuous integration (CI) and continuous deployment (CD), where full version commits are no longer needed. Instead, one can modify, test, and commit just a few lines of code continuously and automatically to meet changing customer trends, thereby giving your organization an edge. </p>\n<p><em>#2 - Automatic Software Updates</em></p>\n<p>One of the most valuable Cloud-Native features is automation. For example, updates are deployed automatically without interfering with core applications or the user base. Automated redundancies for infrastructure can automatically move applications between data centers as needed with little to zero human intervention. Even scalability, testing, and resource allocation can be automated. There are many available automation tools in the marketplace, such as FP Complete Corporation's widely accepted tool, <a href=\"https://www.fpcomplete.com/products/kube360/\">Kube360</a>.</p>\n<p><em>#3 - Greater Protection from Software Failures</em></p>\n<p>Isolation of containers is another important Cloud-Native feature. Software failures and bugs can be traced to a specific microservice version, rolled back, or fixed quickly. Software fixes can be tested in isolation without compromising the stability of the entire application. On the other hand, if there's a widespread failure, automation can restore the application to a previous stable state, minimizing downtime. Automated DevOps testing before code goes to production (for example, linting and software scrubbing) drives faster bug detection and resolution, reducing the risk of bugs in production.</p>\n<h2 id=\"wow-cloud-native-seems-perfect-what-s-the-catch\">WOW – Cloud-Native Seems Perfect – What's the Catch?</h2>\n<p>Switching over to Cloud-Native architecture requires a thorough assessment of your existing application setup. 
The biggest question you and your team need to ask before making any moves is, "should our business modernize our current applications, or should we build new applications from scratch and utilize Cloud-Native development practices?"</p>\n<p>If you choose to modernize your existing application, you will save time and money by capitalizing on the cloud's agility, flexibility, and scalability. Your dev team can retain existing application functionality and business logic, re-architect it into a Cloud-Native app, and containerize it to utilize the cloud platform's strengths.</p>\n<p>You can also build a net-new application using Cloud-Native development practices instead of upgrading your legacy applications. Building from scratch may make more sense from a corporate culture, risk management, and regulatory compliance standpoint. You keep running old application code unchanged while developing and phasing in the new platform. Building new applications also allows dev teams to develop applications free from prior architectural constraints, allowing developers to experiment and deliver innovation to users.</p>\n<h2 id=\"three-essential-tools-for-successful-cloud-native-architecture\">Three Essential Tools for Successful Cloud-Native Architecture</h2>\n<p>Whether you decide to create a new Cloud-Native application or modernize your existing ones, your dev team needs to use these three tools for successful implementation of Cloud-Native architecture:</p>\n<ol>\n<li><em>Microservices Architecture</em>. </li>\n</ol>\n<p>A Cloud-Native microservice architecture is considered a "best practice" architectural approach for creating cloud applications because each application is composed of a set of services. Each service runs its own processes and communicates through clearly defined APIs, which provide good foundations for continuous delivery. 
With microservices, ideally each service is independently deployable. This architecture allows each service to be updated independently without interfering with other services. This results in:</p>\n<ul>\n<li>reduced downtime for users; </li>\n<li>simplified troubleshooting; and </li>\n<li>minimized disruptions even when a problem is identified, which allows for high-frequency updates and continuous delivery. </li>\n</ul>\n<ol start=\"2\">\n<li><em>Container-based Infrastructure Platform</em>.</li>\n</ol>\n<p>Now that your microservice architecture is broken down into individual container-based services, the next essential tool is a system to manage all those containers automatically, known as a "container orchestrator." The most widely accepted platform is Kubernetes, an open-source system originally developed at Google and now maintained by the Cloud Native Computing Foundation. It runs containerized applications, controls automated deployment, storage, scaling, scheduling, load balancing, and updates, and monitors containers across clusters of hosts. Kubernetes is supported by all major public cloud providers, including Azure, AWS, Google Cloud Platform, and Oracle Cloud.</p>\n<ol start=\"3\">\n<li><em>CI/CD Pipeline</em>.</li>\n</ol>\n<p>A CI/CD Pipeline is the third essential tool for a cloud-native environment to work seamlessly. Continuous integration and continuous delivery embody a set of operating principles and a collection of practices that allow dev teams to deliver code changes more frequently and reliably. This implementation is known as the CI/CD Pipeline. By automating deployment processes, the CI/CD pipeline will allow your dev team to focus on:</p>\n<ul>\n<li>meeting business requirements; </li>\n<li>code quality; and </li>\n<li>security. \nCI/CD tools preserve the environment-specific parameters that must be included with each delivery. 
CI/CD automation then performs any necessary service calls to web servers, databases, and other services that may require a restart or follow other procedures when applications are deployed.</li>\n</ul>\n<h2 id=\"cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use\">Cloud-Native Isn't Plug & Play – Is there a Comprehensive Tool that my Dev Team Can Use?</h2>\n<p>As you can probably guess, countless tools make up the cloud-native architecture. Unfortunately, these tools are complex, require separate authentication, and frequently do not interact with each other. In essence, you are expected to integrate these cloud tools yourself as a user. We at FP Complete became frustrated with this approach. So, to save time and provide a turn-key solution, we created Kube360. Kube360 puts all necessary tools into one easy-to-use toolbox, accessed via a single sign-on, and operating as a fully integrated environment. Kube360 combines best practices, technologies, and processes into one complete package, and Kube360 has been proven an effective tool at multiple customer site deployments. In addition, Kube360 supports multiple cloud providers and on-premise infrastructure. Kube360 is vendor agnostic, fully customizable, and has no vendor lock-in.</p>\n<p><strong>Kube360 - Centralized Management</strong>. Kube360 employs centralized management, which increases your dev team's productivity. Increased Dev Team productivity will happen through:</p>\n<ul>\n<li>single-sign-on functionality </li>\n<li>speed-up of installation and setup</li>\n<li>Quick access to all tools</li>\n<li>Automation of logs, backups, and alerts</li>\n</ul>\n<p>This simplified administration hides frequent login complexities and allows single-sign-on through existing company identity management. Kube360 also streamlines tool authentication and access, eliminating many standard security holes. 
In the background, Kube360 automatically runs everyday tasks such as backups, log aggregation, and alerts.</p>\n<p><strong>Kube360 - Automated Features</strong>. Kube360's automated features include:</p>\n<ul>\n<li>automatic backups of the etcd config;</li>\n<li>log aggregation and indexing of all services; and</li>\n<li>integrated monitoring and alert framework.</li>\n</ul>\n<p><strong>Kube360 - Kubernetes Tooling Features</strong>. Kube360 simplifies Kubernetes management and allows you to take advantage of many cloud-native features such as:</p>\n<ul>\n<li>autoscaling, to stay cost-efficient with growing and shrinking demands on systems;</li>\n<li>high availability;</li>\n<li>health checks; and</li>\n<li>integrated secrets management.</li>\n</ul>\n<p><strong>Kube360 - Service Mesh</strong>.</p>\n<ul>\n<li>Mutual TLS based encryption within the cluster</li>\n<li>Tracing tools</li>\n<li>Rerouting traffic</li>\n<li>Canary deployments</li>\n</ul>\n<p><strong>Kube360 - Integration</strong>.</p>\n<ul>\n<li>Integrates into existing AWS & Azure infrastructures</li>\n<li>Deploys into existing VPCs</li>\n<li>Leverages existing subnets</li>\n<li>Communicates with components outside of Kube360</li>\n<li>Supports multiple clusters per organization</li>\n<li>Installed by FP Complete team or customer</li>\n</ul>\n<p>As you can see, Kube360 is one of the most comprehensive tools you can rely on for Cloud-Native architecture. Kube360 is your one-stop, fully integrated enterprise Kubernetes ecosystem. Kube360 standardizes containerization, software deployment, fault tolerance, auto-scaling, auto-healing, and security, by design. Kube360's modular, standardized architecture mitigates proprietary lock-in, high support costs, and obsolescence. In addition, Kube360 delivers a seamless deployment experience for you and your team.\nFind out how Kube360 can make your business more efficient, more reliable, and more secure, all in a fraction of the time. 
Speed up your dev team's productivity - <a href=\"https://www.fpcomplete.com/contact-us/\">Contact us today!</a></p>\n",
"permalink": "https://www.fpcomplete.com/blog/cloud-native/",
"slug": "cloud-native",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Confused about Cloud-Native? Want to speed up your dev team's productivity?",
"description": "Learn about Cloud-Native architecture.",
"updated": null,
"date": "2022-01-17",
"year": 2022,
"month": 1,
"day": 17,
"taxonomies": {
"categories": [
"devsecops",
"devops"
],
"tags": [
"kubernetes",
"cloud native"
]
},
"extra": {
"author": "FP Complete",
"keywords": "devsecops, devops",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "blog/cloud-native/",
"components": [
"blog",
"cloud-native"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "why-move-to-cloud-native-now",
"permalink": "https://www.fpcomplete.com/blog/cloud-native/#why-move-to-cloud-native-now",
"title": "Why Move to Cloud-Native Now?",
"children": []
},
{
"level": 2,
"id": "wow-cloud-native-seems-perfect-what-s-the-catch",
"permalink": "https://www.fpcomplete.com/blog/cloud-native/#wow-cloud-native-seems-perfect-what-s-the-catch",
"title": "WOW – Cloud-Native Seems Perfect – What's the Catch?",
"children": []
},
{
"level": 2,
"id": "three-essential-tools-for-successful-cloud-native-architecture",
"permalink": "https://www.fpcomplete.com/blog/cloud-native/#three-essential-tools-for-successful-cloud-native-architecture",
"title": "Three Essential Tools for Successful Cloud-Native Architecture",
"children": []
},
{
"level": 2,
"id": "cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use",
"permalink": "https://www.fpcomplete.com/blog/cloud-native/#cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use",
"title": "Cloud-Native Isn't Plug & Play – Is there a Comprehensive Tool that my Dev Team Can Use?",
"children": []
}
],
"word_count": 1482,
"reading_time": 8,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/levana-nft-launch.md",
"content": "<p><em>FP Complete Corporation, headquartered in Charlotte, North Carolina, is a global technology company building next-generation software to solve complex problems. We specialize in Server-Side Software Engineering, DevSecOps, Cloud-Native Computing, Distributed Ledger, and Advanced Programming Languages. We have been a full-stack technology partner in business for 10+ years, delivering reliable, repeatable, and highly secure software. Our team of engineers, strategically located in over 13 countries, offers our clients one-stop advanced software engineering no matter their size.</em></p>\n<p>For the past few months, the FP Complete engineering team has been working with <a href=\"https://levana.finance/\">Levana Protocol</a> on a DeFi platform for leveraged assets on the Terra blockchain. But more recently, we've additionally been helping launch the <a href=\"https://meteors.levana.finance/\">Levana Dragons meteor shower</a>. This NFT launch completed in the middle of last week, and to date is the largest single NFT event in the Terra ecosystem. We were very excited to be a part of this. You can read more about the NFT launch itself on <a href=\"https://blog.levana.finance/recap-of-the-levana-meteor-shower-128919193f9b\">the Levana Protocol blog post</a>.</p>\n<p>We received a lot of positive feedback about the smoothness of this launch, which was pretty wonderful feedback to hear. People expressed interest in learning about the technical decisions we made that led to such a smooth event. We also had a few hiccups occur during the launch and post-launch that are worth addressing as well.</p>\n<p>So strap in for a journey involving cloud technologies, DevOps practices, Rust, React, and—of course—Dragons.</p>\n<h2 id=\"overview-of-the-event\">Overview of the event</h2>\n<p>The Levana Dragons meteor shower was an event consisting of 44 separate "showers", or drops during which NFT meteors would be issued. 
Participants in a shower competed by contributing UST (a Terra-specific stablecoin tied to US Dollars) to a specific Terra wallet. Contributions from a single wallet across the shower were aggregated into a single contribution, and contributions of a higher amount resulted in a better meteor. At the least granular level, this meant stratification into legendary, ancient, rare, and common meteors. But higher contributions also led to a greater likelihood of receiving an egg inside your meteor.</p>\n<p>Each shower was separated from the next by 1 hour, and we opened up the site about 24 hours before the first shower occurred. That means the site was active for contributions for about 67 hours straight. Then, following the showers, we needed to mint the actual NFTs, ship them to users' wallets, and open up the "cave" page where users could view their NFTs.</p>\n<p>So all told, this was an event that spanned many days, had lots of bouts of high activity, involved a game with many financial transactions, and in which any downtime, slowness, or poor behavior could result in user frustration or worse. On top of that, given the short timeframe this event was intended to be active, attacks such as DDoS taking down the site could be catastrophic for the success of the showers. And the absolute worst case would be a compromise allowing an attacker to redirect funds to a different wallet.</p>\n<p>All that said, let's dive in.</p>\n<h2 id=\"backend-server\">Backend server</h2>\n<p>A major component of the meteor drop was to track contributions to the destination wallet, and provide high-level data back to users about these activities. This kind of high-level data included the floor prices per shower, the timestamps of the upcoming drops, total meteors a user had acquired so far, and more. All this information is publicly available on the blockchain, and in principle could have been written as frontend logic. 
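</p>
<p><em>Aside:</em> the per-wallet aggregation and tier stratification described above can be sketched roughly as follows. This is an illustrative sketch only: the cutoff amounts are invented, and the real tiers were determined by per-shower floor prices rather than fixed thresholds.</p>

```rust
use std::collections::HashMap;

/// Meteor rarity tiers from the event overview.
#[derive(Debug, PartialEq, Eq)]
enum Tier {
    Legendary,
    Ancient,
    Rare,
    Common,
}

/// Aggregate multiple UST contributions from the same wallet within one
/// shower, then classify each wallet's total into a tier.
fn classify(contributions: &[(&str, u64)]) -> HashMap<String, Tier> {
    let mut totals: HashMap<String, u64> = HashMap::new();
    for (wallet, amount) in contributions {
        *totals.entry(wallet.to_string()).or_insert(0) += amount;
    }
    totals
        .into_iter()
        .map(|(wallet, total)| {
            // Hypothetical cutoffs in UST; the real event used floor prices.
            let tier = if total >= 10_000 {
                Tier::Legendary
            } else if total >= 1_000 {
                Tier::Ancient
            } else if total >= 100 {
                Tier::Rare
            } else {
                Tier::Common
            };
            (wallet, tier)
        })
        .collect()
}

fn main() {
    // Two contributions from the same wallet aggregate before classification.
    let tiers = classify(&[("terra1abc", 600), ("terra1abc", 500), ("terra1xyz", 50)]);
    assert_eq!(tiers["terra1abc"], Tier::Ancient); // 1,100 UST total
    assert_eq!(tiers["terra1xyz"], Tier::Common);
}
```

<p>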
However, the overhead of having every visitor to the site download essentially the entire history of transactions with the destination wallet would have made the site unusable.</p>\n<p>Instead, we implemented a backend web server. We used Rust (with Axum) for this for multiple reasons:</p>\n<ul>\n<li>We're <a href=\"https://www.fpcomplete.com/rust/\">very familiar with Rust</a></li>\n<li>Rust is a high-performance language, and there were serious concerns about needing to withstand surges in traffic and DDoS attacks</li>\n<li>CosmWasm itself heavily leverages Rust, so Rust was already in use on the project</li>\n</ul>\n<p>The server was responsible for keeping track of configuration data (like the shower timestamps and destination wallet address), downloading transaction information from the blockchain (using the <a href=\"https://fcd.terra.dev/apidoc\">Full Client Daemon</a>), and answering queries from the frontend (described next) providing this information.</p>\n<p>We could have kept data in a mutable database like PostgreSQL, but instead we decided to keep all data in memory and download from scratch from the blockchain on each application load. Given the size of the data, these two decisions initially seemed very wise. We'll see some outcomes of this when we analyze performance and look at some of our mistakes below.</p>\n<h2 id=\"react-frontend\">React frontend</h2>\n<p>The primary interface users interacted with was a standard React frontend application. We used TypeScript, but otherwise stuck with generic tools and libraries wherever possible. We didn't end up using any state management libraries or custom CSS systems. Another thing to note is that this frontend is going to expand and evolve over time to include additional functionality around the evolving NFT concept, some of which has already happened, as we'll discuss below.</p>\n<p>One specific item that popped up was mobile optimization. 
Initially, the plan was for the meteor shower site to be desktop-only. After a few beta runs, it became apparent that the majority of users were using mobile devices. Levana is a DAO, and a primary goal is to allow for distributed governance of all products and services, so we felt it vital to be responsive to this community request. Redesigning the interface for mobile and then rewriting the relevant HTML and CSS took up a decent chunk of time.</p>\n<h2 id=\"hosting-infrastructure\">Hosting infrastructure</h2>\n<p>Many DApp sites are exclusively client side, with frontend logic interacting directly with the blockchain and smart contracts. For these kinds of sites, hosting options like Vercel work out very nicely. However, as described above, this application was a combo frontend/backend. Instead of splitting the hosting between two different options, we decided to host both the static frontend app and the dynamic backend app in a single place.</p>\n<p>At FP Complete, we typically use Kubernetes for this kind of deployment. In this case, however, we went with Amazon ECS. This isn't a terribly large delta from our standard Kubernetes deployments, following many of the same patterns: container-based application, rolling deployments with health checks, autoscaling and load balancers, externalized TLS cert management, and centralized monitoring and logging. No major issues there.</p>\n<p>Additionally, to help reduce burden on the backend application and provide a better global experience for the site, we put Amazon CloudFront in front of the application, which allowed caching the static files in data centers around the world.</p>\n<p>Finally, we codified all of this infrastructure using Terraform, our standard tool for Infrastructure as Code.</p>\n<h2 id=\"gitlab\">GitLab</h2>\n<p>GitLab is a standard part of our FP Complete toolchain. We leverage it for internal projects for its code hosting, issue tracking, Docker registry, and CI integration. 
While we will often adapt our tools to match our client needs, in this case we ended up using our standard tool, and things went very well.</p>\n<p>We ended up with a four-stage CI build process:</p>\n<ol>\n<li>Lint and build the frontend code, producing an artifact with the built static assets</li>\n<li>Build a static Rust application from the backend, embedding the static files from (1), and run standard Rust lints (<code>clippy</code> and <code>fmt</code>), producing an artifact with the single-file compiled binary</li>\n<li>Generate a Docker image from the static binary in (2)</li>\n<li>Deploy the new Docker image to either the dev or prod ECS cluster</li>\n</ol>\n<p>Steps (3) and (4) are set up to run only on the <code>master</code> and <code>prod</code> branches. This kind of automated deployment setup made it easy for our distributed team to get changes into a real environment for review quickly. However, it also opened a security hole we needed to address.</p>\n<h2 id=\"aws-lockdown\">AWS lockdown</h2>\n<p>Due to the nature of this application, any kind of downtime during the active showers could have resulted in a lot of egg on our faces and a missed opportunity for the NFT raise. However, there was a far scarier potential outcome. Changing a single config value in production—the destination wallet—would have enabled a nefarious actor to siphon away funds intended for NFTs. This was the primary concern we had during the launch.</p>\n<p>We considered multiple social engineering approaches to the problem, such as advertising to potential users the correct wallet address they should be using. However, we decided that, most likely, users would not check addresses before sending their funds. 
We <em>did</em> set up an emergency "shower halted" page and put in place an on-call team to detect problems and deploy such measures if necessary, but fortunately nothing along those lines occurred.</p>\n<p>However, during the meteor shower, we did institute an AWS account lockdown. This included:</p>\n<ul>\n<li>Switching <a href=\"https://www.fpcomplete.com/products/zehut/\">Zehut</a>, a tool we use for granting temporary AWS credentials, into read-only credentials mode</li>\n<li>Disabling GitLab CI's production credentials, so that GitLab users could not cause a change in prod</li>\n</ul>\n<p>We additionally vetted all other components in the pipeline of DNS resolution, such as the domain name registrar, Route 53, and other AWS services for hosting.</p>\n<p>These are generally good practices, and over time we intend to refine the AWS permissions setup for Levana's AWS account in general. However, this launch was the first time we needed to use AWS for app deployment, and time did not permit a thorough AWS permissions analysis and configuration.</p>\n<h2 id=\"during-the-shower\">During the shower</h2>\n<p>As I just mentioned, during the shower we had an on-call team ready to jump into action and a playbook to address potential issues. Issues essentially fell into three categories:</p>\n<ol>\n<li>Site is slow/down/bad in some way</li>\n<li>Site is actively malicious, serving the wrong content and potentially scamming people</li>\n<li>Some kind of social engineering attack is underway</li>\n</ol>\n<p>The FP Complete team was responsible for observing (1) and (2). I'll be honest that this is not our strong suit. We are a team that typically builds backends and designs DevOps solutions, not an on-call operations team. However, we were the experts in both the DevOps hosting and the app itself. 
Fortunately, no major issues popped up, and the on-call team got to sit on their hands the whole time.</p>\n<p>Out of an abundance of caution, we did take a few extra steps before the showers started to try to ensure we were ready for any attack:</p>\n<ol>\n<li>We bumped the replica count in ECS from 2 desired instances to 5. We had autoscaling in place already, but we wanted extra buffer just to be safe.</li>\n<li>We increased the instance size from 512 CPU units to 2048 CPU units.</li>\n</ol>\n<p>In all of our load testing pre-launch, we had seen that 512 CPU units was sufficient to handle 100,000 requests per second per instance with 99th percentile latency of 3.78ms. With these bumped limits in production, and in the middle of the highest activity on the site, we were very pleased to see the following CPU and memory usage graphs:</p>\n<p><img src=\"/images/blog/levana-nft/cpu.png\" alt=\"CPU usage\" /></p>\n<p><img src=\"/images/blog/levana-nft/memory.png\" alt=\"Memory usage\" /></p>\n<p>This was a nice testament to the power of a Rust-written web service, combined with proper autoscaling and CloudFront caching.</p>\n<h2 id=\"image-creation\">Image creation</h2>\n<p>Alright, let's put the app itself to the side for a second. We knew that, at the end of the shower, we would need to quickly mint NFTs for every wallet that donated more than $8 during a single shower. There are a few problems with this:</p>\n<ul>\n<li>We had no idea how many users would contribute.</li>\n<li>Generating the images is a relatively slow process.</li>\n<li>Making the images available on IPFS—necessary for how NFTs work—was potentially going to be a bottleneck.</li>\n</ul>\n<p>What we ended up doing was writing a Python script that pregenerated 100,000 or so meteor images. We did this generation directly on an Amazon EC2 instance. Then, instead of uploading the images to an IPFS hosting/pinning service, we ran the IPFS daemon directly on this EC2 instance. 
We additionally backed up all the images on S3 for redundant storage. Then we launched a <em>second</em> EC2 instance for redundant IPFS hosting.</p>\n<p>This Python script not only generated the images, but also generated a CSV file mapping each image's Content ID (IPFS address) to various pieces of metadata about the meteor image, such as the meteor body. We'll use this CID/meteor image metadata mapping for correct minting next.</p>\n<p>All in all, this worked just fine. However, there were some hurdles getting there, and we have plans to change this going forward in future stages of the NFT evolution. We'll mention those below.</p>\n<h2 id=\"minting\">Minting</h2>\n<p>Once the shower finished, we needed to get NFTs into user wallets as quickly as possible. That meant we needed two different things:</p>\n<ol>\n<li>All the NFT images on IPFS, which we had.</li>\n<li>A set of CSV files providing the NFTs to be generated, together with all of their metadata and owners.</li>\n</ol>\n<p>The former was handled by the previous step. The latter was handled by additional Rust tooling we wrote, leveraging the same internal libraries we wrote for the backend application. The purpose of this tooling was to:</p>\n<ul>\n<li>Aggregate the total set of contributions from the blockchain.</li>\n<li>Stratify contributions into individual meteors of different rarity.</li>\n<li>Apply the appropriate algorithms to randomly decide which meteors receive an egg and which don't.</li>\n<li>Assign eggs among the meteors.</li>\n<li>Assign additional metadata to the meteors.</li>\n<li>Choose an appropriate and unique meteor image for each meteor based on its assigned metadata. (This relies on the Python-generated CSV file above.)</li>\n</ul>\n<p>This process produced a few different pieces of data:</p>\n<ul>\n<li>CSV files for meteor NFT generation. 
There's nothing secret about these; you could reconstruct them yourself by analyzing the NFT minting on the blockchain.</li>\n<li>The distribution of attributes (such as essence, crystals, distance, etc.) among the meteors for calculating rarity of individual traits. Again, this can be derived easily from public information.</li>\n<li>A file that tracks the meteor/egg mapping. This is the one outcome from this process that is a closely guarded secret.</li>\n</ul>\n<p>This final point is also influencing the design of the next few stages of this project. Specifically, while a smart contract would be the more natural way to interact with NFTs in general, we cannot expose the meteor/egg mapping on the blockchain. Therefore, the "cracking" phase (which will allow users to exchange meteors for their potential eggs) will need to work with another backend application.</p>\n<p>In any event, this metadata-generation process was something we tested multiple times on data from our beta runs, and we were ready to produce the data and send it over to Knowhere.art for minting soon after the shower. I believe users got NFTs in their wallets within 8 hours of the end of the shower, which was a pretty good timeframe overall.</p>\n<h2 id=\"opening-the-cave\">Opening the cave</h2>\n<p>The final step was opening the cave, a new page on the meteor site that allows users to view their meteors. This phase was achieved by updating the configuration values of the backend to include:</p>\n<ul>\n<li>The smart contract address of the NFT collection</li>\n<li>The total number of meteors</li>\n<li>The trait distribution</li>\n</ul>\n<p>Once we switched the config values, the cave opened up, and users were able to access it. Besides pulling the static information mentioned above from the server, all cave page interactions occur fully client side, with the client querying the blockchain using the Terra.js library.</p>\n<p>And that's where we're at today. 
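</p>
<p>To make that concrete, here is a minimal, hypothetical sketch of this kind of config-driven gating. The field names are invented for illustration (not the actual Levana config schema): the backend holds an optional cave config, and the cave endpoint has nothing to serve until operators supply the values.</p>

```rust
/// Static data the cave page needs; field names are illustrative only.
#[derive(Clone)]
struct CaveConfig {
    nft_contract: String,
    total_meteors: u32,
}

/// Server state: `None` until operators push the updated configuration.
struct AppState {
    cave: Option<CaveConfig>,
}

/// A cave query returns the config when the cave is open, and `None`
/// (which a real handler would map to a 404) while it is still closed.
fn cave_info(state: &AppState) -> Option<&CaveConfig> {
    state.cave.as_ref()
}

fn main() {
    let mut state = AppState { cave: None };
    assert!(cave_info(&state).is_none()); // cave still closed

    // Operators update the config; the next query sees the cave open.
    state.cave = Some(CaveConfig {
        nft_contract: "terra1...".into(), // placeholder address
        total_meteors: 10_000,            // invented number
    });
    assert_eq!(cave_info(&state).unwrap().total_meteors, 10_000);
}
```

<p>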
The showers completed, users got their meteors, the cave is open, and we're back to work on implementing the cracking phase of this project. W00t!</p>\n<h2 id=\"problems\">Problems</h2>\n<p>Overall, this project went pretty smoothly in production. However, there were a few gotcha moments worth mentioning.</p>\n<h3 id=\"fcd-rate-limiting\">FCD rate limiting</h3>\n<p>The biggest issue we hit during the showers, and the one that had the biggest potential to break everything, was FCD rate limiting. We'd done extensive testing prior to the real showers on testnet, with many volunteer testers in addition to bots. We never ran into a single example that I'm aware of where rate limiting kicked in.</p>\n<p>However, the real production showers ran into such rate limiting issues about 10 showers into the event. (We'll look at how they manifested in a moment.) There are multiple potential contributing factors for this:</p>\n<ul>\n<li>There was simply far greater activity in the real event than we had tested for.</li>\n<li>Most of our testing was limited to just 10 showers, and the real event went for 44.</li>\n<li>There may be different rate limiting rules for FCD on mainnet versus testnet.</li>\n</ul>\n<p>Whatever the case, we began to notice the rate limiting when we tried to roll out a new feature. We implemented the Telescope functionality, which allowed users to see the historical floor prices in previous showers.</p>\n<p><img src=\"/images/blog/levana-nft/telescope.png\" alt=\"Telescope\" /></p>\n<p>After pushing the change to ECS, however, we noticed that the new deployment didn't go live. The reason was that, during the initial data load process, the new processes were receiving rate limiting responses and dying. We tried fixing this by adding a delay or other kinds of retry logic. However, none of these combinations allowed the application to begin processing requests within ECS's readiness check period. 
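</p>
<p>The kind of retry logic we experimented with is sketched below as a generic exponential-backoff helper. This is a simplified illustration, not the actual application code; the core problem was that any delay long enough to outlast the rate limit also outlasted ECS's readiness window.</p>

```rust
use std::time::Duration;

/// One attempt against a rate-limited API: data, or a "slow down" response.
enum Attempt<T> {
    Ok(T),
    RateLimited,
}

/// Generic retry with exponential backoff: sleep, double the delay, and
/// give up after `max_attempts` tries.
fn retry_with_backoff<T>(
    max_attempts: u32,
    base_delay: Duration,
    mut attempt: impl FnMut() -> Attempt<T>,
) -> Option<T> {
    let mut delay = base_delay;
    for _ in 0..max_attempts {
        if let Attempt::Ok(value) = attempt() {
            return Some(value);
        }
        // Rate limited: wait, then back off further for the next try.
        std::thread::sleep(delay);
        delay *= 2;
    }
    None // still rate limited after all attempts
}

fn main() {
    // Simulate an API that rejects the first two calls.
    let mut calls = 0;
    let result = retry_with_backoff(5, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 {
            Attempt::RateLimited
        } else {
            Attempt::Ok("data")
        }
    });
    assert_eq!(result, Some("data"));
}
```

<p>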
(We could have simply turned off health checks, but that would have opened a new can of worms.)</p>\n<p>This problem was fairly critical. Not being able to roll out new features or bug fixes was worrying. But more troubling was the lack of autohealing. The existing instances continued to run fine, because they only needed to download small amounts of data from FCD to stay up-to-date, and therefore never triggered the rate limiting. But if any of those instances went down, ECS wouldn't be able to replace them with healthy instances.</p>\n<p>Fortunately, we had already written the majority of a caching solution in prior weeks, and had not finished the work because we thought it wasn't a priority. After a few hair-raising hours of effort, we got a solution in place which:</p>\n<ul>\n<li>Saved all transactions to a YAML file (a binary format would have been a better choice, but YAML was the easiest to roll out)</li>\n<li>Uploaded this YAML file to S3</li>\n<li>Ran this save/upload process on a loop, updating every 10 minutes</li>\n<li>Modified the application logic to start off by first downloading the YAML file from S3, and then doing a delta load from there using FCD</li>\n</ul>\n<p>This reduced startup time significantly, bypassed the rate limiting completely, and allowed us to roll out new features and not worry about the entire site going down.</p>\n<h3 id=\"ipfs-hosting\">IPFS hosting</h3>\n<p>FP Complete's DevOps approach is decidedly cloud-focused. For large blob storage, our go-to solution is almost always cloud-based blob storage, which would be S3 in the case of Amazon. We had zero experience with large scale IPFS data hosting prior to this project, which presented a unique challenge.</p>\n<p>As mentioned, we didn't want to go with one of the IPFS pinning services, since the rate limiting may have prevented us from uploading all the pregenerated images. (Rate limiting is beginning to sound like a pattern here...) 
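</p>
<p>Stepping back to the FCD fix for a moment, the snapshot-plus-delta startup described in the previous section can be sketched as follows, with S3 and FCD replaced by in-memory stand-ins and the transaction type heavily simplified.</p>

```rust
/// A transaction as tracked by the backend; fields trimmed for the sketch.
#[derive(Clone, Debug, PartialEq)]
struct Tx {
    height: u64,
    amount: u64,
}

/// Startup path after the fix: begin from a periodically uploaded snapshot
/// (the YAML file on S3), then ask FCD only for transactions newer than
/// the snapshot's tip, instead of replaying the full history.
fn load_with_snapshot(
    snapshot: Vec<Tx>,
    fetch_since: impl Fn(u64) -> Vec<Tx>, // stand-in for the FCD delta query
) -> Vec<Tx> {
    let tip = snapshot.last().map(|tx| tx.height).unwrap_or(0);
    let mut txs = snapshot;
    txs.extend(fetch_since(tip)); // small delta; stays under rate limits
    txs
}

fn main() {
    let snapshot = vec![
        Tx { height: 1, amount: 10 },
        Tx { height: 2, amount: 20 },
    ];
    // Pretend FCD has one transaction newer than the snapshot.
    let fcd = |since: u64| {
        vec![Tx { height: 3, amount: 30 }]
            .into_iter()
            .filter(|tx| tx.height > since)
            .collect::<Vec<_>>()
    };
    let all = load_with_snapshot(snapshot, fcd);
    assert_eq!(all.len(), 3);
}
```

<p>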
Being comfortable with S3, we initially tried hosting the images using <a href=\"https://github.com/ipfs/go-ds-s3\">go-ds-s3</a>, a plugin for the <code>ipfs</code> CLI that uses S3 for storage. We still don't know why, but this never worked correctly for us. Instead, we reverted to storing the raw image data on Amazon EBS, which is more expensive and less durable, but actually worked. To fix the durability issue, we backed up all the raw image files to S3.</p>\n<p>Overall, however, we're not happy with this outcome. The cost for this hosting is relatively high, and we haven't set up a truly fault-tolerant, highly available hosting. At this point, we would like to switch over to an IPFS pinning service, such as Pinata. Now that the images are available on IPFS, issuing API calls to pin those files should be easier than uploading the complete images. We're planning on using this as a framework going forward for other images, namely:</p>\n<ul>\n<li>Generate the raw images on EC2</li>\n<li>Upload for durability to S3</li>\n<li>Run <code>ipfs</code> locally to make the images available on IPFS</li>\n<li>Pin the images to a service like Pinata</li>\n<li>Take down the EC2 instance</li>\n</ul>\n<p>The next issue we ran into was... RATE LIMITING, again. This time, we discovered that Cloudflare's IPFS gateway was rate limiting users on downloading their meteor images, resulting in a situation where users would see only some of their meteors appear in their cave page. 
We solved this one by sticking CloudFront in front of the S3 bucket holding the meteor images and serving from there instead.</p>\n<p>Going forward, when it's available, <a href=\"https://blog.cloudflare.com/introducing-r2-object-storage/\">Cloudflare R2</a> is a promising alternative to the S3+CloudFront offering, due to reduced storage cost and entirely removed bandwidth costs.</p>\n<h2 id=\"lessons-learned\">Lessons learned</h2>\n<p>This project was a great mix of leveraging existing expertise and pairing with some new challenges. Some of the top lessons we learned here were:</p>\n<ol>\n<li>We got a lot of experience with working directly with the LCD and FCD APIs for Terra from Rust code. Previously, with our DeFi work, this almost exclusively sat behind Terra.js usage.</li>\n<li>IPFS was a brand-new topic for us, and we got to play with some pretty extreme cases right off the bat. Understanding the concepts in pinning and gateways will help us immensely with future NFT work.</li>\n<li>Since ECS is a relatively unusual technology for us, we got to learn quite a few of the idiosyncrasies it has versus Kubernetes, our more standard toolchain.</li>\n<li>While rate limiting is a concept we're familiar with and have worked with many times in the past, these particular obstacles were all new, and each of them surprising in different ways. Typically, we would have some simpler workarounds for these rate limiting issues, such as using authenticated requests. Having to solve each problem in such an extreme way was surprising.</li>\n<li>And while we've been involved in blockchain and smart contract work for years, this was our first time working directly with NFTs. This was probably the simplest lesson learned. 
The API for querying the NFT contracts is <a href=\"https://github.com/CosmWasm/cw-nfts/blob/main/packages/cw721/README.md\">fairly straightforward</a>, and represented a small portion of the time spent on this project.</li>\n</ol>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>We're very excited to have been part of such a successful event as the Levana Dragons NFT meteor shower. This was a fun site to work on, with a huge and active user base, and some interesting challenges. It was great to pair some of our standard cloud DevOps practices with common blockchain and smart contract practices. And using Rust brought some great advantages we're quite happy with.</p>\n<p>Going forward, we look forward to continuing to evolve the backend, frontend, and DevOps of this project, just like the NFTs themselves will be evolving. Happy dragon luck to all!</p>\n<p><em>Interested in learning more? Check out these relevant articles</em></p>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/platformengineering/\">FP Complete DevOps homepage</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/\">FP Complete Rust homepage</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/\">Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps, part 1</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">Deploying Rust with Docker and Kubernetes</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/rust-for-devops-tooling/\">Using Rust for DevOps tooling</a></li>\n<li><a href=\"https://www.fpcomplete.com/products/kube360/\">Kube360®</a></li>\n<li><a href=\"https://www.fpcomplete.com/products/zehut/\">Zehut</a></li>\n</ul>\n<p><em>Does this kind of work sound interesting? Consider <a href=\"https://www.fpcomplete.com/jobs/\">applying to work at FP Complete</a>.</em></p>\n",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/",
"slug": "levana-nft-launch",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Levana NFT Launch",
"description": "We were excited to recently help Levana Protocol with their NFT launch. This blog post explains some technical details behind the scenes that allowed this to happen.",
"updated": null,
"date": "2021-11-17",
"year": 2021,
"month": 11,
"day": 17,
"taxonomies": {
"tags": [
"blockchain",
"rust",
"devops"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Wesley Crook",
"keywords": "blockchain, NFT, cryptocurrency, Terra",
"blogimage": "/images/blog-listing/blockchain.png",
"image": "images/blog/thumbs/levana-nft-launch.png"
},
"path": "blog/levana-nft-launch/",
"components": [
"blog",
"levana-nft-launch"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "overview-of-the-event",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#overview-of-the-event",
"title": "Overview of the event",
"children": []
},
{
"level": 2,
"id": "backend-server",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#backend-server",
"title": "Backend server",
"children": []
},
{
"level": 2,
"id": "react-frontend",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#react-frontend",
"title": "React frontend",
"children": []
},
{
"level": 2,
"id": "hosting-infrastructure",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#hosting-infrastructure",
"title": "Hosting infrastructure",
"children": []
},
{
"level": 2,
"id": "gitlab",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#gitlab",
"title": "GitLab",
"children": []
},
{
"level": 2,
"id": "aws-lockdown",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#aws-lockdown",
"title": "AWS lockdown",
"children": []
},
{
"level": 2,
"id": "during-the-shower",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#during-the-shower",
"title": "During the shower",
"children": []
},
{
"level": 2,
"id": "image-creation",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#image-creation",
"title": "Image creation",
"children": []
},
{
"level": 2,
"id": "minting",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#minting",
"title": "Minting",
"children": []
},
{
"level": 2,
"id": "opening-the-cave",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#opening-the-cave",
"title": "Opening the cave",
"children": []
},
{
"level": 2,
"id": "problems",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#problems",
"title": "Problems",
"children": [
{
"level": 3,
"id": "fcd-rate-limiting",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#fcd-rate-limiting",
"title": "FCD rate limiting",
"children": []
},
{
"level": 3,
"id": "ipfs-hosting",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#ipfs-hosting",
"title": "IPFS hosting",
"children": []
}
]
},
{
"level": 2,
"id": "lessons-learned",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#lessons-learned",
"title": "Lessons learned",
"children": []
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://www.fpcomplete.com/blog/levana-nft-launch/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 3966,
"reading_time": 20,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/announcing-amber-ci-secret-tool.md",
"content": "<p>Years ago, <a href=\"https://travis-ci.org/\">Travis CI</a> introduced a method for passing secret values from your repository into the Travis CI system. This method relies on encryption to ensure that anyone can provide a new secret, but only the CI system itself can read those secrets. I've always thought that the Travis approach to secrets was one of the best around, and was disappointed that other CI tools continued to use the more standard "set and update secrets in a web interface" approach. (We'll get into the advantages of the encrypted-secrets approach a bit later.)</p>\n<p>Fast-forward to earlier this year, and for running <a href=\"https://www.fpcomplete.com/products/kube360/\">Kube360</a> deployment jobs, we found that the secrets-in-CI-web-interface approach simply wasn't scaling. So I hacked together a quick script that used GPG and symmetric key encryption to encrypt a <code>secrets.sh</code> file containing the relevant secrets for CI (or, really, CD in this case). This worked, but had some downsides.</p>\n<p>A few weeks ago, I finally bit the bullet and rewrote this ugly script. Instead of using GPG and symmetric key encryption, I used <a href=\"https://lib.rs/crates/sodiumoxide\"><code>sodiumoxide</code></a> and public key encryption. This addressed essentially all the pain points I had with our CD setup. However, this tool was very much custom-built for Kube360.</p>\n<p>Over the weekend, I extracted the general-purpose components of this tool into a <a href=\"https://github.com/fpco/amber\">new open source repository</a>. This blog post is announcing the first public release of Amber, a tool geared at CI/CD systems for better management of secret data over time. There's basic information in that repo to describe how to use the tool. 
This blog post is intended to go into more detail on why I believe encrypted-secrets is a better approach than web-interface-of-secrets.</p>\n<h2 id=\"the-pain-points\">The pain points</h2>\n<p>There are two primary issues with the standard CI secrets management approach:</p>\n<ol>\n<li>It can be tedious to manage a large number of values inside a web interface. I've personally made mistakes copy-pasting values. And if you ever need to run a script locally for testing purposes, copying all the values out each time is an even bigger pain. (More on that below.)</li>\n<li>It's completely reasonable for secret values to change over time. However, there's no evidence of this in the source repository feeding into the CI system. Instead, the changes happen opaquely, and can never be observed as having changed, nor an old build faithfully reproduced with the original values. (This is pretty similar to why we believe <a href=\"https://www.fpcomplete.com/blog/2017/04/ci-build-process-in-code-repository/\">your CI build process should be in your code repository</a>.)</li>\n</ol>\n<p>With encrypted values within a repository, both of these things change. Adding new encrypted values is now a command line call, which for many of us is less tedious and more foolproof than web interfaces. The encrypted secrets are stored in the Git repository itself, so as values change over time, the files provide evidence of that fact. And checking out an old commit from the repository will allow you to rerun a build with exactly the same secrets as when the commit was made.</p>\n<h2 id=\"why-public-key\">Why public key</h2>\n<p>One of the important changes I made from the GPG script mentioned above was public key, instead of symmetric key, encryption. With symmetric key encryption, you use the same key to encrypt and decrypt data. That means that all people who want to encrypt a value into the repository need access to a piece of secret data. 
While encrypting new secret values isn't <em>that</em> common an activity, requiring access to that secret data is best avoided.</p>\n<p>Instead, with public key encryption, we generate a secret key and public key. The public key lives inside the repository, in the same file as the secrets themselves. With that in place, anyone with access to the repo can encrypt new values, without any ability to read existing values.</p>\n<p>Further, since the public key is available in the repository, Amber is able to perform sanity checks to ensure that its secret key matches up with the public key in the repository. While the encryption algorithms we use provide the ability to ensure message integrity, this self-check provides for nicer diagnostics, clearly distinguishing "message corrupted" from "looks like you're using the wrong secret key for this repository."</p>\n<h2 id=\"minimizing-deltas\">Minimizing deltas</h2>\n<p>Amber is optimized for the Git repository case. This includes wanting to minimize the deltas when updating secrets. This resulted in three design decisions:</p>\n<ul>\n<li>\n<p>The config file format is YAML. Its whitespace-sensitive formatting makes it a great choice to minimize the number of lines affected when updating a secret. While other formats (like TOML) would have been great choices too, I stuck with YAML as, anecdotally, it seems to have stronger overall language support for people wishing to write companion tools.</p>\n</li>\n<li>\n<p>In addition to storing the secret name and encrypted value (the ciphertext), Amber additionally includes a SHA256 digest of the secret. This means that, if you encrypt the same value twice, Amber can detect this and avoid generating a new ciphertext. 
This has the additional benefit of letting users check if they know the secret value without being able to decrypt the file.</p>\n</li>\n<li>\n<p>The most natural representation of this data would be a YAML mapping, something like:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">secrets</span><span style=\"color:#657b83;\">:\n</span><span style=\"color:#657b83;\"> </span><span style=\"color:#268bd2;\">NAME1</span><span style=\"color:#657b83;\">:\n</span><span style=\"color:#657b83;\"> </span><span style=\"color:#268bd2;\">sha256</span><span style=\"color:#657b83;\">: </span><span style=\"color:#2aa198;\">deadbeef\n</span><span style=\"color:#657b83;\"> </span><span style=\"color:#268bd2;\">cipher</span><span style=\"color:#657b83;\">: </span><span style=\"color:#2aa198;\">abc123\n</span></code></pre>\n<p>However, in most languages, the ordering of keys in a mapping is arbitrary. This makes it harder to read these files, and means that arbitrary minor changes may result in large deltas. 
Instead, Amber stores secrets in an array:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">secrets</span><span style=\"color:#657b83;\">:\n</span><span style=\"color:#657b83;\">- </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#2aa198;\">NAME1\n</span><span style=\"color:#657b83;\"> </span><span style=\"color:#268bd2;\">sha256</span><span style=\"color:#657b83;\">: </span><span style=\"color:#2aa198;\">deadbeef\n</span><span style=\"color:#657b83;\"> </span><span style=\"color:#268bd2;\">cipher</span><span style=\"color:#657b83;\">: </span><span style=\"color:#2aa198;\">abc123\n</span></code></pre></li>\n</ul>\n<p>This all works together to achieve what for me is the goal of secrets in a repository: you can trivially see in a <code>git diff</code> which secret values were added, removed, or updated.</p>\n<h2 id=\"local-running\">Local running</h2>\n<p>Ideally, production deployments are only ever run from the official CI/CD system designated for that. However:</p>\n<ol>\n<li>Sometimes during development it's much easier to iterate by doing non-production deployments from your local system.</li>\n<li>As a realist, I have to admit that even the best-run DevOps teams may occasionally need to bend the rules for expediency or better debugging of a production issue.</li>\n</ol>\n<p>For Kube360, it wasn't unreasonable to have about a dozen secret values for a standard deployment. Copy/pasting all of those to your local machine each time you want to debug an issue wasn't feasible. This encouraged some worst practices, such as keeping the secret values in a plain-text shell script file locally. For a development cluster, that's not the worst thing in the world. But lax security practices in dev tend to bleed into prod too easily.</p>\n<p>Copying a single secret value from CI secrets or a team password manager is a completely different story. 
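</p>\n<p>In practice, a local debug session looks something like the following sketch (the <code>AMBER_SECRET</code> variable name and the script name are my assumptions; check the repo README for Amber's actual interface):</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>$ export AMBER_SECRET=...          # pasted once from your password manager\n$ amber exec -- ./dev-deploy.sh    # secrets from amber.yaml are decrypted\n                                   # and exposed as environment variables\n</code></pre>\n<p>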
It takes 30 seconds at the beginning of a debug session. I have no objection to doing so.</p>\n<p>Even this may be something we can bypass with cloud secrets managers, which I'll mention below.</p>\n<h2 id=\"what-s-with-the-name\">What's with the name?</h2>\n<p>As we all know, there are two hard problems in computer science:</p>\n<ol>\n<li>Cache invalidation</li>\n<li>Naming things</li>\n<li>Off-by-one errors</li>\n</ol>\n<p>I named this tool Amber based on Jurassic Park, and the idea of some highly important data (dinosaur DNA) being trapped in amber under layers of sediment. This fit in nicely with my image of storing encrypted secrets inside the commits of a Git repository. But since I just finished playing "Legend of Zelda: Skyward Sword," a more appropriate image seems to be:</p>\n<p><img src=\"/images/blog/amber-zelda.png\" alt=\"Zelda trapped in amber\" /></p>\n<h2 id=\"implementation\">Implementation</h2>\n<p>I wrote this tool in Rust. It's a pretty small codebase currently, clocking in at only 445 SLOC. It's also a pretty simple overall implementation, if anyone is interested in a first project to contribute to.</p>\n<h2 id=\"future-enhancements\">Future enhancements</h2>\n<p>Future enhancements will be driven by internal and customer needs at FP Complete, as well as feedback we receive on the issue tracker and pull requests. I have a few enhancement ideas, ranging from concrete to nebulous:</p>\n<ul>\n<li>Masking values. Currently, <code>amber exec</code> will simply run the child process without modifying its output at all. A standard CI system feature is to mask secret values from output. Implementing such a change in Amber should be straightforward. (<a href=\"https://github.com/fpco/amber/issues/1\">Issue #1</a>)</li>\n<li>Tie-ins with cloud secrets management systems. Currently, Amber's only source of the secret key is via environment variables. 
There are many use cases where grabbing the data from a secrets manager, such as AWS Secrets Manager or Azure Key Vault, would be a better choice. In particular, during deployments, this could allow delegating access to secrets to existing cloud-native permissions mechanisms. See <a href=\"https://github.com/fpco/amber/issues/2\">issue #2</a> and <a href=\"https://github.com/fpco/amber/pull/4\">pull request #4</a> for some more information. One possible approach here is to follow a pattern of naming the secret based on the public key, leading to a zero-config approach to discovering the secret key (since the public key is already in the repository).</li>\n<li>Additional platform support. Currently, we're building executables for x86-64 on Linux (static via musl), Windows, and Mac. Cross compilation support from Rust is great, and one of the reasons I prefer writing CI tools like this in Rust. However, the <code>sodiumoxide</code> library depends on <code>libsodium</code>, so additional GitHub Actions setup will be necessary to get these builds working.</li>\n<li>Auto-generation of passwords. In our Kube360 work, a common need is to generate a temporary password to be used by different components in the system (e.g., an OpenID Connect client secret used by both the Identity Provider and Service Provider). A simple <code>amber gen-password CLIENT_SECRET</code> subcommand may be nice.</li>\n<li>I haven't released this code to <a href=\"https://crates.io/\">crates</a>, but if there's interest I'd be happy to do so.</li>\n<li>Support for encrypted files in addition to encrypted environment variables. I haven't really thought through what the interface for this may look like.</li>\n</ul>\n<h2 id=\"get-started\">Get started</h2>\n<p>There are <a href=\"https://github.com/fpco/amber#readme\">instructions in the repo</a> for getting started with Amber. 
The basic steps are:</p>\n<ul>\n<li>Download the executable from <a href=\"https://github.com/fpco/amber/releases\">the release page</a> or build it yourself</li>\n<li>Use <code>amber init</code> to create an <code>amber.yaml</code> file and a secret key</li>\n<li>Store the secret key somewhere safe, like your password manager, and additionally within your CI system's secrets\n<ul>\n<li>In theory, this is the last value you'll ever store there!</li>\n</ul>\n</li>\n<li>Add your secrets with <code>amber encrypt</code></li>\n<li>Commit <code>amber.yaml</code> to your repository</li>\n<li>Modify your CI scripts to download the Amber executable and use <code>amber exec</code> to run commands that need secrets</li>\n</ul>\n<h2 id=\"more-from-fp-complete\">More from FP Complete</h2>\n<p>FP Complete is an IT consulting firm specializing in server-side development, DevOps, Rust, and Haskell. A large part of our consulting involves improving and automating build and deployment pipelines. If you're interested in additional help from FP Complete in one of these domains, please <a href=\"https://www.fpcomplete.com/contact-us/\">contact us</a>.</p>\n<p>Interested in working with a team of DevOps, Rust, and Haskell engineers to solve real world problems? We're actively <a href=\"https://www.fpcomplete.com/jobs/\">hiring senior and lead DevOps engineers</a>.</p>\n<p>Want to read more? Check out:</p>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/blog/\">Our blog</a></li>\n<li><a href=\"https://www.fpcomplete.com/platformengineering/\">Our DevOps homepage</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/\">Our Rust homepage</a></li>\n</ul>\n",
"permalink": "https://www.fpcomplete.com/blog/announcing-amber-ci-secret-tool/",
"slug": "announcing-amber-ci-secret-tool",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Announcing Amber, encrypted secrets management",
"description": "We've released a new tool, Amber, to help better manage secrets in Git repositories for CI purposes. Read more about the motivation and how to get started.",
"updated": null,
"date": "2021-08-17",
"year": 2021,
"month": 8,
"day": 17,
"taxonomies": {
"tags": [
"kubernetes",
"rust"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/devops.png",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/announcing-amber.png"
},
"path": "blog/announcing-amber-ci-secret-tool/",
"components": [
"blog",
"announcing-amber-ci-secret-tool"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "the-pain-points",
"permalink": "https://www.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#the-pain-points",
"title": "The pain points",
"children": []
},
{
"level": 2,
"id": "why-public-key",
"permalink": "https://www.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#why-public-key",
"title": "Why public key",
"children": []
},
{
"level": 2,
"id": "minimizing-deltas",
"permalink": "https://www.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#minimizing-deltas",
"title": "Minimizing deltas",
"children": []
},
{
"level": 2,
"id": "local-running",
"permalink": "https://www.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#local-running",
"title": "Local running",
"children": []
},
{
"level": 2,
"id": "what-s-with-the-name",
"permalink": "https://www.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#what-s-with-the-name",
"title": "What's with the name?",
"children": []
},
{
"level": 2,
"id": "implementation",
"permalink": "https://www.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#implementation",
"title": "Implementation",
"children": []
},
{
"level": 2,
"id": "future-enhancements",
"permalink": "https://www.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#future-enhancements",
"title": "Future enhancements",
"children": []
},
{
"level": 2,
"id": "get-started",
"permalink": "https://www.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#get-started",
"title": "Get started",
"children": []
},
{
"level": 2,
"id": "more-from-fp-complete",
"permalink": "https://www.fpcomplete.com/blog/announcing-amber-ci-secret-tool/#more-from-fp-complete",
"title": "More from FP Complete",
"children": []
}
],
"word_count": 1879,
"reading_time": 10,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/istio-mtls-debugging-story.md",
"content": "<p>Last week, our team was working on a feature enhancement to <a href=\"https://www.fpcomplete.com/products/kube360/\">Kube360</a>. We work with clients in regulated industries, and one of the requirements was fully encrypted traffic throughout the cluster. While we've supported Istio's mutual TLS (mTLS) as an optional feature for end-user applications, not all of our built-in services were using mTLS strict mode. We were working on rolling out that support.</p>\n<p>One of the cornerstones of Kube360 is our centralized authentication system, which is primarily supplied by a service (called <code>k3dash</code>) that receives incoming traffic, performs authentication against an external identity provider (such as Okta, Azure AD, or others), and then provides those credentials to the other services within the cluster, such as the Kubernetes Dashboard or Grafana. This service in particular was giving some trouble.</p>\n<p>Before diving into the bugs and the debugging journey, however, let's review both Istio's mTLS support and relevant details of how <code>k3dash</code> operates.</p>\n<p><em>Interested in solving these kinds of problems? We're looking for experienced DevOps engineers to join our global team. We're hiring globally, and particularly looking for another US lead engineer. If you're interested, <a href=\"mailto:jobs@fpcomplete.com\">send your CV to jobs@fpcomplete.com</a>.</em></p>\n<h2 id=\"what-is-mtls\">What is mTLS?</h2>\n<p>In a typical Kubernetes setup, encrypted traffic comes into the cluster and hits a load balancer. That load balancer terminates the TLS connection, resulting in decrypted traffic. That decrypted traffic is then sent to the relevant service within the cluster. Since traffic within the cluster is typically considered safe, for many use cases this is an acceptable approach.</p>\n<p>But for some use cases, such as handling Personally Identifiable Information (PII), extra safeguards may be desired or required. 
In those cases, we would like to ensure that <em>all</em> network traffic, even traffic inside the same cluster, is encrypted. That gives extra guarantees against both snooping (reading data in transit) and spoofing (faking the source of data) attacks. This can help mitigate the impact of other flaws in the system.</p>\n<p>Implementing this complete data-in-transit encryption system manually requires a major overhaul to essentially every application in the cluster. You'll need to teach all of them to terminate their own TLS connections, issue certificates for all applications, and add a new Certificate Authority for all applications to respect.</p>\n<p>Istio's mTLS handles this outside of the application. It installs a sidecar that communicates with your application over a localhost connection, bypassing exposed network traffic. It uses sophisticated port forwarding rules (via iptables) to redirect the pod's incoming and outgoing traffic through the sidecar. And the Envoy proxy in the sidecar handles all the logic of obtaining TLS certificates, refreshing keys, termination, etc.</p>\n<p>The way Istio handles all of this is pretty incredible. When it works, it works great. And when it fails, it can be disastrously difficult to debug. Which is what happened here (though thankfully it took less than a day to get to a conclusion). In the realm of <em>epic foreshadowment</em>, let me point out three specifics about Istio's mTLS.</p>\n<ul>\n<li>In strict mode, which is what we're going for, the Envoy sidecar will reject any incoming plaintext communication.</li>\n<li>Something I hadn't recognized at first, but now have fully internalized: normally, if you make an HTTP connection to a host that doesn't exist, you'll get a failed connection error. You definitely <em>won't</em> get an HTTP response. With Istio, however, you'll <em>always</em> make a successful outgoing HTTP connection, since your connection is going to Envoy itself. 
If the Envoy proxy cannot make the connection, it will return an HTTP response body with a 503 error message, like most proxies.</li>\n<li>The Envoy proxy has special handling for some protocols. Most importantly, if you make a plaintext HTTP outgoing connection, the Envoy proxy has sophisticated abilities to parse the outgoing request, understand details about various headers, and do intelligent routing.</li>\n</ul>\n<p>OK, that's mTLS. Let's talk about the other player here: <code>k3dash</code>.</p>\n<h2 id=\"k3dash-and-reverse-proxying\"><code>k3dash</code> and reverse proxying</h2>\n<p>The primary method <code>k3dash</code> uses to provide authentication credentials to other services inside the cluster is HTTP reverse proxying. This is a common technique, and common libraries exist for doing it. In fact, <a href=\"https://www.stackage.org/package/http-reverse-proxy\">I wrote one such library</a> years ago. We've already mentioned a common use case of reverse proxying: load balancing. In a reverse proxy situation, incoming traffic is received by one server, which analyzes the incoming request, performs some transformations, and then chooses a destination service to forward the request to.</p>\n<p>One of the most important aspects of reverse proxying is header management. There are a few different things you can do at the header level, such as:</p>\n<ul>\n<li>Remove hop-by-hop headers, such as <code>transfer-encoding</code>, which apply to a single hop and not the end-to-end communication between client and server.</li>\n<li>Inject new headers. For example, in <code>k3dash</code>, we regularly inject headers recognized by the final services for authentication purposes.</li>\n<li>Leave headers completely untouched. 
This is often the case with headers like <code>content-type</code>, where we typically want the client and final server to exchange data without any interference.</li>\n</ul>\n<p>As one <em>epic foreshadowment</em> example, consider the <code>Host</code> header in a typical reverse proxy situation. I may have a single load balancer handling traffic for a dozen different domain names, including domain names <code>A</code> and <code>B</code>. And perhaps I have a single service behind the reverse proxy serving the traffic for both of those domain names. I need to make sure that my load balancer forwards on the <code>Host</code> header to the final service, so it can decide how to respond to the request.</p>\n<p><code>k3dash</code> in fact uses the library linked above for its implementation, and is following fairly standard header forwarding rules, plus making some specific modifications within the application.</p>\n<p>I think that's enough backstory, and perhaps you're already beginning to piece together what went wrong based on my clues above. Anyway, let's dive in!</p>\n<h2 id=\"the-problem\">The problem</h2>\n<p>One of my coworkers, Sibi, got started on the Istio mTLS strict mode migration. He got strict mode turned on in a test cluster, and then began to figure out what was broken. I don't know all the preliminary changes he made. But when he reached out to me, he'd gotten us to a point where the Kubernetes load balancer was successfully receiving the incoming requests for <code>k3dash</code> and forwarding them along to <code>k3dash</code>. <code>k3dash</code> was able to log the user in and provide its own UI display. All good so far.</p>\n<p>However, following through from the main UI to the Kubernetes Dashboard would fail, and we'd end up with this error message in the browser:</p>\n<blockquote>\n<p>upstream connect error or disconnect/reset before headers. 
reset reason: connection failure</p>\n</blockquote>\n<p>Sibi believed this to be a problem with the <code>k3dash</code> codebase itself and asked me to step in to help debug.</p>\n<h2 id=\"the-wrong-rabbit-hole-and-incredible-laziness\">The wrong rabbit hole, and incredible laziness</h2>\n<p>This whole section is just a cathartic gripe session on how I foot-gunned myself. I'm entirely to blame for my own pain, as we're about to see.</p>\n<p>It seemed pretty clear that the outgoing connection from the <code>k3dash</code> pod to the <code>kubernetes-dashboard</code> pod was failing. (And this turned out to be a safe guess.) The first thing I wanted to do was make a simpler repro, which in this case involved <code>kubectl exec</code>ing into the <code>k3dash</code> container and <code>curl</code>ing to the in-cluster service endpoint. Essentially:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">$ curl -ivvv http://kube360-kubernetes-dashboard.kube360-system.svc.cluster.local/\n* Trying 172.20.165.228...\n* TCP_NODELAY set\n* Connected to kube360-kubernetes-dashboard.kube360-system.svc.cluster.local (172.20.165.228) port 80 (#0)\n> GET / HTTP/1.1\n> Host: kube360-kubernetes-dashboard.kube360-system.svc.cluster.local\n> User-Agent: curl/7.58.0\n> Accept: */*\n>\n< HTTP/1.1 503 Service Unavailable\nHTTP/1.1 503 Service Unavailable\n< content-length: 84\ncontent-length: 84\n< content-type: text/plain\ncontent-type: text/plain\n< date: Wed, 14 Jul 2021 15:29:04 GMT\ndate: Wed, 14 Jul 2021 15:29:04 GMT\n< server: envoy\nserver: envoy\n<\n* Connection #0 to host kube360-kubernetes-dashboard.kube360-system.svc.cluster.local left intact\nupstream connect error or disconnect/reset before headers. reset reason: local reset\n</span></code></pre>\n<p>This reproed the problem right away. Great! 
I was now completely convinced that the problem was not <code>k3dash</code> specific, since neither <code>curl</code> nor <code>k3dash</code> could make the connection, and they both gave the same <code>upstream connect error</code> message. I could think of a few different reasons for this to happen, none of which were correct:</p>\n<ul>\n<li>The outgoing packets from the container were not being sent to the Envoy proxy. I strongly believed this one for a while. But if I'd thought a bit harder, I would have realized that this was completely impossible. That <code>upstream connect error</code> message was of course coming from the Envoy proxy itself! If we were having a normal connection failure, we would have received the error message at the TCP level, not as an HTTP 503 response code. Next!</li>\n<li>The Envoy sidecar was receiving the packets, but the mesh was confused enough that it couldn't figure out how to connect to the destination Envoy sidecar. This turned out to be partially right, but not in the way I thought.</li>\n</ul>\n<p>I futzed around with lots of different attempts here but was essentially stalled. Until Sibi noticed something fascinating. It turns out that the following, seemingly nonsensical command <em>did</em> work:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">curl http://kube360-kubernetes-dashboard.kube360-system.svc.cluster.local:443/\n</span></code></pre>\n<p>For some reason, making an <em>insecure</em> HTTP request over 443, the <em>secure</em> HTTPS port, worked. This made no sense, of course. Why would using the wrong port fix everything? And this is where incredible laziness comes into play. You see, Kubernetes Dashboard's default configuration uses TLS, and requires all of that setup I mentioned above about passing around certificates and updating accepted Certificate Authorities. But you can turn off that requirement, and make it listen on plain text. 
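</p>\n<p>For reference, the Dashboard's plain-text mode is configured via its container arguments, roughly like this (the flag names are from memory and may differ across Dashboard versions; treat this as a sketch, not a drop-in config):</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>containers:\n- name: kubernetes-dashboard\n  args:\n  - --insecure-bind-address=0.0.0.0\n  - --insecure-port=9090   # plain HTTP; note: not 443!\n</code></pre>\n<p>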
Since (1) this was intracluster communication, and (2) we've always had strict mTLS on our roadmap, we decided to simply turn off TLS in the Kubernetes Dashboard. However, when doing so, I forgot to switch the port number from 443 to 80.</p>\n<p>Not to worry though! I <em>did</em> remember to correctly configure <code>k3dash</code> to communicate with Kubernetes Dashboard, using insecure HTTP, over port 443. Since both parties agreed on the port, it didn't matter that it was the wrong port.</p>\n<p>But this was all very frustrating. It meant that the "repro" wasn't a repro at all. <code>curl</code>ing on the wrong port was giving the same error message, but for a different reason. Meanwhile, we went ahead and changed Kubernetes Dashboard to listen on port 80 and <code>k3dash</code> to connect on port 80. We thought there <em>might</em> be a possibility that the Envoy proxy was giving some special treatment to the port number, which in retrospect doesn't really make much sense. In any event, we were left right back where we started: without a true repro.</p>\n<h2 id=\"the-bug-is-in-k3dash\">The bug is in <code>k3dash</code></h2>\n<p>Now it was clear that Sibi was right. <code>curl</code> could connect, <code>k3dash</code> couldn't. The bug <em>must</em> be inside <code>k3dash</code>. But I couldn't figure out how. Being the author of essentially all the HTTP libraries involved in this toolchain, I began to worry that my HTTP client library itself may somehow be the source of the bug. I went down a rabbit hole there too, putting together some minimal sample programs outside <code>k3dash</code>. I <code>kubectl cp</code>ed them over and then ran them... and everything worked fine. Phew, my libraries were working, but not <code>k3dash</code>.</p>\n<p>Then I did the thing I should have done at the very beginning. I looked at the logs very, very carefully. Remember, <code>k3dash</code> is doing a reverse proxy. 
So, it receives an incoming request, modifies it, makes the new request, and then sends a modified response back. The logs included the modified outgoing HTTP request (some fields modified to remove private information):</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">2021-07-15 05:20:39.820662778 UTC ServiceRequest Request {\n host = "kube360-kubernetes-dashboard.kube360-system.svc.cluster.local"\n port = 80\n secure = False\n requestHeaders = [("X-Real-IP","127.0.0.1"),("host","test-kube360-hostname.hidden"),("upgrade-insecure-requests","1"),("user-agent","<REDACTED>"),("accept","text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"),("sec-gpc","1"),("referer","http://test-kube360-hostname.hidden/dash"),("accept-language","en-US,en;q=0.9"),("cookie","<REDACTED>"),("x-forwarded-for","192.168.0.1"),("x-forwarded-proto","http"),("x-request-id","<REDACTED>"),("x-envoy-attempt-count","3"),("x-envoy-internal","true"),("x-forwarded-client-cert","<REDACTED>"),("Authorization","<REDACTED>")]\n path = "/"\n queryString = ""\n method = "GET"\n proxy = Nothing\n rawBody = False\n redirectCount = 0\n responseTimeout = ResponseTimeoutNone\n requestVersion = HTTP/1.1\n}\n</span></code></pre>\n<p>I tried to leave in enough content here to give you the same overwhelmed sense that I had looking at it. Keep in mind the <code>requestHeaders</code> field is in practice about three times as long. Anyway, with the slimmed-down headers, and all my hints throughout, see if you can guess what the problem is.</p>\n<p>Ready? It's the <code>Host</code> header! Let's take a quote from the <a href=\"https://istio.io/latest/docs/ops/configuration/traffic-management/traffic-routing/\">Istio traffic routing documentation</a>. Regarding HTTP traffic, it says:</p>\n<blockquote>\n<p>Requests are routed based on the port and <em><code>Host</code></em> header, rather than port and IP. 
This means the destination IP address is effectively ignored. For example, <code>curl 8.8.8.8 -H "Host: productpage.default.svc.cluster.local"</code>, would be routed to the <code>productpage</code> Service.</p>\n</blockquote>\n<p>See the problem? <code>k3dash</code> is behaving like a standard reverse proxy, and including the <code>Host</code> header, which is almost always the right thing to do. But not here! In this case, that <code>Host</code> header we're forwarding is confusing Envoy. Envoy is trying to connect to something (<code>test-kube360-hostname.hidden</code>) that doesn't respond to its mTLS connections. That's why we get the <code>upstream connect error</code>. And that's why we got the same response as when we used the wrong port number, since Envoy is configured to only receive incoming traffic on a port that the service is actually listening to.</p>\n<h2 id=\"the-fix\">The fix</h2>\n<p>After all of that, the fix is rather anticlimactic:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#dc322f;\">-(\\(h, _) -> not (Set.member h _serviceStripHeaders))\n</span><span style=\"color:#859900;\">+-- Strip out host headers, since they confuse the Envoy proxy\n+(\\(h, _) -> not (Set.member h _serviceStripHeaders) && h /= "Host")\n</span></code></pre>\n<p>We already had logic in <code>k3dash</code> to strip away specific headers for each service. And it turns out this logic was primarily used to strip out the <code>Host</code> header for services that got confused when they saw it! Now we just need to strip away the <code>Host</code> header for all the services instead. Fortunately none of our services perform any logic based on the <code>Host</code> header, so with that in place, we should be good. We deployed the new version of <code>k3dash</code>, and voilà! 
everything worked.</p>\n<h2 id=\"the-moral-of-the-story\">The moral of the story</h2>\n<p>I walked away from this adventure with a much better understanding of how Istio interacts with applications, which is great. I got a great reminder to look more carefully at log messages before hardening my assumptions about the source of a bug. And I got a great kick in the pants for being lazy about port number fixes.</p>\n<p>All in all, it was about six hours of debugging fun. And to quote a great Hebrew phrase on it, "היה טוב, וטוב שהיה" (it was good, and good that it <em>was</em> (in the past)).</p>\n<hr />\n<p>As I mentioned above, we're actively looking for new DevOps candidates, especially US based candidates. If you're interested in working with a global team of experienced DevOps, Rust, and Haskell engineers, consider <a href=\"mailto:jobs@fpcomplete.com\">sending us your CV</a>.</p>\n<p>And if you're looking for a solid Kubernetes platform, batteries included, so you can offload this kind of tedious debugging to some other unfortunate souls (read: us), <a href=\"https://www.fpcomplete.com/products/kube360/\">check out Kube360</a>.</p>\n<p>If you liked this article, you may also like:</p>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/platformengineering/\">DevSecOps homepage</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/rust-kubernetes-windows/\">Deploying Rust with Windows Containers on Kubernetes</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/cloud-vendor-neutrality/\">Cloud Vendor Neutrality</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/\">DevOps for (Skeptical) Developers</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/kube360s-kubernetes-security-focus/\">Secure defaults with Kubernetes Security with Kube360</a></li>\n</ul>\n<div class=\"blog-cta\">\n<p><a href=\"https://www.fpcomplete.com/signups/request-a-demo/\"><img src=\"/images/cta/kube360.png\" alt=\"See what Kube360 can do for you\" 
/></a></p>\n</div>\n",
"permalink": "https://www.fpcomplete.com/blog/istio-mtls-debugging-story/",
"slug": "istio-mtls-debugging-story",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "An Istio/mutual TLS debugging story",
"description": "While rolling out Istio's strict mTLS mode in our Kube360 product, we ran into an interesting corner case problem.",
"updated": null,
"date": "2021-07-20",
"year": 2021,
"month": 7,
"day": 20,
"taxonomies": {
"categories": [
"devops",
"kube360",
"it-compliance"
],
"tags": [
"kubernetes",
"regulated"
]
},
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/devops.png",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/istio-mtls-debugging-story.png"
},
"path": "blog/istio-mtls-debugging-story/",
"components": [
"blog",
"istio-mtls-debugging-story"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "what-is-mtls",
"permalink": "https://www.fpcomplete.com/blog/istio-mtls-debugging-story/#what-is-mtls",
"title": "What is mTLS?",
"children": []
},
{
"level": 2,
"id": "k3dash-and-reverse-proxying",
"permalink": "https://www.fpcomplete.com/blog/istio-mtls-debugging-story/#k3dash-and-reverse-proxying",
"title": "k3dash and reverse proxying",
"children": []
},
{
"level": 2,
"id": "the-problem",
"permalink": "https://www.fpcomplete.com/blog/istio-mtls-debugging-story/#the-problem",
"title": "The problem",
"children": []
},
{
"level": 2,
"id": "the-wrong-rabbit-hole-and-incredible-laziness",
"permalink": "https://www.fpcomplete.com/blog/istio-mtls-debugging-story/#the-wrong-rabbit-hole-and-incredible-laziness",
"title": "The wrong rabbit hole, and incredible laziness",
"children": []
},
{
"level": 2,
"id": "the-bug-is-in-k3dash",
"permalink": "https://www.fpcomplete.com/blog/istio-mtls-debugging-story/#the-bug-is-in-k3dash",
"title": "The bug is in k3dash",
"children": []
},
{
"level": 2,
"id": "the-fix",
"permalink": "https://www.fpcomplete.com/blog/istio-mtls-debugging-story/#the-fix",
"title": "The fix",
"children": []
},
{
"level": 2,
"id": "the-moral-of-the-story",
"permalink": "https://www.fpcomplete.com/blog/istio-mtls-debugging-story/#the-moral-of-the-story",
"title": "The moral of the story",
"children": []
}
],
"word_count": 2666,
"reading_time": 14,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/cloud-vendor-neutrality.md",
"content": "<p>Earlier this week, Amazon removed Parler from its platform. As a company hosting a network service on a cloud provider today, should you worry about such actions from cloud vendors? And what steps should you be taking now?</p>\n<p>In this post, we'll explore some of the risks associated with being tied to a single vendor, and the costs involved in breaking the dependency. I'll also give some recommendations on low hanging fruit.</p>\n<p>Ultimately, how far down the vendor neutrality path you want to go is a company specific risk mitigation strategy. In this post, we'll explore the raw information, but deeper analysis would be based on your company's specific situation. As usual, if you would like more direct help from the team at FP Complete in understanding these topics, please <a href=\"https://www.fpcomplete.com/contact-us/\">contact us for a consultation</a>.</p>\n<h2 id=\"what-is-vendor-neutrality\">What is vendor neutrality?</h2>\n<p>Vendor neutrality is not a binary. There are various levels on a spectrum from an application that leverages many vendor-specific services to an application which runs on any Linux machine in the world. Achieving complete vendor neutrality is almost never the goal. Instead, most companies interested in this topic are looking to reduce their dependencies where reasonable.</p>\n<p>To be more concrete, let's say you're on Amazon, and you're looking into what database options to use in your application. Your team comes up with three options:</p>\n<ol>\n<li>Build it using DynamoDB, an Amazon-specific proprietary offering</li>\n<li>Build it using PostgreSQL hosted on Amazon's RDS service</li>\n<li>Build it using PostgreSQL which your team manages themselves</li>\n</ol>\n<p>Option (1) provides no vendor neutrality. If you, for any reason, decide to leave Amazon, you'll need to rewrite large parts of your application to move from DynamoDB. 
This may be a significant undertaking, introducing a major barrier to exit from Amazon.</p>\n<p>Option (2), while still leveraging an Amazon service, does not fall into that same trap. Your application will speak to PostgreSQL, an open source database that can be hosted anywhere in the world. If you're dissatisfied with RDS, you can migrate to another offering fairly easily. PostgreSQL hosted offerings are available on other cloud providers. And by using RDS, you'll get some features more easily, such as backups and replication.</p>\n<p>Option (3) is the most vendor neutral. You'll be forced to implement all features of PostgreSQL you want yourself. Maybe this will entail creating a Docker image with a fully configured PostgreSQL instance. Moving this to Azure or on-prem is even easier than option (2). But we may be at the point of diminishing returns, as we'll discuss below.</p>\n<p>To summarize: vendor neutrality is a spectrum measuring how tied you are to a specific vendor, and how difficult it would be to move to a different one.</p>\n<h2 id=\"advantages-of-vendor-neutrality\">Advantages of vendor neutrality</h2>\n<p>The current situation with Parler is an extreme example of the advantages of vendor neutrality. I would imagine most companies doing business with Amazon don't have a reasonable expectation that Amazon would decide to remove them from their platform. Again, this is a risk assessment scenario, and you need to analyze the risk for your own business. A company hosting uncensored political discourse is in a different risk category from someone running a personal blog.</p>\n<p>But this is far from the only advantage of vendor neutrality. Let's analyze some of the most common concerns I've seen for companies to remain vendor neutral.</p>\n<ul>\n<li><strong>Price sensitivity</strong> Cloud costs can be a major part of a company's budget, and costs can vary radically between providers. 
Various providers are also willing to give large incentives for companies to switch platforms. But if you've designed your application deeply around one provider, the cost of switching may well exceed the long-term cost savings, leaving you at your current provider's mercy.</li>\n<li><strong>Regulatory obligations</strong> Some governments may have requirements that your software run on specific vendor hardware, or specific on-prem environments. Building up your software around one provider may prevent you from offering your services in those cases.</li>\n<li><strong>Client preference</strong> Similarly, if you provide managed software to companies, they may have a built-in cloud provider preference. If you've built your software on Google Cloud, but they have a corporate policy that all new projects live on Azure, you may lose the sale.</li>\n<li><strong>Geographic distribution</strong> For lowest latency, you'll want to put your services as close to the clients as possible. And it may turn out that the provider you've chosen simply doesn't have a presence there. Or a competitor may be closer. Or a service you want to peer with is on a different provider, and the data costs will be much lower if you switch providers.</li>\n</ul>\n<p>There are many more examples; this isn't an exhaustive list. What I want to motivate here is that vendor neutrality isn't just a fringe ideal for companies afraid of platform eviction. There are many reasons a normal company in its normal course of business may wish to be vendor neutral. You should analyze these cases, as well as others that may apply to your company, and assess the value of neutrality.</p>\n<h2 id=\"costs-of-vendor-neutrality\">Costs of vendor neutrality</h2>\n<p>Vendor neutrality does not come for free. A primary value proposition of most cloud providers is quick time to market. By leveraging existing services, your team can offload creation and maintenance of complex systems. 
Eschewing such services and building from scratch will impact your time to market, and potentially have other impacts (like increased bug rates, reduced reliability, etc.).</p>\n<p>I often see engineers decrying the evils of vendor lock-in without taking these costs into account. As a business, you'll need to find a way to adequately and accurately measure these costs as you make decisions, instead of turning it into a quasi-religious crusade against all forms of lock-in.</p>\n<p>With these trade-offs in mind, I'll finish off this post by explaining some of the most bang-for-the-buck moves you can make, which:</p>\n<ul>\n<li>Move you much farther along the vendor neutral spectrum</li>\n<li>Do not cost significant engineering work, if undertaken early on and designed correctly</li>\n<li>Provide additional benefits whenever possible</li>\n</ul>\n<h2 id=\"leverage-open-source-tools\">Leverage open source tools</h2>\n<p>The hardest lock-in to overcome is dedication to a proprietary tool. Without naming names, some large 6-letter database companies have earned a reputation for leveraging lock-in with major increases in licensing fees. Once you're tied into that model, it's difficult to disengage.</p>\n<p>Open source tools provide a major protection against this. Assuming the licenses are correct—and you should be sure to check that—no one can ever take your open source tools away from you. Sure, a provider may decide to stop maintaining the software. Or perhaps future releases may be closed source instead. Or perhaps they won't address your bug reports without paying for a support contract. But ultimately, you retain lots of freedom to take the software, modify it as necessary, and deploy it everywhere.</p>\n<p>There has long been a debate between the features and maturity of proprietary versus open source tooling. As always, we cannot make our decisions in a vacuum, and the flexibility of open source is not the be-all and end-all for a business. 
However, in the past decade in particular, open source has come to dominate large parts of the deployment space.</p>\n<p>To pick on the example above: while DynamoDB is a powerful and flexible database option on AWS, it's far from unique. Cassandra, Redis, PostgreSQL, and dozens of other open source databases are readily available, with companies offering support, commercial hosting, and paid consulting services.</p>\n<p>We've seen a major shift occur as well in the software development language space. Many of the biggest tech companies in the world not only <em>use</em> open source languages, but provide their own complete language ecosystems, free of charge. Google's Go, Microsoft's .NET Core, Mozilla's <a href=\"https://www.fpcomplete.com/rust/\">Rust</a>, and Apple's Swift are some prime examples.</p>\n<p>Far from being the scrappy underdog, we've seen a shift where open source is the de facto standard, and proprietary options are viewed as niche. You're no longer trading quality for flexibility. You can often have your cake and eat it too.</p>\n<h3 id=\"kubernetes\">Kubernetes</h3>\n<p>I decided to give one open source player its own subsection in this context. Kubernetes is an orchestration management tool, managing various cloud resources for hosting containerized applications in both Linux and Windows. The first notable thing in this context is that Kubernetes has effectively supplanted other proprietary and cloud-specific offerings. Those offerings still exist, but from a market share standpoint, Kubernetes is clearly in a dominant position.</p>\n<p>The second thing to note is that Kubernetes is a tool supported by many of the largest cloud providers. Google created Kubernetes, Microsoft provides significant support, and all three top cloud providers (Google, Azure, and AWS) offer native Kubernetes services.</p>\n<p>The final thing to note is that Kubernetes really goes beyond a single service. In many ways, it functions as a cloud abstraction layer. 
When you use Kubernetes, you often write your applications to target Kubernetes <em>instead of</em> targeting the underlying vendor. Instead of using a cloud Load Balancer, you'll use an ingress and service in Kubernetes. This drastically reduces the cost of remaining vendor neutral.</p>\n<p>As a plug, in <a href=\"https://www.fpcomplete.com/products/kube360/\">our own Kubernetes offering</a>, we've focused on combining commonly used open source components to provide a batteries-included experience with minimized vendor lock-in. We've already used it internally and for customers to easily migrate services between different cloud providers, and from the cloud to on-prem.</p>\n<div class=\"text-center\"><a href=\"/products/kube360\" class=\"button-coral\">Learn more about Kube360</a></div>\n<h2 id=\"high-value-cloud-services\">High value cloud services</h2>\n<p>Some cloud services provide an interesting combination of delivering high value with minimal lock-in costs. The greatest example of that is blob storage services, such as S3. The durability and availability guarantees cloud providers offer around your data are far greater than most teams would be able to provide on their own. The cost of usage is significantly lower than rolling your own solution using block storage in the cloud. And finally: the lock-in risks tend to be small. There are tools available to abstract the different vendor APIs for blob storage (and we include such a tool in Kube360). And even without such tools, generally the impact on a codebase from blob storage selection is minimal.</p>\n<p>Another example is services which host open source offerings. The RDS example above fits in nicely here. 
We generally recommend using hosted database offerings from cloud providers, since the cost is close to what you would pay to set it up yourself, you get lots of features quickly, and migration to a different option is trivial.</p>\n<p>And one final example is services like load balancers and auto-scaling groups. These are services that are impossible to implement fully yourself, would be far more expensive to implement to any extent using cloud virtual machines, and introduce virtually no lock-in. If you're moving from AWS to Azure, you'll need to change your infrastructure code to use Azure equivalents to those services. But generally, these can be seen at the same level of commodity as the virtual machines themselves. You're paying for a fairly standard service; you're rarely locking yourself into a vendor-specific feature.</p>\n<h2 id=\"multicloud-vs-hybrid-cloud\">Multicloud vs hybrid cloud</h2>\n<p>In previous discussions, the topic of vendor neutrality typically introduces the two confusing terms "multicloud" and "hybrid cloud." There is some disagreement in the tech space around what the former term means, but I'm going to define these two terms as:</p>\n<ul>\n<li><strong>Multicloud</strong> means that your service is capable of running on multiple different cloud providers and/or on-prem environments, but each environment will be autonomous from others</li>\n<li><strong>Hybrid cloud</strong> means that you can simultaneously run your service on multiple cloud providers, and they will replicate data, load balance, and perform other intelligent operations between the different providers</li>\n</ul>\n<p>Multicloud is a much easier thing to attain than hybrid cloud. Hybrid cloud introduces many new kinds of distributed systems failure models, as well as risks around major data transfer costs and latencies. 
There are certainly some potential advantages for hybrid cloud setups, but in our experience the much lower hanging fruit is in targeting multicloud.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>Summing up, there are many reasons a company may decide to keep its applications vendor neutral. Each of these reasons can be seen as a risk mitigation strategy, and a proper risk assessment and cost analysis should be performed. While current events have people's attention on vendor eviction, plenty of other reasons exist.</p>\n<p>On the other hand, vendor neutrality is not free, and should not be pursued to the detriment of the business. Finding high value, low cost moves to increase your neutrality is your best bet. Such moves may include:</p>\n<ul>\n<li>Opting for open source where possible</li>\n<li>Using a platform like <a href=\"https://www.fpcomplete.com/products/kube360/\">Kubernetes</a> that encourages more neutrality</li>\n<li>Opting for cloud services that are more easily swappable, such as load balancers</li>\n</ul>\n<p>If you would like more information or help with a vendor neutrality risk assessment, we would love to chat.</p>\n<div class=\"text-center\"><a href=\"/contact-us/\" class=\"button-coral\">Contact us for more information</a></div>\n<p>If you liked this post, you may also like:</p>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/blog/why-we-built-kube360/\">Why we built Kube360</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/understanding-cloud-deployments/\">Understanding Cloud Software Deployments</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/rust-for-devops-tooling/\">Using Rust for DevOps tooling</a></li>\n</ul>\n",
"permalink": "https://www.fpcomplete.com/blog/cloud-vendor-neutrality/",
"slug": "cloud-vendor-neutrality",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Cloud Vendor Neutrality",
"description": "Amazon recently removed Parler from its platform, causing some people to ask if and how they should protect themselves from cloud providers. In this post, we'll explore costs and benefits of keeping yourself cloud vendor neutral, and how to approach it expediently.",
"updated": null,
"date": "2021-01-13",
"year": 2021,
"month": 1,
"day": 13,
"taxonomies": {
"tags": [
"devops",
"insights"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/devops.png",
"image": "images/blog/cloud-vendor-neutrality.png"
},
"path": "blog/cloud-vendor-neutrality/",
"components": [
"blog",
"cloud-vendor-neutrality"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "what-is-vendor-neutrality",
"permalink": "https://www.fpcomplete.com/blog/cloud-vendor-neutrality/#what-is-vendor-neutrality",
"title": "What is vendor neutrality?",
"children": []
},
{
"level": 2,
"id": "advantages-of-vendor-neutrality",
"permalink": "https://www.fpcomplete.com/blog/cloud-vendor-neutrality/#advantages-of-vendor-neutrality",
"title": "Advantages of vendor neutrality",
"children": []
},
{
"level": 2,
"id": "costs-of-vendor-neutrality",
"permalink": "https://www.fpcomplete.com/blog/cloud-vendor-neutrality/#costs-of-vendor-neutrality",
"title": "Costs of vendor neutrality",
"children": []
},
{
"level": 2,
"id": "leverage-open-source-tools",
"permalink": "https://www.fpcomplete.com/blog/cloud-vendor-neutrality/#leverage-open-source-tools",
"title": "Leverage open source tools",
"children": [
{
"level": 3,
"id": "kubernetes",
"permalink": "https://www.fpcomplete.com/blog/cloud-vendor-neutrality/#kubernetes",
"title": "Kubernetes",
"children": []
}
]
},
{
"level": 2,
"id": "high-value-cloud-services",
"permalink": "https://www.fpcomplete.com/blog/cloud-vendor-neutrality/#high-value-cloud-services",
"title": "High value cloud services",
"children": []
},
{
"level": 2,
"id": "multicloud-vs-hybrid-cloud",
"permalink": "https://www.fpcomplete.com/blog/cloud-vendor-neutrality/#multicloud-vs-hybrid-cloud",
"title": "Multicloud vs hybrid cloud",
"children": []
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://www.fpcomplete.com/blog/cloud-vendor-neutrality/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 2235,
"reading_time": 12,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/rust-kubernetes-windows.md",
"content": "<p>A few years back, we <a href=\"https://www.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">published a blog post</a> about deploying a Rust application using Docker and Kubernetes. That application was a Telegram bot. We're going to do something similar today, but with a few meaningful differences:</p>\n<ol>\n<li>We're going to be deploying a web app. Don't get too excited: this will be an incredibly simple piece of code, basically copy-pasted from the <a href=\"https://actix.rs/docs/application/\">actix-web documentation</a>.</li>\n<li>We're going to build the deployment image on Github Actions</li>\n<li>And we're going to be building this using Windows Containers instead of Linux. (Sorry for burying the lead.)</li>\n</ol>\n<p>We put this together for testing purposes when rolling out Windows support in our <a href=\"https://www.fpcomplete.com/products/kube360/\">managed Kubernetes product, Kube360®</a> here at FP Complete. I wanted to put this post together to demonstrate a few things:</p>\n<ul>\n<li>How pleasant Windows Containers workflows are compared to the more familiar Linux approaches</li>\n<li>Github Actions work seamlessly for building Windows Containers</li>\n<li>With the correct configuration, Kubernetes is a great platform for deploying Windows Containers</li>\n<li>And, of course, how wonderful the Rust toolchain is on Windows</li>\n</ul>\n<p>Alright, let's dive in! And if any of those topics sound interesting, and you'd like to learn more about FP Complete offerings, please <a href=\"https://www.fpcomplete.com/contact-us/\">contact us for more information on our offerings</a>.</p>\n<h2 id=\"prereqs\">Prereqs</h2>\n<p>Quick sidenote before we dive in. Windows Containers only run on Windows machines. Not even all Windows machines will support Windows Containers. You'll need Windows 10 Pro or a similar license, and have Docker installed on that machine. 
You'll also need to ensure that Docker is set to use Windows instead of Linux containers.</p>\n<p>If you have all of that set up, you'll be able to follow along with most of the steps below. If not, you won't be able to build or run the Docker images on your local machine.</p>\n<p>Also, for running the application on Kubernetes, you'll need a Kubernetes cluster with Windows nodes. I'll be using the FP Complete Kube360 test cluster on Azure in this blog post, though we've previously tested it on both AWS and on-prem clusters too.</p>\n<h2 id=\"the-rust-application\">The Rust application</h2>\n<p>The source code for this application will be, by far, the most uninteresting part of this post. As mentioned, it's basically a copy-paste of an example straight from the actix-web documentation featuring mutable state. It turns out this was a great way to test out basic Kubernetes functionality like health checks, replicas, and autohealing.</p>\n<p>We're going to build this using the latest stable Rust version as of writing this post, so create a <code>rust-toolchain</code> file with the contents:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">1.47.0\n</span></code></pre>\n<p>Our <code>Cargo.toml</code> file will be pretty vanilla, just adding in the dependency on <code>actix-web</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">[</span><span style=\"color:#b58900;\">package</span><span style=\"color:#657b83;\">]\n</span><span style=\"color:#268bd2;\">name </span><span style=\"color:#657b83;\">= </span><span style=\"color:#839496;\">&quot;</span><span style=\"color:#2aa198;\">windows-docker-web</span><span style=\"color:#839496;\">&quot;\n</span><span style=\"color:#268bd2;\">version </span><span style=\"color:#657b83;\">= </span><span style=\"color:#839496;\">&quot;</span><span style=\"color:#2aa198;\">0.1.0</span><span style=\"color:#839496;\">&quot;\n</span><span style=\"color:#268bd2;\">authors </span><span 
style=\"color:#657b83;\">= [</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Michael Snoyman <msnoyman@fpcomplete.com></span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">]\n</span><span style=\"color:#268bd2;\">edition </span><span style=\"color:#657b83;\">= </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">2018</span><span style=\"color:#839496;\">"\n\n</span><span style=\"color:#657b83;\">[</span><span style=\"color:#b58900;\">dependencies</span><span style=\"color:#657b83;\">]\n</span><span style=\"color:#268bd2;\">actix-web </span><span style=\"color:#657b83;\">= </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">3.1</span><span style=\"color:#839496;\">"\n</span></code></pre>\n<p>If you want to see the <code>Cargo.lock</code> file I compiled with, it's <a href=\"https://github.com/fpco/windows-docker-web/blob/f8a3192e63f2e699cc67716488a633f5e0893446/Cargo.lock\">available in the source repo</a>.</p>\n<p>And finally, the actual code in <code>src/main.rs</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">actix_web::{get, web, App, HttpServer};\n</span><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">std::sync::Mutex;\n\n</span><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">AppState </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">counter</span><span style=\"color:#657b83;\">: Mutex<</span><span style=\"color:#268bd2;\">i32</span><span style=\"color:#657b83;\">>,\n}\n\n#[</span><span style=\"color:#268bd2;\">get</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">/</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">)]\nasync </span><span style=\"color:#268bd2;\">fn </span><span 
style=\"color:#b58900;\">index</span><span style=\"color:#657b83;\">(</span><span style=\"color:#268bd2;\">data</span><span style=\"color:#657b83;\">: web::Data<AppState>) -> String {\n </span><span style=\"color:#268bd2;\">let </span><span style=\"color:#586e75;\">mut</span><span style=\"color:#657b83;\"> counter = data.counter.</span><span style=\"color:#859900;\">lock</span><span style=\"color:#657b83;\">().</span><span style=\"color:#859900;\">unwrap</span><span style=\"color:#657b83;\">();\n *counter += </span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">;\n </span><span style=\"color:#859900;\">format!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Counter is at </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, counter)\n}\n\n#[</span><span style=\"color:#268bd2;\">actix_web</span><span style=\"color:#657b83;\">::</span><span style=\"color:#268bd2;\">main</span><span style=\"color:#657b83;\">]\nasync </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() -> std::io::</span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><()> {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> host = </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">0.0.0.0:8080</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">;\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Trying to listen on </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, host);\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> app_state = web::Data::new(AppState {\n 
counter: Mutex::new(</span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">),\n });\n HttpServer::new(</span><span style=\"color:#586e75;\">move </span><span style=\"color:#859900;\">|| </span><span style=\"color:#657b83;\">App::new().</span><span style=\"color:#859900;\">app_data</span><span style=\"color:#657b83;\">(app_state.</span><span style=\"color:#859900;\">clone</span><span style=\"color:#657b83;\">()).</span><span style=\"color:#859900;\">service</span><span style=\"color:#657b83;\">(index))\n .</span><span style=\"color:#859900;\">bind</span><span style=\"color:#657b83;\">(host)</span><span style=\"color:#859900;\">?\n </span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">run</span><span style=\"color:#657b83;\">()\n .await\n}\n</span></code></pre>\n<p>This code creates an application state (a mutex of an <code>i32</code>), defines a single <code>GET</code> handler that increments that variable and prints the current value, and then hosts this on <code>0.0.0.0:8080</code>. Not too shabby.</p>\n<p>If you're following along with the code, now would be a good time to <code>cargo run</code> and make sure you're able to load up the site on your <code>localhost:8080</code>.</p>\n<h2 id=\"dockerfile\">Dockerfile</h2>\n<p>If this is your first foray into Windows Containers, you may be surprised to hear me say "Dockerfile." Windows Container images can be built with the same kind of Dockerfiles you're used to from the Linux world. This even supports more advanced features, such as multistage Dockerfiles, which we're going to take advantage of here.</p>\n<p>There are a number of different base images provided by Microsoft for Windows Containers. We're going to be using Windows Server Core. It provides enough capabilities for installing Rust dependencies (which we'll see shortly), without including too much unneeded extras. 
Nanoserver is a much more lightweight image, but it doesn't play nicely with the Microsoft Visual C++ runtime we're using for the <code>-msvc</code> Rust target.</p>\n<p><strong>NOTE</strong> I've elected to use the <code>-msvc</code> target here instead of <code>-gnu</code> for two reasons. Firstly, it's closer to the actual use cases we need to support in Kube360, and therefore made a better test case. Also, as the default target for Rust on Windows, it seemed appropriate. It should be possible to set up a more minimal nanoserver-based image based on the <code>-gnu</code> target, if someone's interested in a "fun" side project.</p>\n<p>The <a href=\"https://github.com/fpco/windows-docker-web/blob/f8a3192e63f2e699cc67716488a633f5e0893446/Dockerfile\">complete Dockerfile is available on Github</a>, but let's step through it more carefully. As mentioned, we'll be performing a multistage build. We'll start with the build image, which will install the Rust build toolchain and compile our application. 
We start off by using the Windows Server Core base image and switching the shell back to the standard <code>cmd.exe</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">FROM mcr.microsoft.com/windows/servercore:1809 as build\n\n# Restore the default Windows shell for correct batch processing.\nSHELL ["cmd", "/S", "/C"]\n</span></code></pre>\n<p>Next we're going to install the Visual Studio buildtools necessary for building Rust code:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\"># Download the Build Tools bootstrapper.\nADD https://aka.ms/vs/16/release/vs_buildtools.exe /vs_buildtools.exe\n\n# Install Build Tools with the Microsoft.VisualStudio.Workload.AzureBuildTools workload,\n# excluding workloads and components with known issues.\nRUN vs_buildtools.exe --quiet --wait --norestart --nocache \\\n --installPath C:\\BuildTools \\\n --add Microsoft.Component.MSBuild \\\n --add Microsoft.VisualStudio.Component.Windows10SDK.18362 \\\n --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64\t\\\n || IF "%ERRORLEVEL%"=="3010" EXIT 0\n</span></code></pre>\n<p>And then we'll modify the entrypoint to include the environment modifications necessary to use those buildtools:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\"># Define the entry point for the docker container.\n# This entry point starts the developer command prompt and launches the PowerShell shell.\nENTRYPOINT ["C:\\\\BuildTools\\\\Common7\\\\Tools\\\\VsDevCmd.bat", "&&", "powershell.exe", "-NoLogo", "-ExecutionPolicy", "Bypass"]\n</span></code></pre>\n<p>Next up is installing <code>rustup</code>, which is fortunately pretty easy:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">RUN curl -fSLo rustup-init.exe https://win.rustup.rs/x86_64\nRUN start /w rustup-init.exe -y -v && echo "Error level is %ERRORLEVEL%"\nRUN del rustup-init.exe\n\nRUN setx /M PATH 
"C:\\Users\\ContainerAdministrator\\.cargo\\bin;%PATH%"\n</span></code></pre>\n<p>Then we copy over the relevant source files and kick off a build, storing the generated executable in <code>c:\\output</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">COPY Cargo.toml /project/Cargo.toml\nCOPY Cargo.lock /project/Cargo.lock\nCOPY rust-toolchain /project/rust-toolchain\nCOPY src/ /project/src\nRUN cargo install --path /project --root /output\n</span></code></pre>\n<p>And with that, we're done with our build! Time to jump over to our runtime image. We don't need the Visual Studio buildtools in this image, but we do need the Visual C++ runtime:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">FROM mcr.microsoft.com/windows/servercore:1809\n\nADD https://download.microsoft.com/download/6/A/A/6AA4EDFF-645B-48C5-81CC-ED5963AEAD48/vc_redist.x64.exe /vc_redist.x64.exe\nRUN c:\\vc_redist.x64.exe /install /quiet /norestart\n</span></code></pre>\n<p>With that in place, we can copy over our executable from the build image and set it as the default <code>CMD</code> in the image:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">COPY --from=build c:/output/bin/windows-docker-web.exe /\n\nCMD ["/windows-docker-web.exe"]\n</span></code></pre>\n<p>And just like that, we've got a real life Windows Container. If you'd like to, you can test it out yourself by running:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">> docker run --rm -p 8080:8080 fpco/windows-docker-web:f8a3192e63f2e699cc67716488a633f5e0893446\n</span></code></pre>\n<p>If you connect to port 8080, you should see our painfully simple app. Hurrah!</p>\n<h2 id=\"building-with-github-actions\">Building with Github Actions</h2>\n<p>One of the nice things about using a multistage Dockerfile for performing the build is that our CI scripts become very simple. 
Instead of needing to set up an environment with the correct build tools or any other configuration, our script:</p>\n<ul>\n<li>Logs into the Docker Hub registry</li>\n<li>Performs a <code>docker build</code></li>\n<li>Pushes to the Docker Hub registry</li>\n</ul>\n<p>The downside is that there is no build caching at play with this setup. There are multiple methods to mitigate this problem, such as creating helper build images that pre-bake the dependencies. Alternatively, you can perform the builds on the CI host and use the Dockerfile only for generating the runtime image. Those are interesting tweaks to try out another time. </p>\n<p>Sticking with the simple multistage approach, though, we have the following in our <code>.github/workflows/container.yml</code> file:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">Build a Windows container\n\n</span><span style=\"color:#b58900;\">on</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">push</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">branches</span><span style=\"color:#657b83;\">: [</span><span style=\"color:#2aa198;\">master</span><span style=\"color:#657b83;\">]\n\n</span><span style=\"color:#268bd2;\">jobs</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">build</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">runs-on</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">windows-latest\n\n steps</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">uses</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">actions/checkout@v1\n\n </span><span style=\"color:#657b83;\">- </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">Build and push\n shell</span><span 
style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">bash\n run</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">|\n</span><span style=\"color:#2aa198;\"> echo "${{ secrets.DOCKER_HUB_TOKEN }}" | docker login --username fpcojenkins --password-stdin\n IMAGE_ID=fpco/windows-docker-web:$GITHUB_SHA\n docker build -t $IMAGE_ID .\n docker push $IMAGE_ID\n</span></code></pre>\n<p>I like following the convention of tagging my images with the Git SHA of the commit. Other people prefer different tagging schemes; it's all up to you.</p>\n<h2 id=\"manifest-files\">Manifest files</h2>\n<p>Now that we have a working Windows Container image, the next step is to deploy it to our Kube360 cluster. Generally, we use ArgoCD and Kustomize for managing app deployments within Kube360, which lets us keep a very nice GitOps workflow. For this blog post, however, I'll show you the raw manifest files. It will also let us play with the <code>k3</code> command line tool, which happens to be written in Rust.</p>\n<p>First we'll have a Deployment manifest to manage the pods running the application itself. Since this is a simple Rust application, we can put very low resource limits on this. We're going to disable the Istio sidecar, since it's not compatible with Windows. We're going to ask Kubernetes to use the Windows machines to host these pods. And we're going to set up some basic health checks. 
All told, this is what our manifest file looks like:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">apiVersion</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">apps/v1\nkind</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">Deployment\nmetadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">windows-docker-web\n labels</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">app.kubernetes.io/component</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">webserver\nspec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">replicas</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">1\n </span><span style=\"color:#268bd2;\">minReadySeconds</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">5\n </span><span style=\"color:#268bd2;\">selector</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">matchLabels</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">app.kubernetes.io/component</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">webserver\n template</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">metadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">labels</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">app.kubernetes.io/component</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">webserver\n annotations</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">sidecar.istio.io/inject</span><span style=\"color:#657b83;\">: </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">false</span><span 
style=\"color:#839496;\">"\n </span><span style=\"color:#268bd2;\">spec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">runtimeClassName</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">windows-2019\n containers</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">windows-docker-web\n image</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">fpco/windows-docker-web:f8a3192e63f2e699cc67716488a633f5e0893446\n ports</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">http\n containerPort</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">8080\n </span><span style=\"color:#268bd2;\">readinessProbe</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">httpGet</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">path</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">/\n port</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">8080\n </span><span style=\"color:#268bd2;\">initialDelaySeconds</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">10\n </span><span style=\"color:#268bd2;\">periodSeconds</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">10\n </span><span style=\"color:#268bd2;\">livenessProbe</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">httpGet</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">path</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">/\n port</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">8080\n </span><span 
style=\"color:#268bd2;\">initialDelaySeconds</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">10\n </span><span style=\"color:#268bd2;\">periodSeconds</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">10\n </span><span style=\"color:#268bd2;\">resources</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">requests</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">memory</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">128Mi\n cpu</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">100m\n limits</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">memory</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">128Mi\n cpu</span><span style=\"color:#657b83;\">: </span><span style=\"color:#2aa198;\">100m\n</span></code></pre>\n<p>Awesome, that's by far the most complicated of the three manifests. 
Next we'll put a fairly stock-standard Service in front of that deployment:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">apiVersion</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">v1\nkind</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">Service\nmetadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">windows-docker-web\n labels</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">app.kubernetes.io/component</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">webserver\nspec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">ports</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">http\n port</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">80\n </span><span style=\"color:#268bd2;\">targetPort</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">http\n type</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">ClusterIP\n selector</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">app.kubernetes.io/component</span><span style=\"color:#657b83;\">: </span><span style=\"color:#2aa198;\">webserver\n</span></code></pre>\n<p>This exposes a service on port 80, and targets the <code>http</code> port (port 8080) inside the deployment. Finally, we have our Ingress. Kube360 uses external DNS to automatically set DNS records, and cert-manager to automatically grab TLS certificates. 
Our manifest looks like this:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">apiVersion</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">networking.k8s.io/v1beta1\nkind</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">Ingress\nmetadata</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">annotations</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">cert-manager.io/cluster-issuer</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">letsencrypt-ingress-prod\n kubernetes.io/ingress.class</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">nginx\n nginx.ingress.kubernetes.io/force-ssl-redirect</span><span style=\"color:#657b83;\">: </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">true</span><span style=\"color:#839496;\">"\n </span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">windows-docker-web\nspec</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">rules</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">host</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">windows-docker-web.az.fpcomplete.com\n http</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">paths</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">backend</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#268bd2;\">serviceName</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">windows-docker-web\n servicePort</span><span style=\"color:#657b83;\">: </span><span style=\"color:#6c71c4;\">80\n </span><span style=\"color:#268bd2;\">tls</span><span style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">hosts</span><span 
style=\"color:#657b83;\">:\n - </span><span style=\"color:#268bd2;\">windows-docker-web.az.fpcomplete.com\n secretName</span><span style=\"color:#657b83;\">: </span><span style=\"color:#2aa198;\">windows-docker-web-tls\n</span></code></pre>\n<p>Now that we have our application inside a Docker image, and we have our manifest files to instruct Kubernetes on how to run it, we just need to deploy these manifests and we'll be done.</p>\n<h2 id=\"launch\">Launch</h2>\n<p>With our manifests in place, we can finally deploy them. You can use <code>kubectl</code> directly to do this. Since I'm deploying to Kube360, I'm going to use the <code>k3</code> command line tool, which automates the process of logging in, getting temporary Kubernetes credentials, and providing those to the <code>kubectl</code> command via an environment variable. These steps could be run on Windows, Mac, or Linux. But since we've done the rest of this post on Windows, I'll use my Windows machine for this too.</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">> k3 init test.az.fpcomplete.com\n> k3 kubectl apply -f deployment.yaml\nWeb browser opened to https://test.az.fpcomplete.com/k3-confirm?nonce=c1f764d8852f4ff2a2738fb0a2078e68\nPlease follow the login steps there (if needed).\nThen return to this terminal.\nPolling the server. Please standby.\nChecking ...\nThanks, got the token response. Verifying token is valid\nRetrieving a kubeconfig for use with k3 kubectl\nKubeconfig retrieved. You are now ready to run kubectl commands with `k3 kubectl ...`\ndeployment.apps/windows-docker-web created\n> k3 kubectl apply -f ingress.yaml\ningress.networking.k8s.io/windows-docker-web created\n> k3 kubectl apply -f service.yaml\nservice/windows-docker-web created\n</span></code></pre>\n<p>I told <code>k3</code> to use the <code>test.az.fpcomplete.com</code> cluster. 
On the first <code>k3 kubectl</code> call, it detected that I did not have valid credentials for the cluster, and opened up my browser to a page that allowed me to log in. One of the design goals in Kube360 is to strongly leverage existing identity providers, such as Azure AD, Google Directory, Okta, Microsoft 365, and others. This is not only more secure than copy-pasting <code>kubeconfig</code> files with permanent credentials around, but more user friendly. As you can see, the process above was pretty automated.</p>\n<p>It's easy enough to check that the pods are actually running and healthy:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">> k3 kubectl get pods\nNAME READY STATUS RESTARTS AGE\nwindows-docker-web-5687668cdf-8tmn2 1/1 Running 0 3m2s\n</span></code></pre>\n<p>Initially, the ingress controller looked like this while it was getting TLS certificates:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">> k3 kubectl get ingress\nNAME CLASS HOSTS ADDRESS PORTS AGE\ncm-acme-http-solver-zlq6j <none> windows-docker-web.az.fpcomplete.com 80 0s\nwindows-docker-web <none> windows-docker-web.az.fpcomplete.com 80, 443 3s\n</span></code></pre>\n<p>And after cert-manager gets the TLS certificate, it will switch over to:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">> k3 kubectl get ingress\nNAME CLASS HOSTS ADDRESS PORTS AGE\nwindows-docker-web <none> windows-docker-web.az.fpcomplete.com 52.151.225.139 80, 443 90s\n</span></code></pre>\n<p>And finally, our site is live! 
Hurrah, a Rust web application compiled for Windows and running on Kubernetes inside Azure.</p>\n<p><strong>NOTE</strong> Depending on when you read this post, the web app may or may not still be live, so don't be surprised if you don't get a response if you try to connect to that host.</p>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>This post was a bit light on actual Rust code, but heavy on a lot of Windows scripting. As I think many Rustaceans already know, the dev experience for Rust on Windows is top notch. What may not have been obvious is how pleasant the Docker experience is on Windows. There are definitely some pain points, like the large images involved and needing to install the VC runtime. But overall, with a bit of cargo-culting, it's not too bad. And finally, having a cluster with Windows support ready via Kube360 makes deployment a breeze.</p>\n<p>If anyone has follow up questions about anything here, please <a href=\"https://twitter.com/snoyberg\">reach out to me on Twitter</a> or <a href=\"https://www.fpcomplete.com/contact-us/\">contact our team at FP Complete</a>. 
In addition to our <a href=\"https://www.fpcomplete.com/products/kube360/\">Kube360 product offering</a>, FP Complete provides many related services, including:</p>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/platformengineering/\">DevOps consulting</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/\">Rust consulting and training</a></li>\n<li><a href=\"https://www.fpcomplete.com/services/\">General training and consulting services</a></li>\n<li><a href=\"https://www.fpcomplete.com/haskell/\">Haskell consulting and training</a></li>\n</ul>\n<p>If you liked this post, please check out some related posts:</p>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">Deploying Rust with Docker and Kubernetes</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/rust-for-devops-tooling/\">Using Rust for DevOps tooling</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/crash-course/\">The Rust Crash Course eBook</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/\">DevOps for (Skeptical) Developers</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/understanding-cloud-auth/\">Understanding cloud auth</a></li>\n</ul>\n",
"permalink": "https://www.fpcomplete.com/blog/rust-kubernetes-windows/",
"slug": "rust-kubernetes-windows",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Deploying Rust with Windows Containers on Kubernetes",
"description": "An example of deploying Rust inside a Windows Container as a web service hosted on Kubernetes",
"updated": null,
"date": "2020-10-26",
"year": 2020,
"month": 10,
"day": 26,
"taxonomies": {
"categories": [
"functional programming",
"devops"
],
"tags": [
"rust",
"devops",
"kubernetes"
]
},
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/rust.png",
"image": "images/blog/rust-windows-kube360.png"
},
"path": "blog/rust-kubernetes-windows/",
"components": [
"blog",
"rust-kubernetes-windows"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "prereqs",
"permalink": "https://www.fpcomplete.com/blog/rust-kubernetes-windows/#prereqs",
"title": "Prereqs",
"children": []
},
{
"level": 2,
"id": "the-rust-application",
"permalink": "https://www.fpcomplete.com/blog/rust-kubernetes-windows/#the-rust-application",
"title": "The Rust application",
"children": []
},
{
"level": 2,
"id": "dockerfile",
"permalink": "https://www.fpcomplete.com/blog/rust-kubernetes-windows/#dockerfile",
"title": "Dockerfile",
"children": []
},
{
"level": 2,
"id": "building-with-github-actions",
"permalink": "https://www.fpcomplete.com/blog/rust-kubernetes-windows/#building-with-github-actions",
"title": "Building with Github Actions",
"children": []
},
{
"level": 2,
"id": "manifest-files",
"permalink": "https://www.fpcomplete.com/blog/rust-kubernetes-windows/#manifest-files",
"title": "Manifest files",
"children": []
},
{
"level": 2,
"id": "launch",
"permalink": "https://www.fpcomplete.com/blog/rust-kubernetes-windows/#launch",
"title": "Launch",
"children": []
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://www.fpcomplete.com/blog/rust-kubernetes-windows/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 2573,
"reading_time": 13,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/paradigm-shift-key-to-competing.md",
"content": "<p>It used to be that being technically mature was thought to be a good thing; now, that view is not so cut and dried. As you look at topics like containerization, cloud migration, and DevOps, it is easy to see why young companies get to claim the term “Cloud Native.” At the same time, those who have been in business for decades are frequently relegated to the legions of those needing ‘transforming.’ While this is, of course, an overgeneralization, it feels right more often than not. So, what are the ‘mature’ to do? </p>\n<p>In talking to several older small and medium-sized businesses, we have found a few strategic changes that help propel those thinking about tech ‘transformation’ into becoming better, faster, more cost-effective, and more secure. These strategies include focusing on containerizing business logic, cloud-enabling their enterprise, and taking a fresh look at open source offerings for their infrastructure. If we look at these topics from an executive seat rather than an engineering one, a path and a plan emerge. </p>\n<a href=\"/devops/why-what-how/\">\n<p style=\"text-align:center;font-size:2em;border-width: 3px 0;border-color:#ff8d6e;border-style: dashed;margin:1em 0;padding:0.25em 0;font-weight: bold\">\nCheck Out The Why, What, and How of DevSecOps\n</p>\n</a>\n<p>Containerization is not a new topic; it has just evolved. We have all gone from monolithic solutions to distributed computing. From there, we bought small Linux servers, and they felt like containers; then, virtualization came to market, and the VM became the new container. Now, we have Docker and Kubernetes. Docker containers represent a considerable paradigm shift in that they do not require a lot of hardware or yet another OS license…, and when managed by Kubernetes, they create an entire ecosystem with little overhead. Kubernetes takes Docker containers and handles horizontal scaling, fault tolerance, automated monitoring, etc. within a DevOps toolset and framework. 
What makes this setup even more impressive is that it is Open Source, yet supported by ‘the most prominent’ tech infrastructure firms. </p>\n<p>Once we start embracing modern container architectures, the conversation gets fascinating. All cloud and virtualization providers are now battling each other to get customers to deploy these standardized workloads onto their proprietary platforms. While there are always a few complications, Docker and Kubernetes run on AWS, Azure, VMware, GCP, etc., with little (or no) alteration if you follow the Open Source path. </p>\n<p>So imagine....once we were trying to figure out how to build in fault tolerance, scalability, continuous development/deployment, and automated testing.....now all we need to do is follow a DevOps approach using Open Source frameworks like Docker and Kubernetes....and voila....you are there (well it isn’t that easy....but a darn sight easier than it used to be). Oh....and by the way, all of this is far easier to deploy in the cloud than on-premise, but that is a topic for another day. </p>\n<p><a href=\"https://www.fpcomplete.com/platformengineering/why-what-how/\"><img src=\"/images/cta/why-what-how.png\" alt=\"A Quick Guide to DevOps\" /></a></p>\n",
"permalink": "https://www.fpcomplete.com/blog/paradigm-shift-key-to-competing/",
"slug": "paradigm-shift-key-to-competing",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "A Paradigm Shift is Key to Competing",
"description": "",
"updated": null,
"date": "2020-10-16",
"year": 2020,
"month": 10,
"day": 16,
"taxonomies": {
"tags": [
"devops",
"insights"
],
"categories": [
"devops",
"insights"
]
},
"extra": {
"author": "Wes Crook",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/paradigm-shift-key-to-competing/",
"components": [
"blog",
"paradigm-shift-key-to-competing"
],
"summary": null,
"toc": [],
"word_count": 499,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/devops-in-the-enterprise.md",
"content": "<p>Is it Enterprise DevOps or DevOps in the enterprise? I guess it all depends on where you sit. DevOps has been a significant change to how many modern technology organizations approach systems development and support. While many have found it to be a major productivity boost, it represents a threat to "BTTWWHADI" evangelists in some organizations. Let's start with two definitions: </p>\n<ul>\n<li>\n<p>DevOps: DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is complementary with Agile software development; several DevOps aspects came from Agile methodology. Credit: https://en.wikipedia.org/wiki/DevOps </p>\n</li>\n<li>\n<p>BTTWWHADI: This is shorthand for "But That's The Way We Have Always Done It." Credit: Unknown </p>\n</li>\n</ul>\n<h2 id=\"where-we-come-from\">Where we come from...</h2>\n<p>If we look at some successful Enterprise technology areas, they have had long-term success by sticking with what works. Cleanly partitioned technical responsibilities (analysts, developers, DBAs, network admins, sysadmins, etc.), a waterfall approach to development, a "stay in your lane" accountability matrix (e.g., you write the app, I'll get it platformed), a rack 'em and stack 'em approach to hardware, etc.</p>\n<p>While no one can deny this type of discipline has served many well, Enterprise technology's current generation offers us a much more flexible approach. Today, virtually all hardware is virtualized (on and off-premise), and cloud vendors offer things like platforms as a service, databases as a service, security as a service...etc. 
These innovations have allowed many companies to completely re-think how they want to be spending their technology resources (budget, people, mindshare)….with the most enlightened organizations quickly concluding that they should spend their human capital in spaces where they can create competitive advantages while purchasing those parts of their technology ecosystem that are more commoditized.</p>\n<p>An example of this would be a retail company thinking more about creating business intelligence than about setting up new hardware for a database server. A database can be scaled in the cloud, leaving the retail enterprise more human capital to figure out how to drive revenue. Those who are not embracing the change DevOps affords are most often using a BTTWWHADI argument. </p>\n<h2 id=\"not-everyone-is-ready-for-a-revolution\">Not everyone is ready for a revolution...</h2>\n<p>So, if DevOps is such a revolution, why do so many corporations have such an issue trying to get DevOps strategies to work for them? The answer lies in culture. For DevOps to be effective, an organization needs to be willing to take out a blank sheet of paper and draw a picture of what could be if they tore down yesterday's constraints and looked toward today's innovations. They need to match that picture up against their current staff and recognize that many jobs (and many skills) need to be re-learned or acquired. No longer is so much specialization required in many specific fixed assets (like data centers, computers, network devices, security devices, etc.). In a modern DevOps world, much of the infrastructure is virtualized (giving rise to infrastructure as code). </p>\n<p>To some extent, this means that your infrastructure staff will start to look more and more like developers. Instead of a team plugging servers, routers, and load balancers into a network backbone, they will be using scripting to configure equivalent services on virtualized hardware. 
On the development and operational side, CI/CD pipelines and process automation drive out many manual processes involved in yesterday's software development lifecycle. For development, the beginnings of this revolution date back to test-driven development. Today's modern pipelines go from development through testing, integration, and deployment. While everything is automatable, many have stopping points in their pipeline where human interaction is required to review test results or to confirm final deployments to production. Whether you are in infrastructure or development, BTTWWHADI just won't do anymore. To compete, everyone will need to skill up and focus on architecture, automation, XaaS, and scripting/coding to decrease time to market while improving quality and resilience. </p>\n<a href=\"/devops/creating-ecosystem/\">\n<p style=\"text-align:center;font-size:2em;border-width: 3px 0;border-color:#ff8d6e;border-style: dashed;margin:1em 0;padding:0.25em 0;font-weight: bold\">\nLearn How to Create Your DevOps Ecosystem\n</p>\n</a>\n<h2 id=\"so-what-s-the-big-deal\">So, what's the big deal…</h2>\n<p>DevOps can be a threat to those who aren't ready for it (the BTTWWHADI crowd). If your job is configuring hardware or running manual software tests, you might see these functions being automated into 'coding' jobs. This function change could pose a severe career problem for those team members who don't see this evolution coming and fail to get prepared through education and training. Unprepared staff becomes resistant to change (understandably), yet those who are prepared end up in a better position (read: more career security, mobility, and better pay) as automation experts are now far more sought after than traditional hardware configuration engineers (as a gross generalization). 
Please do not misunderstand; traditional system engineers are still valuable members of most enterprise teams, but as DevOps and virtualization take hold, those jobs will change. Get prepared, train your staff, and address the culture change head-on. </p>\n<p>If you need help with your journey, <a href=\"https://www.fpcomplete.com/contact-us/\">contact FP Complete</a>. This is who we are and what we do. </p>\n<p><a href=\"https://www.fpcomplete.com/platformengineering/creating-ecosystem/\"><img src=\"/images/cta/creating-devops-ecosystem.png\" alt=\"A Quick Guide to DevOps\" /></a></p>\n",
"permalink": "https://www.fpcomplete.com/blog/devops-in-the-enterprise/",
"slug": "devops-in-the-enterprise",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevOps in the Enterprise: What could be better? What could go wrong?",
"description": "",
"updated": null,
"date": "2020-10-09",
"year": 2020,
"month": 10,
"day": 9,
"taxonomies": {
"categories": [
"devops",
"insights"
],
"tags": [
"devops",
"insights"
]
},
"extra": {
"author": "Wes Crook",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/devops-in-the-enterprise/",
"components": [
"blog",
"devops-in-the-enterprise"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "where-we-come-from",
"permalink": "https://www.fpcomplete.com/blog/devops-in-the-enterprise/#where-we-come-from",
"title": "Where we come from...",
"children": []
},
{
"level": 2,
"id": "not-everyone-is-ready-for-a-revolution",
"permalink": "https://www.fpcomplete.com/blog/devops-in-the-enterprise/#not-everyone-is-ready-for-a-revolution",
"title": "Not everyone is ready for a revolution...",
"children": []
},
{
"level": 2,
"id": "so-what-s-the-big-deal",
"permalink": "https://www.fpcomplete.com/blog/devops-in-the-enterprise/#so-what-s-the-big-deal",
"title": "So, what's the big deal…",
"children": []
}
],
"word_count": 880,
"reading_time": 5,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/cloud-for-non-natives.md",
"content": "<p>Does this mean if you weren't born in the cloud, you'll never be as good as those who are? </p>\n<p>When thinking about building from scratch or modernizing an existing technology environment, we tend to see one of a few different things happening: </p>\n<ul>\n<li>Staff will read up and try it on their own. </li>\n<li>Managers will hire someone who says they have done it before. </li>\n<li>Leaders will engage a large software vendor or consulting firm to help get them to the promised land. </li>\n</ul>\n<p>While all of these strategies can work, we often find one of the following happens: </p>\n<ul>\n<li>Trial and error results in very expensive under-delivery. </li>\n<li>Existing teams become disaffected and resistant because they perceive they are being left behind. </li>\n<li>Something gets delivered, but costs go up, and reliability goes down. </li>\n<li>New hires come in, make the magic happen, and then move on without leaving enough know-how to continue without them. </li>\n<li>Vendors use proprietary software, and a new age of vendor lock-in ensues. </li>\n</ul>\n<p>There is a better way of approaching modernizing a business-focused, legacy world. Our core approach at FP Complete is: </p>\n<ul>\n<li>Be vendor-agnostic </li>\n<li>Build a road map based on business outcomes </li>\n<li>Deeply understand and implement DevOps concepts </li>\n<li>Be ruthlessly focused on architecture from the start </li>\n<li>Containerize everything* </li>\n<li>Virtualize everything*</li>\n</ul>\n<p>While this approach is straightforward, staying focused on outcomes is the key: </p>\n<ul>\n<li>The business logic is the key: build your ecosystem once, and properly, so you can focus on what matters. </li>\n<li>Integrate security by design, as security is non-negotiable. </li>\n<li>Centralize all alerts and logs, as managing and operating with complete transparency is key. </li>\n<li>Ensure containers are made to scale horizontally and be fault-tolerant from the start. 
</li>\n<li>Ensure you are on-prem and cloud-agnostic. </li>\n<li>Be open-source but get enterprise support. </li>\n</ul>\n<a href=\"/devops/quick-guide/\">\n<p style=\"text-align:center;font-size:2em;border-width: 3px 0;border-color:#ff8d6e;border-style: dashed;margin:1em 0;padding:0.25em 0;font-weight: bold\">\nCheck out our Quick Guide to DevOps\n</p>\n</a>\n<p>How do you get help without breaking the bank, compromising your values, or getting locked in? </p>\n<p>At FP Complete, we believe the way to get started is to: </p>\n<ul>\n<li>Build DevOps expertise and acquire DevOps tooling. </li>\n<li>Get help constructing your roadmap to ensure technical focus aligns with business results. </li>\n<li>Get help designing how your applications will get containerized to be cloud-ready. </li>\n<li>Acquire enterprise support for your new open-source world. </li>\n</ul>\n<p>FP Complete has a unique track record in these activities. We are not built on recurring revenue from long-term consulting. We are built on helping our customers build better software, run better technology operations, and achieve better business outcomes. We come from diverse backgrounds and have served a myriad of industries. We often find that others have already solved many of our clients' problems, and our expertise lies in matching existing solutions to the places where they are needed most. </p>\n<p>So, what is the best way to get started? </p>\n<ol>\n<li>Send us an email or give us a call. </li>\n<li>We will walk through your aspirations and provide a high-level road map for achieving your goals at no cost. </li>\n<li>If you like what you see, invite us in for a POC based on a 100% ROI. </li>\n<li>Scale from there. </li>\n</ol>\n<p>If you are unsure about the claims in this post, shoot me an email; you won't get a bot response, you'll get me. </p>\n<p>*Note: the exceptions to these rules are usually around ultra-low latency requirements. 
</p>\n<p><a href=\"https://www.fpcomplete.com/platformengineering/quick-guide/\"><img src=\"/images/cta/quick-guide-devops.png\" alt=\"A Quick Guide to DevOps\" /></a></p>\n",
"permalink": "https://www.fpcomplete.com/blog/cloud-for-non-natives/",
"slug": "cloud-for-non-natives",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Cloud for Non-Natives",
"description": "Faster time to market and lower failure rate are the beginning of the many benefits DevOps offers companies. Discover the measurable metrics and KPIs, as well as the true business value DevOps offers.",
"updated": null,
"date": "2020-10-02",
"year": 2020,
"month": 10,
"day": 2,
"taxonomies": {
"categories": [
"devops",
"insights"
],
"tags": [
"devops",
"insights"
]
},
"extra": {
"author": "Wes Crook",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/cloud-for-non-natives/",
"components": [
"blog",
"cloud-for-non-natives"
],
"summary": null,
"toc": [],
"word_count": 598,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/rust-for-devops-tooling.md",
"content": "<p>A beginner's guide to writing your DevOps tools in Rust.</p>\n<h2 id=\"introduction\">Introduction</h2>\n<p>In this blog post we'll cover some basic DevOps use cases for Rust and why \nyou would want to use it.\nAs part of this, we'll also cover a few common libraries you will likely use\nin a Rust-based DevOps tool for AWS.</p>\n<p>If you're already familiar with writing DevOps tools in other languages,\nthis post will explain why you should try Rust.</p>\n<p>We'll cover why Rust is a particularly good choice of language to write your DevOps\ntooling and critical cloud infrastructure software in.\nAnd we'll also walk through a small demo DevOps tool written in Rust. \nThis project will be geared towards helping someone new to the language ecosystem \nget familiar with the Rust project structure.</p>\n<p>If you're brand new to Rust, and are interested in learning the language, you may want to start off with our <a href=\"https://www.fpcomplete.com/rust/crash-course/\">Rust Crash Course eBook</a>.</p>\n<h2 id=\"what-makes-the-rust-language-unique\">What Makes the Rust Language Unique</h2>\n<blockquote>\n<p>Rust is a systems programming language focused on three goals: safety, speed, \nand concurrency. It maintains these goals without having a garbage collector, \nmaking it a useful language for a number of use cases other languages aren’t \ngood at: embedding in other languages, programs with specific space and time \nrequirements, and writing low-level code, like device drivers and operating systems. </p>\n</blockquote>\n<p><em>The Rust Book (first edition)</em></p>\n<p>Rust was initially created by Mozilla and has since gained widespread adoption and\nsupport. 
As the quote from the Rust book alludes to, it was designed to fill the \nsame space that C++ or C would (in that it doesn’t have a garbage collector or a runtime).\nBut Rust also incorporates zero-cost abstractions and many concepts that you would\nexpect in a higher-level language (like Go or Haskell).\nFor that, and many other reasons, Rust's uses have expanded well beyond that\noriginal space as a safe, low-level systems language.</p>\n<p>Rust's ownership system is extremely useful in efforts to write correct and \nresource-efficient code. Ownership is one of the killer features of the Rust \nlanguage and helps programmers catch classes of resource errors at compile time \nthat other languages miss or ignore.</p>\n<p>Rust is an extremely performant and efficient language, comparable in speed to \nidiomatic everyday C or C++.\nAnd since there isn’t a garbage collector in Rust, it’s a lot easier to get \npredictable, deterministic performance.</p>\n<h2 id=\"rust-and-devops\">Rust and DevOps</h2>\n<p>What makes Rust unique also makes it very useful for areas ranging from robotics \nto rocketry, but are those qualities relevant for DevOps?\nDo we care if we have efficient executables or fine-grained control over \nresources, or is Rust a bit overkill for what we typically need in DevOps?</p>\n<p><em>Yes and no.</em></p>\n<p>Rust is clearly useful for situations where performance is crucial and actions \nneed to occur in a deterministic and consistent way. That obviously translates to \nlow-level places where previously C and C++ were the only game in town. \nIn those situations, before Rust, people simply had to accept the inherent risk and \nadditional development costs of working on a large code base in those languages.\nRust now allows us to operate in those areas, but without the risk that C and C++\ncan add.</p>\n<p>But with DevOps and infrastructure programming we aren't constrained by those \nrequirements. 
For DevOps we've been able to choose from languages like Go, Python, \nor Haskell because we're not strictly limited by the use case to languages without \ngarbage collectors. Since we can reach for other languages, you might argue \nthat using Rust is a bit overkill, but let's go over a few points to counter this.</p>\n<h3 id=\"why-you-would-want-to-write-your-devops-tools-in-rust\">Why you would want to write your DevOps tools in Rust</h3>\n<ul>\n<li>Small executables relative to other options like Go or Java</li>\n<li>Easy to port across different OS targets</li>\n<li>Efficient with resources (which helps cut down on your AWS bill) </li>\n<li>One of the fastest languages (even when compared to C)</li>\n<li>Zero-cost abstractions - Rust is a low-level, performant language which also\ngives us the benefits of a high-level language with its generics and abstractions.</li>\n</ul>\n<p>To elaborate on some of these points a bit further:</p>\n<h4 id=\"os-targets-and-cross-compiling-rust-for-different-architectures\">OS targets and Cross Compiling Rust for different architectures</h4>\n<p>For DevOps it's also worth mentioning the (relative) ease with which you can \nport your Rust code across different architectures and different OS's. 
</p>\n<p>Using the official Rust toolchain installer <code>rustup</code>, it's easy to get the \nstandard library for your target platform.\nRust <a href=\"https://doc.rust-lang.org/nightly/rustc/platform-support.html\">supports a great number of platforms</a>\nwith different tiers of support.\nThe docs for the <code>rustup</code> tool have <a href=\"https://rust-lang.github.io/rustup/cross-compilation.html\">a section</a>\ncovering how you can access pre-compiled artifacts for various architectures.\nTo install the target platform for an architecture (other than the host platform, which is installed by default)\nyou simply need to run <code>rustup target add</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">$ rustup target add x86_64-pc-windows-msvc \ninfo: downloading component 'rust-std' for 'x86_64-pc-windows-msvc'\ninfo: installing component 'rust-std' for 'x86_64-pc-windows-msvc'\n</span></code></pre>\n<p>Cross compilation is already built into the Rust compiler by default. \nOnce the <code>x86_64-pc-windows-msvc</code> target is installed, you can build for Windows \nwith the <code>cargo</code> build tool using the <code>--target</code> flag:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">cargo build --target=x86_64-pc-windows-msvc\n</span></code></pre>\n<p>(The default target is always the host architecture.)</p>\n<p>If one of your dependencies links to a native (i.e. non-Rust) library, you will\nneed to make sure that those cross-compile as well. Doing <code>rustup target add</code>\nonly installs the Rust standard library for that target. 
However, for the other \ntools that are often needed when cross-compiling, there is the handy\n<a href=\"https://github.com/rust-embedded/cross\">github.com/rust-embedded/cross</a> tool.\nThis is essentially a wrapper around cargo which does all cross compilation in \nDocker images that have all the necessary bits (linkers) and pieces installed.</p>\n<h4 id=\"small-executables\">Small Executables</h4>\n<p>A key feature of Rust is that it doesn't need a runtime or a garbage collector.\nCompare this to languages like Python or Haskell: with Rust, the lack of runtime\ndependencies (as with Python) or system libraries (as with Haskell) is a huge advantage \nfor portability.</p>\n<p>For practical purposes, as far as DevOps is concerned, this portability means \nthat Rust executables are much easier to deploy than scripts.\nWith Rust, compared to Python or Bash, we don't need to set up the environment for \nour code ahead of time. This frees us from having to worry about whether the runtime \ndependencies for the language are set up.</p>\n<p>In addition to that, with Rust you're able to produce 100% static executables for \nLinux using the MUSL libc (and by default Rust will statically link all Rust code). \nThis means that you can deploy your Rust DevOps tool's binaries across your Linux \nservers without having to worry whether the correct <code>libc</code> or other libraries were \ninstalled beforehand.</p>\n<p>Creating static executables for Rust is simple. 
As we saw before, when discussing\ndifferent OS targets, it's easy with Rust to switch the target you're building against.\nTo compile static executables for the Linux MUSL target, all you need to do is add \nthe <code>musl</code> target with:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">$ rustup target add x86_64-unknown-linux-musl\n</span></code></pre>\n<p>Then you can use this new target to build your Rust project as a fully static \nexecutable with:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">$ cargo build --target x86_64-unknown-linux-musl\n</span></code></pre>\n<p>As a result of not having a runtime or a garbage collector, Rust executables \ncan be extremely small. For example, there is a common DevOps tool called \nCredStash that was originally written in Python but has since been \nported to Go (GCredStash) and now Rust (RuCredStash).</p>\n<p>Comparing the executable sizes of the Rust versus Go implementations of CredStash,\nthe Rust executable is nearly a quarter of the size of the Go variant. </p>\n<table><thead><tr><th>Implementation</th><th>Executable Size</th></tr></thead><tbody>\n<tr><td>Rust CredStash: (RuCredStash Linux amd64)</td><td>3.3 MB</td></tr>\n<tr><td>Go CredStash: (GCredStash Linux amd64 v0.3.5)</td><td>11.7 MB</td></tr>\n</tbody></table>\n<p>Project links:</p>\n<ul>\n<li><a href=\"https://github.com/psibi/rucredstash\">github.com/psibi/rucredstash</a></li>\n<li><a href=\"https://github.com/winebarrel/gcredstash\">github.com/winebarrel/gcredstash</a></li>\n</ul>\n<p>This is by no means a perfect comparison, and 8 MB may not seem like a lot, but\nconsider the advantage of automatically having executables that are a quarter of the \nsize you would typically expect. 
</p>\n<p>This cuts down on the size your Docker images, AWS AMIs, or Azure VM images need\nto be - and that helps speed up the time it takes to spin up new deployments.</p>\n<p>With a tool of this size, the benefit of an executable that is 75% smaller than it \nwould be otherwise is not immediately apparent. On this scale the difference, 8 MB,\nis still quite cheap.\nBut with larger tools (or collections of tools and Rust-based software) the benefits\nadd up and the difference becomes a practical and worthwhile consideration.</p>\n<p>The Rust implementation was also not strictly written with the resulting size of \nthe executable in mind. So if executable size were an even more important \nfactor, other changes could be made - but that's beyond the scope of this post.</p>\n<h4 id=\"rust-is-fast\">Rust is fast</h4>\n<p>Rust is very fast, even for common idiomatic everyday Rust code. And not only that,\nit's arguably easier to work with than C and C++, and easier to catch errors in your \ncode.</p>\n<p>For the Fortunes benchmark (which exercises the ORM, \ndatabase connectivity, dynamic-size collections, sorting, server-side templates, \nXSS countermeasures, and character encoding) Rust is second and third, only lagging \nbehind the first-place C++ based framework by 4 percent. 
</p>\n<img src=\"/images/blog/techempower-benchmarks-round-19-fortunes.png\" style=\"max-width:95%\">\n<p>In the benchmark for database access for a single query, Rust is first and second:</p>\n<img src=\"/images/blog/techempower-benchmarks-round-19-single-query.png\" style=\"max-width:95%\">\n<p>And in a composite of all the benchmarks, Rust-based frameworks are in second and third place.</p>\n<img src=\"/images/blog/techempower-benchmarks-round-19-composite.png\" style=\"max-width:95%\">\n<p>Of course, language and framework benchmarks are not real life; however, this is \nstill a fair comparison of the languages as they relate to others (within the context \nand the focus of the benchmark).</p>\n<p>Source: <a href=\"https://www.techempower.com/benchmarks/\">https://www.techempower.com/benchmarks</a></p>\n<h3 id=\"why-would-you-not-want-to-write-your-devops-tools-in-rust\">Why would you not want to write your DevOps tools in Rust?</h3>\n<p>For medium to large projects, it’s important to have a type system and compile-time \nchecks like those in Rust versus what you would find in something like Python\nor Bash.\nThe latter languages let you get away with things far more readily. This makes \ndevelopment much "faster" in one sense.</p>\n<p>Certain situations, especially those involving small project codebases, would \nbenefit more from using an interpreted language. In these cases, being able to quickly \nchange pieces of the code without needing to re-compile and re-deploy the project\noutweighs the benefits (in terms of safety, execution speed, and portability)\nthat languages like Rust bring. 
</p>\n<p>Working with and iterating on a Rust codebase in those circumstances, with frequent\nbut small codebase changes, would be needlessly time-consuming.\nIf you have a small codebase with few or no runtime dependencies, then it wouldn't\nbe worth it to use Rust.</p>\n<h2 id=\"demo-devops-project-for-aws\">Demo DevOps Project for AWS</h2>\n<p>We'll briefly cover some of the libraries typically used for an AWS-focused \nDevOps tool in a walk-through of a small demo Rust project here. \nThis aims to provide a small example that uses some of the libraries you'll likely\nwant if you’re writing a CLI-based DevOps tool in Rust. Specifically, for this \nexample we'll show a tool that does some basic operations against AWS S3 \n(creating new buckets, adding files to buckets, listing the contents of buckets).</p>\n<h3 id=\"project-structure\">Project structure</h3>\n<p>For AWS integration we're going to utilize the <a href=\"https://www.rusoto.org/\">Rusoto</a> library.\nSpecifically, for our modest demo Rust DevOps tool we're going to pull in the \n<a href=\"https://docs.rs/rusoto_core/0.45.0/rusoto_core/\">rusoto_core</a> and the \n<a href=\"https://docs.rs/rusoto_s3/0.45.0/rusoto_s3/\">rusoto_s3</a> crates (in Rust a <em>crate</em>\nis akin to a library or package).</p>\n<p>We're also going to use the <a href=\"https://docs.rs/structopt/0.3.16/structopt/\">structopt</a> crate\nfor our CLI options. This is a handy, batteries-included CLI library that makes \nit easy to create a CLI interface around a Rust struct. 
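</p>\n<p>To illustrate the subcommand-to-enum pattern that <code>structopt</code> derives for you automatically, here is a hand-rolled, standard-library-only sketch. The enum and function names are simplified stand-ins invented for this post, not the actual types from the demo repository:</p>

```rust
// Hand-rolled sketch of the subcommand-to-enum pattern that structopt
// derives automatically from a struct/enum definition. Names here are
// illustrative stand-ins, not the demo repository's actual types.
#[derive(Debug, PartialEq)]
enum Opt {
    Create { bucket: String },
    Delete { bucket: String },
    List { bucket: String },
}

// Parse a subcommand and its bucket argument, mirroring the CLI shape
// `rust-aws-devops <SUBCOMMAND> <BUCKET>`.
fn parse_opt(args: &[&str]) -> Option<Opt> {
    match args {
        ["create", bucket] => Some(Opt::Create { bucket: bucket.to_string() }),
        ["delete", bucket] => Some(Opt::Delete { bucket: bucket.to_string() }),
        ["list", bucket] => Some(Opt::List { bucket: bucket.to_string() }),
        _ => None,
    }
}

fn main() {
    // Dispatch on the parsed option, as the demo tool does with `match opt`.
    match parse_opt(&["create", "demo-bucket"]) {
        Some(Opt::Create { bucket }) => {
            println!("Attempting to create a bucket called: {}", bucket)
        }
        _ => eprintln!("unrecognized command"),
    }
}
```

<p>With <code>structopt</code> you annotate the enum with <code>#[derive(StructOpt)]</code> instead of writing the parser by hand, and help text and error handling come for free. 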
</p>\n<p>The tool operates by matching the CLI option and arguments the user passes in \nwith a <a href=\"https://github.com/fpco/rust-aws-devops/blob/54d6cfa4bb7a9a15c2db52976f2b7057431e0c5e/src/main.rs#L211\"><code>match</code> expression</a>.</p>\n<p>We can then use this to match on that part of the CLI option struct we've defined \nand call the appropriate functions for that option.</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">match</span><span style=\"color:#657b83;\"> opt {\n Opt::Create { bucket: bucket_name } </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Attempting to create a bucket called: </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, bucket_name);\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> demo = S3Demo::new(bucket_name);\n </span><span style=\"color:#859900;\">create_demo_bucket</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">demo);\n },\n</span></code></pre>\n<p>This matches on the <a href=\"https://github.com/fpco/rust-aws-devops/blob/54d6cfa4bb7a9a15c2db52976f2b7057431e0c5e/src/main.rs#L182\"><code>Create</code></a>\nvariant of the <code>Opt</code> enum. 
</p>\n<p>We then use <code>S3Demo::new(bucket_name)</code> to create a new <code>S3Client</code> which we can\nuse in the standalone <code>create_demo_bucket</code> function that we've defined \nwhich will create a new S3 bucket.</p>\n<p>The tool is fairly simple with most of the code located in \n<a href=\"https://github.com/fpco/rust-aws-devops/blob/54d6cfa4bb7a9a15c2db52976f2b7057431e0c5e/src/main.rs\">src/main.rs</a></p>\n<h3 id=\"building-the-rust-project\">Building the Rust project</h3>\n<p>Before you build the code in this project, you will need to install Rust. \nPlease follow <a href=\"https://www.rust-lang.org/tools/install\">the official install instructions here</a>.</p>\n<p>The default build tool for Rust is called Cargo. It's worth getting familiar \nwith <a href=\"https://doc.rust-lang.org/cargo/guide/\">the docs for Cargo</a>\nbut here's a quick overview for building the project.</p>\n<p>To build the project run the following from the root of the \n<a href=\"https://github.com/fpco/rust-aws-devops\">git repo</a>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">cargo build\n</span></code></pre>\n<p>You can then use <code>cargo run</code> to run the code or execute the code directly\nwith <code>./target/debug/rust-aws-devops</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">$ ./target/debug/rust-aws-devops \n\nRunning tool\nRustAWSDevops 0.1.0\nMike McGirr <mike@fpcomplete.com>\n\nUSAGE:\n rust-aws-devops <SUBCOMMAND>\n\nFLAGS:\n -h, --help Prints help information\n -V, --version Prints version information\n\nSUBCOMMANDS:\n add-object Add the specified file to the bucket\n create Create a new bucket with the given name\n delete Try to delete the bucket with the given name\n delete-object Remove the specified object from the bucket\n help Prints this message or the help of the given subcommand(s)\n list Try to find the bucket with the given name and list its 
objects\n</span></code></pre>\n<p>This will output the nice CLI help text automatically created for us \nby <code>structopt</code>.</p>\n<p>If you're ready to build a release version (with optimizations turned on, which \nwill make compilation take slightly longer) run the following:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">cargo build --release\n</span></code></pre><h2 id=\"conclusion\">Conclusion</h2>\n<p>As this small demo showed, it's not difficult to get started using Rust to write\nDevOps tools. And even then we didn't need to make a trade-off between ease of\ndevelopment and performant, fast code. </p>\n<p>Hopefully the next time you're writing a new piece of DevOps software, \nanything from a simple CLI tool for a specific DevOps operation to the next Kubernetes, \nyou'll consider reaching for Rust.\nAnd if you have further questions about Rust, or need help implementing your Rust \nproject, please feel free to reach out to FP Complete for Rust engineering \nand training!</p>\n<p>Want to learn more Rust? Check out our <a href=\"https://www.fpcomplete.com/rust/crash-course/\">Rust Crash Course eBook</a>. And for more information, check out our <a href=\"https://www.fpcomplete.com/rust/\">Rust homepage</a>.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/",
"slug": "rust-for-devops-tooling",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Using Rust for DevOps tooling",
"description": "A beginner's guide to writing your DevOps tools in Rust.",
"updated": null,
"date": "2020-09-09",
"year": 2020,
"month": 9,
"day": 9,
"taxonomies": {
"tags": [
"devops",
"rust",
"insights"
],
"categories": [
"functional programming",
"devops"
]
},
"extra": {
"author": "Mike McGirr",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "blog/rust-for-devops-tooling/",
"components": [
"blog",
"rust-for-devops-tooling"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "introduction",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/#introduction",
"title": "Introduction",
"children": []
},
{
"level": 2,
"id": "what-makes-the-rust-language-unique",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/#what-makes-the-rust-language-unique",
"title": "What Makes the Rust Language Unique",
"children": []
},
{
"level": 2,
"id": "rust-and-devops",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/#rust-and-devops",
"title": "Rust and DevOps",
"children": [
{
"level": 3,
"id": "why-you-would-want-to-write-your-devops-tools-in-rust",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/#why-you-would-want-to-write-your-devops-tools-in-rust",
"title": "Why you would want to write your DevOps tools in Rust",
"children": [
{
"level": 4,
"id": "os-targets-and-cross-compiling-rust-for-different-architectures",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/#os-targets-and-cross-compiling-rust-for-different-architectures",
"title": "OS targets and Cross Compiling Rust for different architectures",
"children": []
},
{
"level": 4,
"id": "small-executables",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/#small-executables",
"title": "Small Executables",
"children": []
},
{
"level": 4,
"id": "rust-is-fast",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/#rust-is-fast",
"title": "Rust is fast",
"children": []
}
]
},
{
"level": 3,
"id": "why-would-you-not-want-to-write-your-devops-tools-in-rust",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/#why-would-you-not-want-to-write-your-devops-tools-in-rust",
"title": "Why would you not want to write your DevOps tools in Rust?",
"children": []
}
]
},
{
"level": 2,
"id": "demo-devops-project-for-aws",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/#demo-devops-project-for-aws",
"title": "Demo DevOps Project for AWS",
"children": [
{
"level": 3,
"id": "project-structure",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/#project-structure",
"title": "Project structure",
"children": []
},
{
"level": 3,
"id": "building-the-rust-project",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/#building-the-rust-project",
"title": "Building the Rust project",
"children": []
}
]
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://www.fpcomplete.com/blog/rust-for-devops-tooling/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 2540,
"reading_time": 13,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/devops-unifying-dev-ops-qa.md",
"content": "<p>The term DevOps has been around for many years. Small and big companies adopt DevOps concepts for different purposes, e.g. to increase the quality of software. In this blog post, we define DevOps, present its pros and cons, highlight a few concepts and see how these can impact the entire organization.</p>\n<h2 id=\"what-is-devops\">What is DevOps?</h2>\n<p>At a high level, DevOps is understood as a technical, organizational and cultural shift in a company to run software more efficiently, reliably, and securely. From this first definition, we can see that DevOps is much more than "use tool X" or "move to the cloud". DevOps starts with the understanding that development (Dev), operations (Ops) and quality assurance (QA) are not treated as siloed disciplines anymore. Instead, they all come together in shared processes and responsibilities across collaborating teams. DevOps achieves this through various techniques. In the section "Implementation", we present a few of these concepts.</p>\n<h2 id=\"benefits\">Benefits</h2>\n<p>Benefits of applying DevOps include:</p>\n<ul>\n<li>Cost savings through higher efficiency.</li>\n<li>Faster software iteration cycles, where updates take less time from development to running in production.</li>\n<li>More security, reliability, and fault tolerance when running software.</li>\n<li>Stronger bonds between different stakeholders in the organization including non-technical staff.</li>\n<li>Enable more data-driven decisions.</li>\n</ul>\n<p>Let's have a look <em>how</em> these benefits can be achieved by applying DevOps ideas:</p>\n<h2 id=\"how-to-implement-devops\">How to implement DevOps</h2>\n<h3 id=\"automation-and-continuous-integration-ci-continuous-delivery-cd\">Automation and Continuous Integration (CI) / Continuous Delivery (CD)</h3>\n<p>Automation refers to a key aspect of the engineering-driven part of DevOps. 
With automation, we aim to reduce the need for human action, and thus the possibility of human error, as far as possible by sending your software through an automated and well-understood pipeline of actions. These automated actions can build your software, run unit tests, integrate it with existing systems, run system tests, deploy it, and provide feedback on each step. What we are\ndescribing here is usually referred to as <strong>Continuous Integration (CI)</strong> and <strong>Continuous Delivery (CD)</strong>. Adopting CI/CD is a low-risk, low-cost investment in crossing the chasm between "software that is working on an engineer's laptop" and "software that is running securely and reliably on production servers".</p>\n<p>CI/CD is usually tied to a platform on top of which the automated actions are run, e.g., GitLab. The platform accepts software that should be passed through the pipeline, executes the automated actions on servers which are usually abstracted away, and provides feedback to the engineering team. These actions can be highly customized and tied together in different ways. For example, one action only compiles the source code and provides the build artifacts to subsequent actions. Another action can be responsible for running a test suite; another can deploy software. Such actions can be defined for different types of software: a website can be automatically deployed to a server, or a desktop application can be made available to your customers without human interaction.</p>\n<p>Besides the fact that CI/CD can be used for all kinds of software, there are other advantages to consider:</p>\n<ol>\n<li><strong>The CI/CD pipeline is well-understood and maintained by the teams</strong>: the actions that are run in a pipeline can be flexibly updated, extended, etc. 
<a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#infra-as-code\">Infrastructure as Code</a> can be a powerful concept here.</li>\n<li><strong>Run in standardized environments</strong>: Version conflicts between tools and configuration or dependency mismatches only have to be fixed once when the pipeline is built. Once a pipeline is working, it will continue to work as the underlying servers and their software versions don't change. No more conflicts between operating systems, tools, versions of tools across different engineers. Pipelines are highly reproducible. Containerization can be a game-changer here.</li>\n<li><strong>Feedback</strong>: Actions sometimes fail, e.g. because a unit test does not pass. The CI/CD platform usually allows different reporting mechanisms: E-mail someone, update the project status on your repository overview page, block subsequent actions or cancel other pipelines.</li>\n</ol>\n<p>The next sections cover more DevOps concepts that benefit from automation.</p>\n<h3 id=\"multiple-environments\">Multiple Environments</h3>\n<p>The CI/CD can be extended by deploying software to different environments. These deployments can happen in individual actions defined in your pipeline. Besides the production environment, which runs user-facing software, staging and testing environments can be defined where software is deployed to. For example, a testing environment can be used by the engineering team for peer-reviewing and validating software changes. Once the team agreed on new software, it can be deployed to a staging environment. A usual purpose of the staging environment is to mimic the production environment as closely as possible. Further tests can be run in a staging environment to make sure the software is ready to be used by real users. Finally, the software reaches production-readiness and is deployed to a production environment. Such a production deployment can be designed using a gradual rollout, i.e. 
canary deployments.</p>\n<p>Different environments not only realize different semantics and confidence levels of running software, e.g. as described in the previous paragraph, but also serve as an agreed-upon view of the software across the entire organization. Multi-environment deployments make your software, and its quality, easier to understand. This is because of the insights gained by running software, in particular on infrastructure that is close to a production setting. Generally, running software gives much more insight into performance, reliability, security, production-readiness, and overall quality. Different teams, e.g. security experts or a dedicated QA team (if your organization follows this practice), can be consulted at different software quality stages, i.e. different environments in which software runs. Additionally, non-technical staff can use environments, e.g. specialized ones for demo purposes.</p>\n<p>Ultimately, integrating multiple environments structures QA and smooths the interactions between different teams.</p>\n<h3 id=\"fail-early\">Fail early</h3>\n<p>No matter how well things are working in an organization that builds software, bugs happen and bugs are expensive. The cost of bugs includes the manpower invested in fixing them, the loss of reputation due to angry customers, and general negative business impact. Since we can't fully avoid bugs, there exist concepts to reduce both the frequency and impact of bugs. "Fail early" is one of these concepts.</p>\n<p>The basic idea is to catch bugs and other flaws in your software as early in the development process as possible. When software is developed, unit tests, compiler errors and peer reviews count among the early and cheap mechanisms to detect and fix flaws. Ideally, a unit test tells the developer that the software is not correct, or a second pair of eyes reveals a potential performance issue during a code review. 
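</p>\n<p>As a minimal, invented illustration of such an early check, consider a unit test guarding a small helper function. The function and its expected values are hypothetical, written purely for this example:</p>

```rust
// A minimal illustration of failing early: a unit test that catches a
// bug at development time, long before the code reaches any environment.
// The function and its expected values are invented for illustration.
fn retry_delays_ms(attempts: u32) -> Vec<u64> {
    // Exponential backoff: 100 ms, 200 ms, 400 ms, ...
    (0..attempts).map(|i| 100 * 2u64.pow(i)).collect()
}

#[cfg(test)]
mod tests {
    use super::*;

    // Run with `cargo test` in the CI pipeline; a failing assertion stops
    // the pipeline here, far more cheaply than a bug found in production.
    #[test]
    fn backoff_doubles_each_attempt() {
        assert_eq!(retry_delays_ms(3), vec![100, 200, 400]);
    }
}

fn main() {
    println!("{:?}", retry_delays_ms(3)); // prints [100, 200, 400]
}
```

<p>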
In both cases, not much time and effort is lost and the flaw can be easily fixed. However, other bugs might make it through these initial checks and land in testing or staging environments, where other types of tests and QA should be in place to check the software quality. In the worst case, a bug outlives all checks and reaches production. There, bugs have a much higher impact and require more effort from many stakeholders, e.g. a fix by the engineering team and an apology to the customers.</p>\n<p>Cheap checks, such as running a test suite in an automated pipeline, should therefore be executed early, because flaws discovered later in the process are more expensive to fix. Thus, failing early increases cost efficiency.</p>\n<h3 id=\"rollbacks\">Rollbacks</h3>\n<p>DevOps can also help to react quickly to changes. One example of a sudden change is a bug, as described in the last section, which is discovered in the production environment. Rollbacks, implemented for example as manually triggered pipelines, can restore a working production service in a timely manner. This is useful when the bug is a hard one and needs hours to be identified and fixed. Those hours of degraded customer experience or even downtime make paying customers unhappy. A faster mechanism is desired, one that minimizes the gap between a faulty system and a recovered system. A rollback can be a fast and effective way to restore system state without exposing customers to the failure for long.</p>\n<h3 id=\"policies\">Policies</h3>\n<p>DevOps concepts pose a challenge to security and permission management as these span the entire organization. Policies can help to formulate authorizations and rules during operations. 
For example, the following security requirements may need to be implemented:</p>\n<ul>\n<li>A deployment or rollback in production should not be triggered by anyone but a well-defined set of people in authority.</li>\n<li>Some actions in a CI/CD pipeline should always be run while other actions are intended to be triggered manually or only run under certain conditions.</li>\n<li>The developers might require slightly different permissions than a dedicated QA team to perform their day-to-day work.</li>\n<li>Humans and machine users can have different capabilities but should always have the least privileges assigned to them.</li>\n</ul>\n<p>The authentication and authorization tools provided by CI/CD providers or cloud vendors can help to design such policies according to your organizational needs.</p>\n<h3 id=\"observability\">Observability</h3>\n<p>As software is running and users are interacting with your applications, insights such as error rates, performance statistics, resource usage, etc. can help to identify bottlenecks, mitigate future issues, and drive business decisions through data. There are two major ways to establish observability:</p>\n<ul>\n<li><strong>Logging</strong>: Events in text form that software outputs to inform about the application's status and health. Different types of log messages, e.g. indicating the severity of an error event, can be aggregated and displayed in a central place, where they can be used by engineering teams for debugging purposes.</li>\n<li><strong>Metrics</strong>: Information about the running software that is not generated by the application itself, for example the CPU or memory usage of the underlying machine that runs the software, network statistics, HTTP error rates, etc. As with logging, metrics can help to spot bottlenecks and mitigate them before they have a business impact. 
Visualizing aggregated metrics data facilitates communication across technical and non-technical teams and enables data-driven decisions. Metrics dashboards can strengthen the shared ownership of software across teams.</li>\n</ul>\n<p>Logging and metrics can help to define goals and, for example, to align a development team with a QA team.</p>\n<h2 id=\"disadvantages\">Disadvantages</h2>\n<p>So far, we have only looked at the benefits and characteristics of DevOps. Let's have a brief look at the other side of the coin by commenting on the possible negative side effects and disadvantages of adopting DevOps concepts.</p>\n<ul>\n<li>\n<p>The investment in DevOps can be huge, as it is a company-wide, multi-discipline, and multi-team transformation that requires not only technical implementation effort but also training people and restructuring and aligning teams.</p>\n</li>\n<li>\n<p>This goes along with the first point but is worth emphasizing: The cultural impact on your organization can be challenging due to human factors. While a new automation mechanism can be estimated and implemented reasonably well, changes to how people communicate, feel ownership, and align to new processes are hard to track and might not yield the efficiencies that DevOps promises in the short term. Due to its high impact, DevOps is a long-term investment.</p>\n</li>\n<li>\n<p>The technical backbone of DevOps, e.g. CI/CD pipelines, cloud vendors, and integrated authorization and authentication, likely results in increased expenses through new contracts and licenses with new players. However, thanks to the dominance of open source in modern DevOps tooling, e.g. Kubernetes, vendor lock-in can be avoided.</p>\n</li>\n</ul>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>In this blog post, we explored the definition of DevOps and presented several DevOps concepts and use-cases. Furthermore, we evaluated benefits and disadvantages. 
Adopting DevOps is an investment in a low-friction and automated way of developing, testing, and running software. Technical improvements, e.g. automation, as well as increased collaboration between teams of different disciplines, ultimately improve the efficiency of your organization in the long term.</p>\n<p>However, DevOps not only requires technical effort but also impacts the entire company, e.g. how teams communicate with each other, how issues are resolved, and what teams feel responsible for. Finding the right balance and choosing the best concepts and tools for your teams is a challenge. We can help you plan and carry out the DevOps transformation in your organization.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/devops-unifying-dev-ops-qa/",
"slug": "devops-unifying-dev-ops-qa",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevOps: Unifying Dev, Ops, and QA",
"description": "The term DevOps has been around for many years. Small and big companies adopt DevOps concepts for different purposes, e.g. to increase the quality of software. In this blog post, we define DevOps, present its pros and cons, highlight a few concepts and see how these can impact the entire organization.",
"updated": null,
"date": "2020-08-24",
"year": 2020,
"month": 8,
"day": 24,
"taxonomies": {
"tags": [
"devops",
"insights"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Moritz Hoffmann",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/devops-unifying-dev-ops-qa/",
"components": [
"blog",
"devops-unifying-dev-ops-qa"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "what-is-devops",
"permalink": "https://www.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#what-is-devops",
"title": "What is DevOps?",
"children": []
},
{
"level": 2,
"id": "benefits",
"permalink": "https://www.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#benefits",
"title": "Benefits",
"children": []
},
{
"level": 2,
"id": "how-to-implement-devops",
"permalink": "https://www.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#how-to-implement-devops",
"title": "How to implement DevOps",
"children": [
{
"level": 3,
"id": "automation-and-continuous-integration-ci-continuous-delivery-cd",
"permalink": "https://www.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#automation-and-continuous-integration-ci-continuous-delivery-cd",
"title": "Automation and Continuous Integration (CI) / Continuous Delivery (CD)",
"children": []
},
{
"level": 3,
"id": "multiple-environments",
"permalink": "https://www.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#multiple-environments",
"title": "Multiple Environments",
"children": []
},
{
"level": 3,
"id": "fail-early",
"permalink": "https://www.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#fail-early",
"title": "Fail early",
"children": []
},
{
"level": 3,
"id": "rollbacks",
"permalink": "https://www.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#rollbacks",
"title": "Rollbacks",
"children": []
},
{
"level": 3,
"id": "policies",
"permalink": "https://www.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#policies",
"title": "Policies",
"children": []
},
{
"level": 3,
"id": "observability",
"permalink": "https://www.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#observability",
"title": "Observability",
"children": []
}
]
},
{
"level": 2,
"id": "disadvantages",
"permalink": "https://www.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#disadvantages",
"title": "Disadvantages",
"children": []
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://www.fpcomplete.com/blog/devops-unifying-dev-ops-qa/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 2023,
"reading_time": 11,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/devops-for-developers.md",
"content": "<p>In this post, I describe my personal journey as a developer skeptical\nof the seemingly ever-growing, ever more complex, array of "ops"\ntools. I move towards adopting some of these practices, ideas and\ntools. I write about how this journey helps me to write software\nbetter and understand discussions with the ops team at work.</p>\n<div style=\"border:1px solid black;background-color:#f8f8f8;margin-bottom:1em;padding: 0.5em 0.5em 0 0.5em;\">\n<p><strong>Table of Contents</strong></p>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#on-being-skeptical\">On being skeptical</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#the-humble-app\">The humble app</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#disk-failures-are-not-that-common\">Disk failures are not that common</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#auto-deployment-is-better-than-manual\">Auto-deployment is better than manual</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#backups-become-worth-it\">Backups become worth it</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#deployment-staging\">Deployment staging</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#packaging-with-docker-is-good\">Packaging with Docker is good</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#custodians-multiple-processes-are-useful\">Custodians multiple processes are useful</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#kubernetes-provides-exactly-that\">Kubernetes provides exactly that</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#declarative-is-good-vendor-lock-in-is-bad\">Declarative is good, vendor lock-in is bad</a></li>\n<li><a 
href=\"https://www.fpcomplete.com/blog/devops-for-developers/#more-advanced-rollout\">More advanced rollout</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#relationship-between-code-and-deployed-state\">Relationship between code and deployed state</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#argocd\">ArgoCD</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#infra-as-code\">Infra-as-code</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#where-the-dev-meets-the-ops\">Where the dev meets the ops</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/devops-for-developers/#what-we-do\">What we do</a></li>\n</ul>\n</div>\n<h2 id=\"on-being-skeptical\">On being skeptical</h2>\n<p>I would characterise my attitudes to adopting technology in two\nstages:</p>\n<ul>\n<li>Firstly, I am conservative and dismissive, in that I will usually\ndisregard any popular new technology as a bandwagon or trend. I'm a\nslow adopter.</li>\n<li>Secondly, when I actually encounter a situation where I've suffered,\nI'll then circle back to that technology and give it a try, and if I\ncan really find the nugget of technical truth in there, then I'll\nadopt it.</li>\n</ul>\n<p>Here are some things that I disregarded for a year or more before\ntrying: Emacs, Haskell, Git, Docker, Kubernetes, Kafka. 
The whole\nNoSQL trend came, wreaked havoc, and went, while I had my back turned,\nbut I am considering using Redis for a cache at the moment.</p>\n<h2 id=\"the-humble-app\">The humble app</h2>\n<p>If you’re a developer like me, you’re probably used to writing your\nsoftware, spending most of your time developing, and then finally\ndeploying your software by simply creating a machine, either a\ndedicated machine or a virtual machine, and then uploading a binary of\nyour software (or source code if it’s interpreted), and then running\nit with a copy-pasted systemd config or simply running the\nsoftware inside GNU screen. It's a secret shame that I've done this,\nbut it's the reality.</p>\n<p>You might use nginx to reverse-proxy to the service. Maybe you set up\na PostgreSQL database or MySQL database on that machine. And then you\nwalk away and test out the system, and later you realise you need some\nslight changes to the system configuration. So you SSH into the system\nand make the small tweaks necessary, such as port settings, encoding\nsettings, or an additional package you forgot to add. Sound familiar?</p>\n<p>But on the whole, your work here is done, and for most services this is\npretty much fine. Plenty of the services you have seen in the past 30\nyears have been running like this.</p>\n<h2 id=\"disk-failures-are-not-that-common\">Disk failures are not that common</h2>\n<p>Rhetoric about processes going down due to a hardware failure is\nprobably overblown. Hard drives don’t crash very often. They don’t\nreally wear out as quickly as they used to, and you can be running a\nsystem for years before anything even remotely concerning happens.</p>\n<h2 id=\"auto-deployment-is-better-than-manual\">Auto-deployment is better than manual</h2>\n<p>When you start to iterate a little bit quicker, you get bored of\nmanually building and copying and restarting the binary on the\nsystem. 
This is especially noticeable if you forget the steps later\non.</p>\n<!-- Implementing Auto-Deployment -->\n<p>If you’re a little bit more advanced you might have some special\nscripts or post-merge git hooks, so that when you push to your repo,\nthe change is applied to the machine: some associated token on your CI\nmachine (e.g. an SSH key or API key) is capable of uploading a binary\nand running a command like copy and restart. Alternatively, you might\nimplement a polling system on the actual production system which will\ncheck if any updates have occurred in Git and if so pull down a new\nbinary. This is how we were doing things in, say, 2013.</p>\n<h2 id=\"backups-become-worth-it\">Backups become worth it</h2>\n<p>Eventually, if you're lucky, your service starts to become slightly\nmore important; maybe it’s used in business and people actually are\nusing it and storing valuable things in the database. You start to\nthink that backups are a good idea and worth the investment.</p>\n<!-- Redundancy of DB -->\n<p>You probably also have a script to back up the database, or replicate\nit on a separate machine, for redundancy.</p>\n<h2 id=\"deployment-staging\">Deployment staging</h2>\n<p>Eventually, you might have a staged deployment strategy. So you might\nhave a developer testing machine, you might have a QA machine, a\nstaging machine, and finally a production machine. All of these are\nconfigured in pretty much the same way, but they are deployed at\ndifferent times and probably the system administrator is the only one\nwith access to deploy to production.</p>\n<!-- Continuum -->\n<p>It’s clear by this point that I’m describing a continuum from "hobby\nproject" to "enterprise serious business synergy solutions".</p>\n<h2 id=\"packaging-with-docker-is-good\">Packaging with Docker is good</h2>\n<p>Docker effectively collapses all of the system dependencies\nyour binary needs to run into one contained package. 
This is good,\nbecause dependency management is hell. It's also highly wasteful,\nbecause its level of granularity is very wide. But this is a trade-off\nwe accept for the benefits.</p>\n<h2 id=\"custodians-multiple-processes-are-useful\">Custodians: multiple processes are useful</h2>\n<p>Docker doesn’t have much to say about starting and restarting\nservices. I’ve explored using CoreOS with the hosting provider Digital\nOcean, and simply running a fresh virtual machine, with the given\nDocker image.</p>\n<p>However, you quickly run into the problem of starting up and tearing\ndown:</p>\n<ul>\n<li>When you start the service, you need certain liveness checks\nand health checks, so that if the new service fails to start, you keep\nthe existing one running rather than stopping it.</li>\n<li>If the process fails at any time during running then you should also\nrestart the process. I thought about this point a lot, and came to the\nconclusion that it’s better to have your process be restarted than to\nassume that the reason it failed was so dangerous that the process\nshouldn’t start again. Probably it’s more likely that there is an\nexception or memory issue that happened in a pathological case which\nyou can investigate in your logging system. But it doesn’t mean that\nyour users should suffer by having downtime.</li>\n<li>The natural progression of this functionality is to support\ndifferent rollout strategies. Do you want to switch everything to the\nnew system in one go, or do you want it to be deployed piece by piece?</li>\n</ul>\n<!-- Summary: You Realise Worth Of Ops Tools -->\n<p>It’s hard to fully appreciate the added value of ops systems like\nKubernetes, Istio/Linkerd, Argo CD, Prometheus, Terraform, etc. 
until\nyou decide to design a complete architecture yourself, from scratch,\nthe way you want it to work in the long term.</p>\n<h2 id=\"kubernetes-provides-exactly-that\">Kubernetes provides exactly that</h2>\n<p>What system happens to accept Docker images, provide custodianship,\nrollout strategies, and trivial redeploys? Kubernetes.</p>\n<p>It provides the classical monitoring and custodian responsibilities\nthat plenty of other systems have provided in the past. However, unlike\nsimply running a process and testing if it’s fine and then turning off\nanother process, Kubernetes buys into Docker all the way. Processes\nare isolated from each other, in both the network and the file\nsystem. Therefore, you can very reliably start and stop the services\non the same machine. Nothing about a process's machine state is\npersistent, therefore you are forced to design your programs in a way\nthat state is explicitly stored either ephemerally, or elsewhere.</p>\n<!-- Cloud Managed Databases Make This Practical -->\n<p>In the past it might have been a little bit scarier to have your database\nrunning in such a system: what if it automatically wipes out the\ndatabase process? With today’s cloud-based deployments, it's more\ncommon to use a managed database such as those provided by Amazon,\nDigital Ocean, Google or Azure. The whole problem of updating and\nbacking up your database can pretty much be put to one\nside. Therefore, you are free to mess with the configuration or\ntopology of your cluster as much as you like without affecting your\ndatabase.</p>\n<h2 id=\"declarative-is-good-vendor-lock-in-is-bad\">Declarative is good, vendor lock-in is bad</h2>\n<p>A very appealing feature of a deployment system like Kubernetes is\nthat everything is automatic and declarative. 
You stick all of your\nconfiguration in simple YAML files (which is also a curse because YAML\nhas its own warts and it's not common to find formal schemas for it).\nThis is also known as "infrastructure as code".</p>\n<p>Ideally, you should have as much as possible about your infrastructure\nin code checked in to a repo so that you can reproduce it and track\nit.</p>\n<p>There is also a much more straightforward path to migrate from one\nservice provider to another. Kubernetes is supported\non all the major service providers (Google, Amazon, Azure), therefore\nyou are less vulnerable to vendor lock-in. They also all provide\nmanaged databases that are standard (PostgreSQL, for example) with\ntheir normal wire protocols. If you were using vendor-specific\nAPIs to achieve some of this, you'd be stuck on one vendor. I, for\nexample, am not sure whether to go with Amazon or Azure on a big\npersonal project right now. If I use Kubernetes, I am mitigating risk.</p>\n<p>With something like Terraform you can go one step further and\nwrite code that can create your cluster completely from\nscratch. This also reduces vendor dependence.</p>\n<h2 id=\"more-advanced-rollout\">More advanced rollout</h2>\n<p>Your load balancer and your DNS can also be in code. Typically, a load\nbalancer that does the job is nginx. However, for more advanced\ndeployments such as A/B or blue/green deployments, you may need\nsomething more advanced like Istio or Linkerd.</p>\n<p>Do I really want to deploy a new feature to all of my users? Maybe,\nthat might be easier. Do I want to deploy a different way of marketing\nmy product on the website to all users at once? If I do that, then I\ndon’t exactly know how effective it is. So, I could perhaps do a\ndeployment in which half of my users see one page and half of the\nusers see another page. 
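Such a split can be made deterministic per user by hashing a stable user id into a bucket; this is roughly what weighted routing in a service mesh does, just sketched here in application code (the function and its parameters are illustrative, not any particular tool's API):

```python
import hashlib

def variant(user_id: str, percent_b: int) -> str:
    """Assign a user to variant 'B' for roughly percent_b% of users, stably."""
    # Hash the stable id so a given user always lands in the same bucket [0, 100).
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "B" if bucket < percent_b else "A"
```

A deterministic split means a user doesn't flip between pages on every request, which also keeps the measured effect of each variant clean.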
These kinds of deployments are\nstraightforwardly achieved with Istio/Linkerd-type service meshes,\nwithout having to change any code in your app.</p>\n<h2 id=\"relationship-between-code-and-deployed-state\">Relationship between code and deployed state</h2>\n<p>Let's think further than this.</p>\n<p>You've set up your cluster with your provider, or Terraform. You've\nset up your Kubernetes deployments and services. You've set up your CI\nto build your project, produce a Docker image, and upload the images\nto your registry. So far so good.</p>\n<p>Suddenly, you’re wondering, how do I actually deploy this? How do I\ncall Kubernetes, with the correct credentials, to apply this new\nDocker image to the appropriate deployment?</p>\n<p>Actually, this is still an ongoing area of innovation. An obvious way\nto do it: you give your CI system credentials to\nrun kubectl, then set the deployment's image to the new image name,\nwhich will trigger a rollout. Maybe the deployment fails; you can look at that\nresult in your CI dashboard.</p>\n<p>However, the question comes up: what is currently actually deployed\nin production? Do we really have infrastructure as code here?</p>\n<p>It’s not that I edited the file and that update suddenly got\nreflected. There’s no file anywhere in Git that contains what the\ncurrent image is. Head scratcher.</p>\n<p>Ideally, you would have a repository somewhere which states exactly\nwhich image should be deployed right now. And if you change it in a\ncommit, and then later revert that commit, you should expect that\nproduction is also reverted to reflect the code, right?</p>\n<h2 id=\"argocd\">ArgoCD</h2>\n<p>One system which attempts to address this is ArgoCD. It implements\nwhat its authors call "GitOps". All state of the system is reflected in a Git\nrepo somewhere. 
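The core mechanism can be sketched as a reconciliation loop: compare the desired state from the repo against the live state of the cluster and compute the changes to apply. The dict-based model below is purely illustrative, not how any specific tool represents state:

```python
# Sketch of GitOps-style reconciliation: the repo holds the desired
# image per deployment, the cluster reports what is live, and the
# controller computes the difference to apply.

def reconcile(desired: dict, live: dict) -> dict:
    """Map each deployment to the image it should change to (None = delete)."""
    actions = {}
    for name, image in desired.items():
        if live.get(name) != image:
            actions[name] = image  # create or update to match the repo
    for name in live:
        if name not in desired:
            actions[name] = None   # remove anything not tracked in the repo
    return actions
```

Reverting a commit changes the desired state, so the same loop rolls production back with no special rollback mechanism.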
In Argo CD, after your GitHub/Gitlab/Jenkins/Travis CI\nsystem has pushed your Docker image to the Docker repository, it makes\na gRPC call to Argo, which becomes aware of the new image. As an\nadmin, you can now trivially look in the UI and click "Refresh" to\nredeploy the new version.</p>\n<h2 id=\"infra-as-code\">Infra-as-code</h2>\n<p>The common running theme in all of this is\ninfrastructure-as-code. It’s immutability. It’s declarative. It’s\nreducing the number of steps that a human has to do or care\nabout. It’s about being able to rewind. It’s about redundancy. And\nit’s about scaling easily.</p>\n<!-- Circling Back -->\n<p>When you really try to architect your own system, and your business\nwill lose money in the case of ops mistakes, all of these advantages\nof infrastructure as code start looking really attractive.</p>\n<p>But before you really sit down and think about this stuff, it\nis pretty hard to empathise or sympathise with the kind of concerns\nthat people using these systems have.</p>\n<!-- Downsides/Tax -->\n<p>There are some downsides to these tools, as with any:</p>\n<ul>\n<li>Docker is quite wasteful of time and space</li>\n<li>Kubernetes is undoubtedly complex, and leans heavily on YAML</li>\n<li><a href=\"https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/\">All abstractions are leaky</a>,\ntherefore tools like this all leak</li>\n</ul>\n<h2 id=\"where-the-dev-meets-the-ops\">Where the dev meets the ops</h2>\n<p>Now that I’ve started looking into these things and appreciating their\nuse, I interact a lot more with the ops side of our DevOps team at work,\nand I can also be way more helpful in assisting them with the\ninformation that they need, and also writing apps which anticipate the\nkind of deployment that is going to happen. 
For run-of-the-mill apps, the most difficult\nchallenge is typically metrics and logging;\nI’m not talking about high-performance apps.</p>\n<!-- An Exercise -->\n<p>One way to bridge the gap between your ops team and dev team,\ntherefore, might be an exercise meeting in which a dev\nperson literally sits down and designs an app architecture and\ninfrastructure from the ground up, using the existing tools\nthat they are aware of, and then your ops team can point out the\nadvantages and disadvantages of the proposed solution. Certainly,\nI think I would have benefited from such a mentorship, even for an\nhour or two.</p>\n<!-- Head-In-The-Sand Also Works -->\n<p>It may be that your dev team and your ops team are completely separate\nand everybody’s happy. The devs write code, they push it, and then it\nmagically works in production and nobody has any issues. That’s\ncompletely fine. If anything it would show that you have a very good\nprocess. In fact, that’s pretty much how I’ve worked for the past\neight years at this company.</p>\n<p>However, you could derive some benefit if your teams are having\ndifficulty communicating.</p>\n<p>Finally, the tools in the ops world aren't perfect, and they're made\nby us devs. If you have a hunch that you can do better than these\ntools, you should learn more about them, and you might be right.</p>\n<h2 id=\"what-we-do\">What we do</h2>\n<p>FP Complete is using a great number of these tools, and we're writing\nour own, too. If you'd like to know more, email us at\n<a href=\"mailto:sales@fpcomplete.com\">sales@fpcomplete.com</a>.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/",
"slug": "devops-for-developers",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevOps for (Skeptical) Developers",
"description": null,
"updated": null,
"date": "2020-08-16",
"year": 2020,
"month": 8,
"day": 16,
"taxonomies": {
"categories": [
"functional programming",
"devops"
],
"tags": [
"devops"
]
},
"extra": {
"author": "Chris Done",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/devops-for-developers/",
"components": [
"blog",
"devops-for-developers"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "on-being-skeptical",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#on-being-skeptical",
"title": "On being skeptical",
"children": []
},
{
"level": 2,
"id": "the-humble-app",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#the-humble-app",
"title": "The humble app",
"children": []
},
{
"level": 2,
"id": "disk-failures-are-not-that-common",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#disk-failures-are-not-that-common",
"title": "Disk failures are not that common",
"children": []
},
{
"level": 2,
"id": "auto-deployment-is-better-than-manual",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#auto-deployment-is-better-than-manual",
"title": "Auto-deployment is better than manual",
"children": []
},
{
"level": 2,
"id": "backups-become-worth-it",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#backups-become-worth-it",
"title": "Backups become worth it",
"children": []
},
{
"level": 2,
"id": "deployment-staging",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#deployment-staging",
"title": "Deployment staging",
"children": []
},
{
"level": 2,
"id": "packaging-with-docker-is-good",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#packaging-with-docker-is-good",
"title": "Packaging with Docker is good",
"children": []
},
{
"level": 2,
"id": "custodians-multiple-processes-are-useful",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#custodians-multiple-processes-are-useful",
"title": "Custodians multiple processes are useful",
"children": []
},
{
"level": 2,
"id": "kubernetes-provides-exactly-that",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#kubernetes-provides-exactly-that",
"title": "Kubernetes provides exactly that",
"children": []
},
{
"level": 2,
"id": "declarative-is-good-vendor-lock-in-is-bad",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#declarative-is-good-vendor-lock-in-is-bad",
"title": "Declarative is good, vendor lock-in is bad",
"children": []
},
{
"level": 2,
"id": "more-advanced-rollout",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#more-advanced-rollout",
"title": "More advanced rollout",
"children": []
},
{
"level": 2,
"id": "relationship-between-code-and-deployed-state",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#relationship-between-code-and-deployed-state",
"title": "Relationship between code and deployed state",
"children": []
},
{
"level": 2,
"id": "argocd",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#argocd",
"title": "ArgoCD",
"children": []
},
{
"level": 2,
"id": "infra-as-code",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#infra-as-code",
"title": "Infra-as-code",
"children": []
},
{
"level": 2,
"id": "where-the-dev-meets-the-ops",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#where-the-dev-meets-the-ops",
"title": "Where the dev meets the ops",
"children": []
},
{
"level": 2,
"id": "what-we-do",
"permalink": "https://www.fpcomplete.com/blog/devops-for-developers/#what-we-do",
"title": "What we do",
"children": []
}
],
"word_count": 2618,
"reading_time": 14,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/our-history-containerization.md",
"content": "<p>FP Complete has been working with containerization (or OS-level virtualization) since before it was popularized by Docker. What follows is a brief history of how and why we got started using containers, and how our use of containerization has evolved as new technology has emerged.</p>\n<h2 id=\"brief-history\">Brief history</h2>\n<p>Our first foray into containerization started at the beginning of the company, when we were building a web-based integrated development environment for Haskell. We needed a secure and cost-effective way to be able to compile and run Haskell code on the server side. While giving each active user their own virtual machine with dedicated CPU and memory would have satisfied the first requirement (security), it would have been far from cost effective. GHC, the de-facto standard Haskell compiler, is notoriously resource hungry, so the VM would have to be quite large (it's not uncommon to need 4 GB or more of RAM to compile a fairly straightforward piece of software). We needed a way to share CPU and memory resources between multiple users securely and be able to shift load around a cluster of virtual machines to keep usage balanced and keep one heavy user from impacting the experience of other users on the same VM. This sounds like a job for container orchestration! Unfortunately, Docker didn't exist yet, let alone Kubernetes. The state of the art for Linux containers at the time was LXC, which was mostly a collection of shell scripts that helped with using the Linux kernel features that underlie all Linux container solutions, but at a much lower level than Docker. 
On top of this we built everything we needed to distribute \"images\" of a base filesystem plus an overlay for local changes, isolated container networks, and the ability to shift load based on VM and container utilization -- that is, many of the things Docker and Kubernetes do now, but tailored specifically for our application's needs.</p>\n<p>When Docker came on the scene, we embraced it despite some early growing pains, since it was much easier to use and more general-purpose than our \"bespoke\" system, and we thought it likely that it would soon become a de-facto standard, which is exactly what happened. For internal and customer solutions, Docker allowed us to create much more nimble and efficient deployment solutions that satisfied the requirement for <a href=\"https://www.fpcomplete.com/platformengineering/immutable-infrastructure/\">immutable infrastructure</a>. Prior to Docker, we achieved immutability by building VM images and spinning up virtual machines; a much slower and heavier process than building a Docker image and running it on an already-provisioned VM. This also allowed us to run multiple applications isolated from one another on a single VM without worry of interference.</p>\n<p>Finally, Kubernetes arrived. While it was not the first orchestration platform, it was the first that wholeheartedly standardized on Docker containers. Once again we embraced it, despite some early growing pains, due to its ease of use, multi-cloud support, fast pace of improvement, and the backing of a major company (Google). We once again bet that Kubernetes would become the de-facto standard, which is again exactly what happened. With Kubernetes, instead of having to think about which VM a container would run on, we can have a cluster of general-purpose nodes and let the orchestrator worry about what runs on which node. This lets us squeeze yet more efficiency out of our resources. 
Due to its ease of use and built-in support for common rollout strategies, we can give developers the ability to deploy their apps directly, and since it is so easy to tie into CI/CD pipelines, we can drastically simplify automated deployment processes.</p>\n<p>Going forward, we continue to keep up with the latest developments in containerization and are constantly evaluating new and alternative technologies, to stay on the forefront of DevOps.</p>\n<h2 id=\"why-we-really-like-it\">Why we really like it</h2>\n<ul>\n<li>\n<p>Supports <a href=\"https://www.fpcomplete.com/platformengineering/immutable-infrastructure/\">immutable infrastructure</a>.</p>\n</li>\n<li>\n<p>Fast build and deployment processes.</p>\n</li>\n<li>\n<p>Low overhead and efficient use of compute resources.</p>\n</li>\n<li>\n<p>Easy integration with CI/CD pipelines.</p>\n</li>\n<li>\n<p>Isolation of applications from others running on the same machine.</p>\n</li>\n<li>\n<p>Bundles dependencies with the application, so they can be tested together and there's no risk of deploying to an incorrect environment.</p>\n</li>\n<li>\n<p>Developers on various platforms can build and test the application in a consistent environment.</p>\n</li>\n</ul>\n<h2 id=\"limitations-of-the-technology\">Limitations of the technology</h2>\n<ul>\n<li>\n<p>Containers and container orchestration are most mature on Linux, although Docker and Kubernetes do now support running Windows containers on machines running Windows, and most modern server operating systems have support for some kind of containerization (but not necessarily Docker or Kubernetes).</p>\n</li>\n<li>\n<p>Containers and container orchestration add additional layers of abstraction and complexity. This can, at times, make diagnosing problems more difficult.</p>\n</li>\n<li>\n<p>Legacy applications can be tricky to containerize since they assume they are running on a persistent machine rather than an ephemeral one. 
While this can be mitigated using persistent volumes, it makes the containerization strategy less straightforward.</p>\n</li>\n<li>\n<p>While properly configured containers are relatively secure, all containers running on a host share a single operating system kernel, which means there is a greater risk that a process can use a security vulnerability to \"break out\" of its container than when using VMs.</p>\n</li>\n</ul>\n<h2 id=\"resources\">Resources</h2>\n<p>From FP Complete:</p>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/platformengineering/containerization/\">Introduction to Containerization concepts</a></li>\n<li><a href=\"https://www.fpcomplete.com/platformengineering/immutable-infrastructure/\">Introduction to Immutable Infrastructure concepts</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/deploying_haskell_apps_with_kubernetes/\">Webinar: Deploying Haskell apps with Kubernetes</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/\">Blog post: Deploying Rust with Docker and Kubernetes</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/2017/02/immutability-docker-haskells-st-type/\">Blog post: Immutability, Docker, and Haskell's ST type</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/2017/01/containerize-legacy-app/\">Blog post: Containerizing a legacy application: an overview</a></li>\n</ul>\n<p>From the web:</p>\n<ul>\n<li><a href=\"https://www.docker.com/resources/what-container\">What is a container?</a></li>\n<li><a href=\"https://www.docker.com/get-started\">Get started with Docker</a></li>\n<li><a href=\"https://kubernetes.io/docs/concepts/\">Kubernetes concepts</a></li>\n<li><a href=\"https://kubernetes.io/docs/setup/\">Getting started with Kubernetes</a></li>\n</ul>\n",
"permalink": "https://www.fpcomplete.com/blog/our-history-containerization/",
"slug": "our-history-containerization",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Our history with containerization",
"description": "FP Complete has a long history of working with containers, beginning before Docker existed and staying ahead of advances in the technology.",
"updated": null,
"date": "2020-08-13",
"year": 2020,
"month": 8,
"day": 13,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"docker",
"kubernetes"
]
},
"extra": {
"author": "FP Complete Team",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "blog/our-history-containerization/",
"components": [
"blog",
"our-history-containerization"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "brief-history",
"permalink": "https://www.fpcomplete.com/blog/our-history-containerization/#brief-history",
"title": "Brief history",
"children": []
},
{
"level": 2,
"id": "why-we-really-like-it",
"permalink": "https://www.fpcomplete.com/blog/our-history-containerization/#why-we-really-like-it",
"title": "Why we really like it",
"children": []
},
{
"level": 2,
"id": "limitations-of-the-technology",
"permalink": "https://www.fpcomplete.com/blog/our-history-containerization/#limitations-of-the-technology",
"title": "Limitations of the technology",
"children": []
},
{
"level": 2,
"id": "resources",
"permalink": "https://www.fpcomplete.com/blog/our-history-containerization/#resources",
"title": "Resources",
"children": []
}
],
"word_count": 960,
"reading_time": 5,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/cloud-deployment-models-advantages-and-disadvantages.md",
"content": "<p>In this post, we show a couple of options for a cloud\ndeployment model. Depending on the needs of your organization, some\noptions may suit you better than others.</p>\n<h1 id=\"private-cloud\">Private Cloud</h1>\n<p>A private cloud is cloud infrastructure that only members of your organization\ncan utilize. It is typically owned and managed by the organization itself and\nis hosted on premises, but it could also be managed by a third party in a secure\ndatacenter. This deployment model is best suited for organizations that deal\nwith sensitive data and/or are required to uphold certain security standards by\nvarious regulations.</p>\n<p>Advantages:</p>\n<ul>\n<li>Organization specific</li>\n<li>High degree of security and level of control</li>\n<li>Ability to choose your resources (i.e., specialized hardware)</li>\n</ul>\n<p>Disadvantages:</p>\n<ul>\n<li>Lack of elasticity and capacity to scale (bursts)</li>\n<li>Higher cost</li>\n<li>Requires a significant amount of engineering effort</li>\n</ul>\n<h1 id=\"public-cloud\">Public Cloud</h1>\n<p>Public cloud refers to cloud infrastructure that is located and\naccessed over the public network. It provides a convenient way to\nburst and scale your project depending on the use and is typically\npay-per-use. Popular examples include <a href=\"https://aws.amazon.com\">Amazon AWS</a>,\n<a href=\"https://cloud.google.com/\">Google Cloud Platform</a> and <a href=\"https://azure.microsoft.com/\">Microsoft\nAzure</a>.</p>\n<p>Advantages:</p>\n<ul>\n<li>Scalability/Flexibility/Bursting</li>\n<li>Cost effective</li>\n<li>Ease of use</li>\n</ul>\n<p>Disadvantages:</p>\n<ul>\n<li>Shared resources</li>\n<li>Operated by a third party</li>\n<li>Unreliability</li>\n<li>Less secure</li>\n</ul>\n<h1 id=\"hybrid-cloud\">Hybrid Cloud</h1>\n<p>This type of cloud infrastructure assumes that you are hosting your system on both\nprivate and public clouds. 
One use case might be a regulation requiring data\nto be stored in a locked-down private data center, while the application\nprocessing parts run on the public cloud and talk to the private\ncomponents over a secure tunnel.</p>\n<p>Another example is hosting most of the system inside a private cloud and having\na clone of the system on the public cloud to allow for rapid scaling and\naccommodating bursts of new usage that would otherwise not be possible on the\nprivate cloud.</p>\n<p>Advantages:</p>\n<ul>\n<li>Cost effective</li>\n<li>Scalability/Flexibility</li>\n<li>Balance of convenience and security</li>\n</ul>\n<p>Disadvantages:</p>\n<ul>\n<li>Same disadvantages as the public cloud</li>\n</ul>\n<h1 id=\"multi-cloud\">Multi-Cloud</h1>\n<p>This option is a variant of the hybrid cloud, but we refer to it when we mean\n\"using multiple public cloud providers\". It is mostly used for mission-critical\nsystems that want to minimize the amount of downtime if a specific service on\na particular cloud goes down (e.g., the S3 outage of 2017 that took down a lot\nof web services with it). This option is arguably the most advanced and\nsacrifices convenience for security and reliability. It requires significant\nexpertise and engineering effort to get right, since platforms vary widely,\noften in subtle ways, in the types of resources and services they provide.</p>\n<p>When choosing a cloud deployment model, weigh the advantages and disadvantages of\neach option as it relates to your business objectives.</p>\n<p>If you liked this post you may also like: <a href=\"https://www.fpcomplete.com/blog/intro-to-devops-on-govcloud/\">Introduction to DevOps on AWS Gov Cloud</a></p>\n",
"permalink": "https://www.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/",
"slug": "cloud-deployment-models-advantages-and-disadvantages",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Cloud Deployment Models: Advantages and Disadvantages",
"description": "Choosing the correct Cloud Deployment Model is crucial. Discover the advantages and disadvantages of each and how to choose the best one for your organization.",
"updated": null,
"date": "2020-08-07T13:41:00Z",
"year": 2020,
"month": 8,
"day": 7,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"extra": {
"author": "FP Complete Team",
"blogimage": "/images/blog-listing/deployment.png"
},
"path": "blog/cloud-deployment-models-advantages-and-disadvantages/",
"components": [
"blog",
"cloud-deployment-models-advantages-and-disadvantages"
],
"summary": null,
"toc": [
{
"level": 1,
"id": "private-cloud",
"permalink": "https://www.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/#private-cloud",
"title": "Private Cloud",
"children": []
},
{
"level": 1,
"id": "public-cloud",
"permalink": "https://www.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/#public-cloud",
"title": "Public Cloud",
"children": []
},
{
"level": 1,
"id": "hybrid-cloud",
"permalink": "https://www.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/#hybrid-cloud",
"title": "Hybrid Cloud",
"children": []
},
{
"level": 1,
"id": "multi-cloud",
"permalink": "https://www.fpcomplete.com/blog/cloud-deployment-models-advantages-and-disadvantages/#multi-cloud",
"title": "Multi-Cloud",
"children": []
}
],
"word_count": 486,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/understanding-cloud-auth.md",
"content": "<p>The topics of authentication and authorization usually appear simple but turn out to hide significant complexity. That's because, at its core, auth is all about answering two questions:</p>\n<ul>\n<li>Who are you?</li>\n<li>What are you allowed to do?</li>\n</ul>\n<p>However, the devil is in the details. Seasoned IT professionals, software developers, and even typical end users are fairly accustomed at this point to many of the most common requirements and pain points around auth.</p>\n<p>Cloud authentication and authorization is not drastically different from non-cloud systems, at least in principle. However, there are a few things about the cloud and its common use cases that introduce some curve balls:</p>\n<ul>\n<li>As with most auth systems, cloud providers each have their own idiosyncrasies</li>\n<li>Cloud auth systems have almost always been designed from the outset to work API first, and interact with popular web technologies</li>\n<li>Security is usually taken very seriously in the cloud, leading to workflows arguably more complex than in other systems</li>\n<li>Cloud services themselves typically need some method to authenticate to the cloud, e.g. a virtual machine gaining access to private blob storage</li>\n<li>Many modern DevOps tools are commonly deployed to cloud systems, and introduce extra layers of complexity and indirection</li>\n</ul>\n<p>This blog post series is going to cover the full picture of authentication and authorization, approached with a cloud mindset. There is significant overlap with non-cloud systems, but we'll be covering those details as well to give a complete picture. Once we have those concepts and terms in place, we'll be ready to tackle the quirks of individual cloud providers and commonly used tooling.</p>\n<h2 id=\"goals-of-authentication\">Goals of authentication</h2>\n<p>We're going to define authentication as proving your identity to a service provider. 
A service provider can be anything from a cloud provider offering virtual machines, to your webmail system, to a bouncer at a bar who has your name on a list. The identity is an equally flexible concept, and could be \"my email address\" or \"my user ID in a database\" or \"my full name.\"</p>\n<p>To help motivate the concepts we'll be introducing, let's understand what goals we're trying to achieve with typical authentication systems.</p>\n<ul>\n<li>Allow a user to prove who he/she is</li>\n<li>Minimize the number of passwords a user has to memorize</li>\n<li>Minimize the amount of work IT administrators have to do to create new user accounts, maintain them, and ultimately shut them down\n<ul>\n<li>That last point is especially important; no one wants the engineer who was just fired to still be able to authenticate to one of the systems</li>\n</ul>\n</li>\n<li>Provide security against common attack vectors, like compromised passwords or lost devices</li>\n<li>Provide a relatively easy-to-use method for user authentication</li>\n<li>Allow a computer program/application/service (let's call all of these apps) to prove what it is</li>\n<li>Provide a simple way to allocate, securely transmit, and store credentials necessary for those proofs</li>\n<li>Ensure that credentials can be revoked when someone leaves a company or an app is no longer desired (or is compromised)</li>\n</ul>\n<h2 id=\"goals-of-authorization\">Goals of authorization</h2>\n<p>Once we know the identity of something or someone, the next question is: what are they allowed to do? That's where authorization comes into play. 
A good authorization system provides these kinds of features:</p>\n<ul>\n<li>Fine-grained control, when necessary, of who can do what</li>\n<li>Ability to grant common sets of permissions as a bundle, avoiding tedium and mistakes</li>\n<li>A centralized collection of authorization rules</li>\n<li>Ability to revoke a permission, and see that change propagated quickly to multiple systems</li>\n<li>Ability to delegate permissions from one identity to another\n<ul>\n<li>For example: if I'm allowed to read a file on some cloud storage server, it would be nice if I could let my mail client do that too, without the mail program pretending it's me</li>\n</ul>\n</li>\n<li>To avoid mistakes, it would be nice to assume a smaller set of permissions when performing some operations\n<ul>\n<li>For example: as a super user/global admin/root user, I'd like to be able to say \"I don't want to accidentally delete system files right now\"</li>\n</ul>\n</li>\n</ul>\n<p>In simple systems, the two concepts of authentication and authorization are straightforward. For example, on a single-user computer system, my username would be my identity, I would authenticate using my password, and as that user I would be authorized to do anything on the computer system.</p>\n<p>However, most modern systems end up with many additional layers of complexity. Let's step through what some of these concepts are.</p>\n<h2 id=\"users-and-policies\">Users and policies</h2>\n<p>A basic concept of authentication would be a <em>user</em>. This typically would refer to a real human being accessing some service. Depending on the system, they may use identifiers like usernames or email addresses. User accounts are oftentimes given to non-humans, like automated processes or Continuous Integration (CI) jobs. However, most modern systems would recommend using a service account (discussed below) or similar instead.</p>\n<p>Sometimes, the user is the end of the story. 
When I log into my personal Gmail account, I'm allowed to read and write emails in that account. However, when dealing with multiuser shared systems, some form of permissions management comes along as well. Most cloud providers have a robust and sophisticated set of policies, where you can specify fine-grained individual permissions within a policy.</p>\n<p>As an example, with AWS, the S3 file storage service provides an array of individual actions from the obvious (read, write, and delete an object) to the more obscure (like setting retention policies on an object). You can also specify which files can be affected by these permissions, allowing a user to, for example, have read and write access in one directory, but read-only access in another.</p>\n<p>Managing all of these individual permissions each time for each user is tedious and error-prone. It makes it difficult to understand what a user can actually do. Common practice is to create a few policies across your organization, and assign them appropriately to each user, while minimizing the number of permissions granted.</p>\n<h2 id=\"groups\">Groups</h2>\n<p>Within the world of authorization, groups are a natural extension of users and policies. Odds are you'll have multiple users and multiple policies. And you're likely to have groups of users who need similar sets of policy documents. You <em>could</em> create a large master policy that encompasses the smaller policies, but that could be difficult to maintain. You could also apply each individual policy document to each user, but that's difficult to keep track of.</p>\n<p>Instead, with groups, you can assign multiple policies to a group, and multiple groups to a user. 
If you have a billing team that needs access to the billing dashboard, plus the list of all users in the system, you may have a <code>BillingDashboard</code> policy as well as a <code>ListUsers</code> policy, and assign both policies to a <code>BillingTeam</code> group. You may then also assign the <code>ListUsers</code> policy to the <code>Operators</code> group.</p>\n<h2 id=\"roles\">Roles</h2>\n<p>There's a downside to the policies-and-groups setup described above. Even if I'm a superadmin on my cloud account, I may not want to have the responsibility of all those powers at all times. It's far too easy to accidentally destroy vital resources like a database server. Often, we would like to artificially limit our permissions while interacting with a service.</p>\n<p>Roles allow us to do this. With roles, we create a named role for some set of operations, assign a set of policies to it, and provide some way for users to <em>assume</em> that role. When you assume that role, you can perform actions using that set of permissions, but audit trails will still be able to trace back to the original user who performed the actions.</p>\n<p>Arguably a cloud best practice is to grant users only enough permissions to assume various roles, leaving them otherwise unable to perform any meaningful actions. This forces a higher level of stated intent when interacting with cloud APIs.</p>\n<h2 id=\"service-accounts\">Service accounts</h2>\n<p>Some cloud providers and tools support the concept of a service account. While users <em>can</em> be used for both real human beings and services, there is often a mismatch. For example, we typically want to enable multi-factor authentication on real user accounts, but alternative authentication schemes on services.</p>\n<p>One approach to this is service accounts. 
Service accounts vary among different providers, but typically allow defining some kind of service, receiving some secure token or password, and assigning either roles or policies to that service account.</p>\n<p>In some cases, such as Amazon's EC2, you can assign roles directly to cloud machines, allowing programs running on those machines to easily and securely assume those roles, without needing to store any kind of token or secret. This concept ties in nicely with roles for users, making role-based management of both users and services an emerging best practice in the industry.</p>\n<h2 id=\"rbac-vs-acl\">RBAC vs ACL</h2>\n<p>The system described above is known as Role-Based Access Control, or RBAC. Many people are likely familiar with the related concept known as Access Control Lists, or ACLs. With ACLs, administrators typically have more work to do, specifically managing large numbers of resources and assigning users to each of those per-resource lists. Using groups or roles significantly simplifies the job of the operator, and reduces the likelihood of misapplied permissions.</p>\n<h2 id=\"single-sign-on\">Single sign-on</h2>\n<p>Most modern DevOps platforms have multiple systems, each requiring separate authentication. For example, in a modern Kubernetes-based deployment, you're likely to have:</p>\n<ul>\n<li>The underlying cloud vendor\n<ul>\n<li>Both command line and web based access</li>\n</ul>\n</li>\n<li>Kubernetes itself\n<ul>\n<li>Both command line access and the Kubernetes Dashboard</li>\n</ul>\n</li>\n<li>A monitoring dashboard</li>\n<li>A log aggregation system</li>\n<li>Other company-specific services</li>\n</ul>\n<p>That's in addition to maintaining a company's standard directory, such as Active Directory or G Suite. Maintaining this level of duplication among user accounts is time-consuming, costly, and dangerous. 
Furthermore, while it's reasonable to securely lock down a single account via MFA and other mechanisms, expecting users to maintain such information for all of these systems securely is unreasonable. And some of these systems don't even provide such security mechanisms.</p>\n<p>Instead, single sign-on provides a standards-based, secure, and simple method for authenticating to these various systems. In some cases, user accounts still need to be created in each individual system. In those cases, automated user provisioning is ideal. We'll talk about some of that in later posts. In other cases, like AWS's identity provider mechanism, it's possible for temporary identifiers to be generated on-the-fly for each SSO-based login, with roles assigned.</p>\n<p>Deeper questions arise about where permissions management is handled. Should the central directory, like Active Directory, maintain permissions information for all systems? Should a single role in the directory represent permissions information in all of the associated systems? Should a separate set of role mappings be maintained for each service?</p>\n<p>Typically, organizations end up including some of each, depending on the functionality available in the underlying tooling, and organizational discretion on how much information to include in a directory.</p>\n<h2 id=\"going-deeper\">Going deeper</h2>\n<p>What we've covered here sets the stage for understanding many cloud-specific authentication and authorization schemes. Going forward, we're going to cover a look into common auth protocols, followed by a review of specific cloud providers and tools, specifically AWS, Azure, and Kubernetes.</p>\n",
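The policy, group, and role concepts walked through in this post can be made concrete with a small sketch. This is an illustrative model only, assuming invented names throughout — the classes, the `BillingDashboard`/`ListUsers` policies (taken from the groups example above), and the `assume_role` helper are hypothetical and do not correspond to any cloud provider's real API:

```python
# Minimal sketch of RBAC concepts: policies grant actions, groups bundle
# policies, users get groups, and roles narrow permissions on assumption.
# All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Policy:
    name: str
    allowed_actions: frozenset  # e.g. {"billing:ViewDashboard"}

@dataclass
class Group:
    name: str
    policies: list

@dataclass
class User:
    name: str
    groups: list = field(default_factory=list)

    def permissions(self):
        # Effective permissions: union of every policy in every group.
        return {a for g in self.groups for p in g.policies for a in p.allowed_actions}

    def is_allowed(self, action):
        return action in self.permissions()

@dataclass
class Role:
    """Assuming a role yields a principal limited to the role's policies."""
    name: str
    policies: list

def assume_role(user, role):
    # Real clouds check a trust policy first; here we simply grant the
    # role's permissions while keeping an audit trail to the original user.
    assumed = User(name=f"{role.name}(assumed by {user.name})")
    assumed.groups = [Group(role.name, role.policies)]
    return assumed

# The billing-team example from the post: two policies, two groups.
billing_dashboard = Policy("BillingDashboard", frozenset({"billing:ViewDashboard"}))
list_users = Policy("ListUsers", frozenset({"iam:ListUsers"}))
billing_team = Group("BillingTeam", [billing_dashboard, list_users])
operators = Group("Operators", [list_users])

alice = User("alice", [billing_team])
bob = User("bob", [operators])

# Assuming a role narrows permissions while recording who assumed it.
readonly = assume_role(alice, Role("ReadOnlyAuditor", [list_users]))
```

Note how this mirrors the RBAC-vs-ACL point: permissions live on a handful of policies and groups rather than on per-resource lists, so revoking or auditing access means inspecting a few bundles instead of every resource.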
"permalink": "https://www.fpcomplete.com/blog/understanding-cloud-auth/",
"slug": "understanding-cloud-auth",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Understanding cloud auth",
"description": "Authentication and authorization are a core component to any secure system. In this overview post, we will begin analyzing common patterns in cloud auth",
"updated": null,
"date": "2020-07-29",
"year": 2020,
"month": 7,
"day": 29,
"taxonomies": {
"tags": [
"devops",
"insights"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "blog/understanding-cloud-auth/",
"components": [
"blog",
"understanding-cloud-auth"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "goals-of-authentication",
"permalink": "https://www.fpcomplete.com/blog/understanding-cloud-auth/#goals-of-authentication",
"title": "Goals of authentication",
"children": []
},
{
"level": 2,
"id": "goals-of-authorization",
"permalink": "https://www.fpcomplete.com/blog/understanding-cloud-auth/#goals-of-authorization",
"title": "Goals of authorization",
"children": []
},
{
"level": 2,
"id": "users-and-policies",
"permalink": "https://www.fpcomplete.com/blog/understanding-cloud-auth/#users-and-policies",
"title": "Users and policies",
"children": []
},
{
"level": 2,
"id": "groups",
"permalink": "https://www.fpcomplete.com/blog/understanding-cloud-auth/#groups",
"title": "Groups",
"children": []
},
{
"level": 2,
"id": "roles",
"permalink": "https://www.fpcomplete.com/blog/understanding-cloud-auth/#roles",
"title": "Roles",
"children": []
},
{
"level": 2,
"id": "service-accounts",
"permalink": "https://www.fpcomplete.com/blog/understanding-cloud-auth/#service-accounts",
"title": "Service accounts",
"children": []
},
{
"level": 2,
"id": "rbac-vs-acl",
"permalink": "https://www.fpcomplete.com/blog/understanding-cloud-auth/#rbac-vs-acl",
"title": "RBAC vs ACL",
"children": []
},
{
"level": 2,
"id": "single-sign-on",
"permalink": "https://www.fpcomplete.com/blog/understanding-cloud-auth/#single-sign-on",
"title": "Single sign-on",
"children": []
},
{
"level": 2,
"id": "going-deeper",
"permalink": "https://www.fpcomplete.com/blog/understanding-cloud-auth/#going-deeper",
"title": "Going deeper",
"children": []
}
],
"word_count": 1863,
"reading_time": 10,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/understanding-devops-roles-and-responsibilities.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/understanding-devops-roles-and-responsibilities/",
"slug": "understanding-devops-roles-and-responsibilities",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Understanding DevOps Roles and Responsibilities",
"description": "Companies are implementing DevOps at an increasingly rapid rate. Discover the roles and responsibilities and how to implement DevOps into your latest project.",
"updated": null,
"date": "2020-07-24T13:12:00Z",
"year": 2020,
"month": 7,
"day": 24,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"insights",
"devops"
]
},
"extra": {
"author": "FP Complete Team",
"html": "hubspot-blogs/understanding-devops-roles-and-responsibilities.html",
"blogimage": "/images/blog-listing/executive-insights.png"
},
"path": "blog/understanding-devops-roles-and-responsibilities/",
"components": [
"blog",
"understanding-devops-roles-and-responsibilities"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/preparing-for-cloud-computing-trends.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/preparing-for-cloud-computing-trends/",
"slug": "preparing-for-cloud-computing-trends",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Preparing for Upcoming Cloud Computing Trends",
"description": "Cloud Computing is growing at a rate 7 times faster than the rest of IT with no signs of slowing in the coming years. Discover all the trends businesses should be preparing for in order to succeed in 2020 and beyond. ",
"updated": null,
"date": "2020-07-24T11:05:00Z",
"year": 2020,
"month": 7,
"day": 24,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"insights",
"devops"
]
},
"extra": {
"author": "FP Complete Team",
"html": "hubspot-blogs/preparing-for-cloud-computing-trends.html",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "blog/preparing-for-cloud-computing-trends/",
"components": [
"blog",
"preparing-for-cloud-computing-trends"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/cloud-preparation-checklist.md",
"content": "<p>While moving to the cloud brings many benefits, we\nneed to also be aware of the pain points associated with such a move.\nThis post will discuss those pain points, provide ways to mitigate them, and\ngive you a checklist which can be used if you plan to migrate your\napplications to the cloud. We will also discuss the advantages of\nmoving to the cloud.</p>\n<h2 id=\"common-pain-points\">Common pain points</h2>\n<p>One of the primary pain points in moving to the cloud is selecting the\nappropriate tools for a specific use case. We have an abundance of tools\navailable, with many solving the same problem in different ways. To give\nyou a basic idea, this is the CNCF's (Cloud Native Computing\nFoundation) recommended path through the cloud native technologies:</p>\n<img src=\"/images/insights/cloud-prep-checklist/landscape.png\" alt=\"Cloud Native Landscape\" title=\"Cloud Native Landscape\" width=\"100%\">\n<p></p>\n<p>Picking the right tool is hard, and this is where having experience\nwith them comes in handy.</p>\n<p>Also, the existing knowledge of on-premises data centers may not be\ndirectly transferable when you plan to move to the cloud. An individual might\nhave to undergo basic training to understand the terminology and the\nconcepts used by a particular cloud vendor. An on-premises system\nadministrator might be used to setting up firewalls via\n<a href=\"https://en.wikipedia.org/wiki/Iptables\">iptables</a>, but they might also\nwant to consider using <a href=\"https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html\">Security\ngroups</a>\nif they plan to accomplish the same goals in the AWS ecosystem (for EC2 instances).</p>\n<p>Another point to consider while moving to the cloud is how easily you\ncan get locked into a single vendor. 
You might start using\nAmazon's <a href=\"https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html\">Auto Scaling\nGroups</a>\nto automatically handle the load of your application. But when you plan\nto switch to another cloud vendor, the migration might not be\nstraightforward. Switching between cloud services isn't easy, and if you want portability, you\nneed to make sure that your applications are built with a multi-cloud\nstrategy. This will allow you to easily switch between vendors if such a\nscenario arises. Taking advantage of containers and Kubernetes may give\nyou additional flexibility and ease portability between different cloud\nvendors.</p>\n<h2 id=\"advantages-of-moving\">Advantages of moving</h2>\n<p>Despite the pain points listed above, there are many advantages to\nmoving your applications to the cloud. Note that even a major media services\nprovider like\n<a href=\"https://netflixtechblog.com/four-reasons-we-choose-amazons-cloud-as-our-computing-platform-4aceb692afec\">Netflix</a>\nhas moved to the cloud instead of building and managing its own\ndata center solution.</p>\n<h3 id=\"cost\">Cost</h3>\n<p>One of the primary advantages of leveraging the cloud is avoiding\nthe cost of building your\nown data center. Building a secure data center is not trivial. By\noffloading this activity to an external cloud provider, you can instead build your\napplications on top of the infrastructure provided by them. This not\nonly saves the initial capital expenditure but also saves headaches from\nreplacing hardware, such as failing network switches. But note that\nswitching to the cloud will not magically save costs. 
Depending on your\napplication's architecture and workload, you need to understand the\nchoices you make and ensure that they are cost-efficient.</p>\n<h3 id=\"uptime\">Uptime</h3>\n<p>Cloud vendors provide SLAs (Service Level Agreements) where they state\ninformation about uptime and the guarantees they make. This is a\nsnapshot from the Amazon Compute SLA:</p>\n<p><img src=\"/images/insights/cloud-prep-checklist/sla.png\" alt=\"SLA\" title=\"SLA\" /></p>\n<p>All major cloud providers have historically provided excellent uptime,\nespecially for applications that properly leverage availability zones.\nBut depending on your specific\nuse case and applications, you should define the acceptable uptime for your\napplication and make sure that your SLA matches it. Also, depending\non the requirements, you can architect your application such that it has\nmulti-region deployments to provide better uptime in case there is an\noutage in one region.</p>\n<h3 id=\"security-and-compliance\">Security and Compliance</h3>\n<p>Cloud deployments provide an extra benefit when working in regulated industries\nor with government projects. In many cases, cloud vendors provide regulation-compliant\nhardware.\nBy using cloud providers, we can take advantage of the various\ncompliance standards (e.g., HIPAA, PCI) they meet.\nValidating an on-premises data center against such standards can be a time-consuming,\nexpensive process. Relying on already validated hardware can be faster, cheaper, easier,\nand more reliable.</p>\n<p>Broadening the security topic, cloud vendors typically also provide\na wide range of additional security tools.</p>\n<p>Despite these boons,\nproper care must still be taken, and best practices must still be followed,\nto deploy an application securely.\nAlso, be aware that running on compliant hardware does not automatically\nensure compliance of the software. 
Code and infrastructure must still meet\nvarious standards.</p>\n<h3 id=\"ease-of-scaling\">Ease of scaling</h3>\n<p>With cloud providers, you can easily add and remove machines or add more\npower (RAM, CPU, etc.) to them. The ease with which you can horizontally and\nvertically scale your application without worrying about your\ninfrastructure is powerful, and can revolutionize how you approach\nhardware allocation. As your application's load increases,\nyou can easily scale up in a few minutes.</p>\n<p>One of the perhaps surprising benefits of this is that you don't need to\npreemptively scale up your hardware. Many cloud deployments are able\nto reduce the total compute capacity available in a cluster, relying\non the speed of cloud providers to scale up in response to increases in demand.</p>\n<h3 id=\"focus-on-problem-solving\">Focus on problem solving</h3>\n<p>With no effort spent maintaining an on-premises data center, you can\ninstead invest your effort in your application and the problem it solves.\nThis allows you to focus on your core business problems and your\ncustomers.</p>\n<p>Additionally, cloud providers run energy-efficient\ndata centers. As a case study,\n<a href=\"https://cloud.google.com/blog/topics/google-cloud-next/our-heads-in-the-cloud-but-were-keeping-the-earth-in-mind\">Google even uses machine learning technology to make its data centers\nmore\nefficient</a>.\nHence, running your\napplications in the cloud may also be the better environmental decision.</p>\n<h2 id=\"getting-ready-for-cloud\">Getting ready for Cloud</h2>\n<p>Once you are ready to migrate to the cloud, you can plan for the next\nsteps and initiate the process. 
We have the following general checklist,\nwhich we usually tailor to each client's requirements:</p>\n<h3 id=\"checklist\">Checklist</h3>\n<ul>\n<li>Make a list of your applications and dependencies which need to be\nmigrated.</li>\n<li>Benchmark your applications to establish cloud performance\nKPIs (Key Performance Indicators).</li>\n<li>List any compliance requirements for your\napplication and plan how to meet them.</li>\n<li>Onboard relevant team members to the cloud service's user management\nsystem, ideally integrating with existing user directories and\nleveraging features like single sign-on and automated user provisioning.</li>\n<li>Establish access controls to your cloud service, relying on role-based\nauthorization techniques.</li>\n<li>Evaluate your migration options. You might want to re-architect your application\nto take advantage of cloud-native technologies. Or you might simply\ndecide to shift the existing application without any changes.</li>\n<li>Create your migration plan in a runbook.</li>\n<li>Have a rollback plan in case the migration fails.</li>\n<li>Test your migration and rollback plans in a separate environment.</li>\n<li>Communicate about the migration to internal stakeholders and customers.</li>\n<li>Execute your cloud migration.</li>\n<li>Prune your on-premises infrastructure.</li>\n<li>Optimize your cloud infrastructure for your workloads.</li>\n</ul>\n<h2 id=\"conclusion\">Conclusion</h2>\n<p>I hope we were able to present the challenges involved in\nmigrating to the cloud and how to prepare for them. We have helped various\ncompanies with migrations and other DevOps services. Feel free to <a href=\"https://www.fpcomplete.com/contact-us/\">reach out to\nus</a> with any questions on\ncloud migrations or any of our other services.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/cloud-preparation-checklist/",
"slug": "cloud-preparation-checklist",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Cloud preparation checklist",
"description": "Considering a move to the cloud? Read up on cloud advantages, common pain points, and our recommended step by step process",
"updated": null,
"date": "2020-07-22",
"year": 2020,
"month": 7,
"day": 22,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Sibi Prabakaran",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "blog/cloud-preparation-checklist/",
"components": [
"blog",
"cloud-preparation-checklist"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "common-pain-points",
"permalink": "https://www.fpcomplete.com/blog/cloud-preparation-checklist/#common-pain-points",
"title": "Common pain points",
"children": []
},
{
"level": 2,
"id": "advantages-of-moving",
"permalink": "https://www.fpcomplete.com/blog/cloud-preparation-checklist/#advantages-of-moving",
"title": "Advantages of moving",
"children": [
{
"level": 3,
"id": "cost",
"permalink": "https://www.fpcomplete.com/blog/cloud-preparation-checklist/#cost",
"title": "Cost",
"children": []
},
{
"level": 3,
"id": "uptime",
"permalink": "https://www.fpcomplete.com/blog/cloud-preparation-checklist/#uptime",
"title": "Uptime",
"children": []
},
{
"level": 3,
"id": "security-and-compliance",
"permalink": "https://www.fpcomplete.com/blog/cloud-preparation-checklist/#security-and-compliance",
"title": "Security and Compliance",
"children": []
},
{
"level": 3,
"id": "ease-of-scaling",
"permalink": "https://www.fpcomplete.com/blog/cloud-preparation-checklist/#ease-of-scaling",
"title": "Ease of scaling",
"children": []
},
{
"level": 3,
"id": "focus-on-problem-solving",
"permalink": "https://www.fpcomplete.com/blog/cloud-preparation-checklist/#focus-on-problem-solving",
"title": "Focus on problem solving",
"children": []
}
]
},
{
"level": 2,
"id": "getting-ready-for-cloud",
"permalink": "https://www.fpcomplete.com/blog/cloud-preparation-checklist/#getting-ready-for-cloud",
"title": "Getting ready for Cloud",
"children": [
{
"level": 3,
"id": "checklist",
"permalink": "https://www.fpcomplete.com/blog/cloud-preparation-checklist/#checklist",
"title": "Checklist",
"children": []
}
]
},
{
"level": 2,
"id": "conclusion",
"permalink": "https://www.fpcomplete.com/blog/cloud-preparation-checklist/#conclusion",
"title": "Conclusion",
"children": []
}
],
"word_count": 1276,
"reading_time": 7,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/devops-security-and-privacy-strategies.md",
"content": "<p>DevOps Security and Privacy—FP Complete’s\ncomprehensive, easy to understand guide designed\nto help you understand why they’re so critical to\nthe safety of your DevOps strategy.</p>\n<p>The following is a transcription of a live\nwebinar given by <a href=\"https://www.fpcomplete.com/\">FP Complete</a>\nFounder and Chairman Aaron Contorer, on\n<a href=\"https://www.youtube.com/user/FPComplete\">FP Complete's YouTube Channel</a>.</p>\n<h2 id=\"introducing-aaron\">Introducing Aaron</h2>\n<p>I’m the Founder and Chairman of <a href=\"https://www.fpcomplete.com/\">FP Complete</a>,\nwhere we help companies use state-of-the-art tools and\ntechniques to produce secure, lightning-fast,\nfeature-rich software, faster and more often.</p>\n<p>Before founding FP Complete, I was an\nexecutive at Microsoft, where I served as program\nmanager for distributed systems, and general\nmanager of Visual C++, the leading software\ndevelopment tool at that time. Also, I \narchitected MSN’s move to Internet-based server\nsoftware, served as the full-time technology\nadviser to Bill Gates, and I founded and ran the\ncompany’s Productivity Tools Team for complex\nsoftware engineering projects.</p>\n<p>Okay, so enough about me. Let’s begin this\ndiscussion by recognizing our industry’s\nunfortunate—but preventable—reality:</p>\n<h2 id=\"breaches-are-happening-far-too-often\">Breaches are happening far too often</h2>\n<p>We all know how bad the state of the\nworld is within security and privacy\nright now. Projects are getting very\ncomplicated. And I—just as a sample—want\nto point out that this is a very typical\nbreach. Monzo said that for six months\nunauthorized people had access to\npeople’s secret code numbers, their pin\nnumbers. I’m not singling them out at\nall, but rather saying… “This is very\ntypical.” They’re a bank, and they\ncompromised this type of data for months\nand months.</p>\n<p>How does it happen? 
It’s not only\nbecause of logging and monitoring not\nbeing in place, although that can be a\nbig factor. It’s because of complexity.\nHonestly, we’re all trying very hard to\ndo our jobs, but users keep asking and\nexecutives keep asking for new features.\nAnd that integration just creates point\nafter point where problems can happen,\nand things get overlooked.</p>\n<h2 id=\"opportunities-for-penetration-are-everywhere\">Opportunities for penetration are everywhere</h2>\n<p>I would argue that today’s\napplications are more about assembling\nbuilding blocks than they are about just\nwriting new code. But every time you\nincrease that complexity by adding more\nbuilding blocks, you increase the number\nof interface points between\ncomponents—the number of places where\nsomebody might have done something wrong.\nAnd so we’re really creating a system of\nentry points between component A and\ncomponent B. But entry points—that sounds\nlike something I would compromise if I\nwere a security violator, right?\nFurthermore, we’re manually configuring\nour systems. People aren’t using\ncontinuous deployment. And so there is\nsome wizard who’s supposed to go set up\nthe latest server, or integrate it with\nthe database or the web, or set up\na firewall, or whatever they’re\nsupposed to do. Every manual step creates\nfurther opportunities for penetration,\nfor defects, because people are\nimperfect. Even the best person on your\nteam, doing a process a hundred times,\nmight do it wrong one or two times. An\nautomated scanner is going to find that\none time, and it’s going to break into your\nsystem before you know it.</p>\n<h2 id=\"let-s-talk-devsecops\">Let’s talk DevSecOps</h2>\n<p>DevSecOps—DevOps with security stuck\nright in the middle. And I think that’s a\ngood way of looking at this problem. 
We\nwant to integrate all the different parts\nof our engineering into one pool of\nautomation, and include security and\nquality assurance as part of that\nautomated process. We talked earlier\nabout automated testing being part of our\nbuilds. But we want to go much farther\nthan that, as technical teams. We want to\nstart from the beginning of our projects,\ntalking about how secure they need to be.\nWhat are the risks that they’re supposed\nto defend against or not create? We\nwant every member of the team to\nunderstand that system downtime—because\nsomebody broke in and trashed it, or even\nworse privacy violations which you can\nnever undo, because when people’s\npersonal information has been published,\nyou can’t unpublish it—we need to let our\nteam members know that these are\npriorities and put them on the to-do list\nfor the project. And we can’t call\nsomething done if the security part isn’t\ndone. It’s not something we tack on at\nthe end. We don’t build unsecured,\ncrazy, poorly architected apps, and then\nat the end, ask someone to build a brick\nwall around them. Because as soon as one\nlittle person gets through the brick\nwall, it’s open season. So, we want the\nengineers to know everything they do\nshould be checked for security. That’s a\nculture change to say that it’s\neveryone’s job.</p>\n<p>We need to integrate quality assurance\nwith security, which means somebody is\nchecking the software we wrote for\nweaknesses; somebody is trying to break\nin or, at least, trying to run tools that\nwill show us common ways to break in and\nwhether they are present.</p>\n<p>And we need to inspect our cloud\nsystems that are running to make sure\nthat our deployment, and our system\noperations and administration, is as\nsecure as we meant it to be. Did somebody\nomit a step? We want to discover that\nright away and fix it. 
Or, ideally,\nautomate the way we set up all of our\nsystems using, for example, an\norchestration software package to\nautomatically configure our servers, so\nit isn’t the case that, late in the day,\npeople are more likely to make a mistake.\nWell-written scripts do just as\ngood a job even when people are tired.</p>\n<p>And we want to make sure that all of\nour systems are updated and patched, and\nnot tell people that security is a waste\nof time and they should get back to work\non features.</p>\n<h2 id=\"process-tips\">Process tips</h2>\n<p>To do all this, we need to have a\nsimple design. And I would encourage\npeople to focus on the idea that\nsimplicity and modular design are great\nways to make a system easier to check for\nsecurity holes.</p>\n<p>We want to make sure that credentials\nthat are used in our modular\nsystems—where one piece of software is\nlogging into another service or another\npiece of software, such as a database—are kept in\nproperly secured credential storage. A\ncommon form of security violation is you\nlook at somebody’s source code and… Oh\nlook! There’s the password for the\ndatabase server right there …because the\napp had to connect to the server. That’s\ninappropriate design. There are special\ncredential storage services—your team\nshould use them.</p>\n<p>And we want to make sure that quality\ncontrol remains central to our culture,\nas developers of software, and that\nincludes DevOps, that includes system\nadministration. Too often, we have a good\npiece of software, and then it’s deployed\nincorrectly. And that’s where the problem\noccurs. So if you’re going to test\nwhether your code is written properly,\nmaybe also test whether the servers are\nconfigured properly, from time to time.\nIt’s time well spent.</p>\n<h2 id=\"how-to-strengthen-your-security\">How to strengthen your security</h2>\n<p>So how can you move forward on\nsecurity? 
The good news is, while it may\nsound like a scary and intimidating area,\nthere are lots of practical steps you can\ntake right now, and you don’t even have\nto take them all at the same time; you\ncan take them incrementally. Here are\nsome great steps, though, that I highly\nrecommend.</p>\n<p>One is that—in your engineering team,\nand if you have multiple teams—in each\nengineering team, somebody is explicitly\nthe security person. Somebody knows that\nit’s their job to keep an eye out for\nsecurity issues and prevention, and that\nif there’s a problem they’re the person\nwho’s going to hear about it. They should\nhave the power to look into anything they\nneed to make sure there isn’t a security\nhole in the system.</p>\n<p>Use best practices from other\ncompanies. This is a great idea\nthroughout all of DevOps, including\nDevSecOps. You don’t have to reinvent\nanything. You can learn best practices\nand get a checklist together of what\nother companies have found helpful to\nlook for to find opportunities to secure\nyour system incrementally. We just piece\nby piece chip away at the risks that are\npresent in our systems. We don’t have to\nwait until some magic day when all of\nsecurity happens at once.</p>\n<p>Teach your people about security. A\nlot of security problems happen because\none person didn’t realize… Who didn’t\nknow that you’re not supposed to put\npasswords in the source code where\neveryone can see them? Well, one person\ntyped a password into the source code,\nbut now it’s there for everyone. So be\nsure that training in security, how\nimportant it is, and how to do it is\navailable to everyone in your team. And\nmake sure that there’s a checklist. Who\ntook the security training? Who’s not\nbeen to security training yet?</p>\n<p>Scary but true fact: According to\nPricewaterhouseCoopers, if\nyou want to be a normal IT operation, you should\nbe spending 11 to 15% of your IT budget on\nsecurity overall. That’s a significant\nnumber. 
And I think we can all agree that\nwith more internet work and more\nimporting of modules and stuff, we, if\nanything, could be worried that that\nnumber is going to go up. So automation\nthrough DevOps is really a way to keep a\nlid on that number. But I wouldn’t think\nof it as a way to make that number drive\ndown towards zero. Security is everyone’s\njob, and it’s going to remain that\nway.</p>\n<p>Beyond that, I’d say use the\nother techniques we talked about earlier\nin this presentation. You don’t have to\nbe the next Equifax, with no\nmonitoring. You don’t have to allow silly\nmistakes by having no automation. And you\ndon’t have to create more security holes\nby reinventing your own tools and\nprocesses instead of using components. Reuse is your\nfriend.</p>\n<h2 id=\"7-tech-ideas-you-can-start-now\">7 tech ideas you can start now</h2>\n<p>I won’t spend too long on this, but I\nwanted this for people who are more\nhands-on or the people who are\nsupervising hands-on engineers. These are\nsome practical steps that you can take to\nstart turning on pieces of security,\nright now. Every one of these—except\nperhaps service-oriented architecture—is\nsomething that you could literally task\nsomebody to do this week or next\nweek.</p>\n<p>These are straightforward tasks.</p>\n<ol>\n<li>Ensure all databases have firewalls on them. 
They’re a common data breach source!</li>\n<li>Use a password manager to generate secure passwords; enable two-factor authentication.</li>\n<li>Use roles and policies to assign specific permissions to users and services instead of running everything from root credentials or privileged users.</li>\n<li>Use bastion hosts or VPNs to limit access to internal machines.</li>\n<li>Use service-oriented architecture (SOA) to break off components that need high privilege.</li>\n<li>Include code analysis tools in the dev process and enforce fixes prior to deployment.</li>\n<li>Test your servers with automated scanners for break-in vulnerabilities.</li>\n</ol>\n<h2 id=\"fast-to-market-reliable-and-secure\">Fast to market, reliable, and secure</h2>\n<p>It’s a winning formula!</p>\n<p>So, in short, you have a choice to\nturn on DevOps, to use a lot of technology,\nbest practices, and engineering techniques that\nhave already been solved and tested at\nnumerous other companies—clients of ours,\nfamous internet companies, everyone. When\nI say “everyone”, the truth is that only a\nminority of companies are already using\nproper DevOps. But enough companies that\nyou don’t have to be the first; you don’t\nhave to be the pioneer. DevOps is a\nwinning formula that will get you to\nmarket faster, more reliably, and\nwith better security. Or you could be the\nnext Equifax or the next Capital One,\nwhich is the default situation.</p>\n<h2 id=\"need-help-with-devops-security-and-privacy\">Need help with DevOps Security and Privacy?</h2>\n<p>FP Complete offers corporations its\nDevOps Success Program, which provides\nadvanced privacy and security software\nengineering mentoring, among many other\nmoving parts in the DevOps world.</p>\n<p>For more information, please <a href=\"https://www.fpcomplete.com/contact-us/\">contact us</a> or see our <a href=\"https://www.fpcomplete.com/platformengineering/\">DevOps homepage</a>.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/devops-security-and-privacy-strategies/",
"slug": "devops-security-and-privacy-strategies",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevOps Security and Privacy Strategies",
"description": "DevOps Security and Privacy—FP Complete’s comprehensive, easy to understand guide designed to help you understand why they’re so critical to the safety of your DevOps strategy. The following is a transcription of a live webinar given by FP Complete Founder and Chairman Aaron Contorer, on FP Complete’s YouTube Channel. I’m the Founder and Chairman of FP Complete, where we […]",
"updated": null,
"date": "2020-05-29",
"year": 2020,
"month": 5,
"day": 29,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"insights"
]
},
"extra": {
"author": "Aaron Contorer",
"blogimage": "/images/blog-listing/network-security.png"
},
"path": "blog/devops-security-and-privacy-strategies/",
"components": [
"blog",
"devops-security-and-privacy-strategies"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "introducing-aaron",
"permalink": "https://www.fpcomplete.com/blog/devops-security-and-privacy-strategies/#introducing-aaron",
"title": "Introducing Aaron",
"children": []
},
{
"level": 2,
"id": "breaches-are-happening-far-too-often",
"permalink": "https://www.fpcomplete.com/blog/devops-security-and-privacy-strategies/#breaches-are-happening-far-too-often",
"title": "Breaches are happening far too often",
"children": []
},
{
"level": 2,
"id": "opportunities-for-penetration-are-everywhere",
"permalink": "https://www.fpcomplete.com/blog/devops-security-and-privacy-strategies/#opportunities-for-penetration-are-everywhere",
"title": "Opportunities for penetration are everywhere",
"children": []
},
{
"level": 2,
"id": "let-s-talk-devsecops",
"permalink": "https://www.fpcomplete.com/blog/devops-security-and-privacy-strategies/#let-s-talk-devsecops",
"title": "Let’s talk DevSecOps",
"children": []
},
{
"level": 2,
"id": "process-tips",
"permalink": "https://www.fpcomplete.com/blog/devops-security-and-privacy-strategies/#process-tips",
"title": "Process tips",
"children": []
},
{
"level": 2,
"id": "how-to-strengthen-your-security",
"permalink": "https://www.fpcomplete.com/blog/devops-security-and-privacy-strategies/#how-to-strengthen-your-security",
"title": "How to strengthen your security",
"children": []
},
{
"level": 2,
"id": "7-tech-ideas-you-can-start-now",
"permalink": "https://www.fpcomplete.com/blog/devops-security-and-privacy-strategies/#7-tech-ideas-you-can-start-now",
"title": "7 tech ideas you can start now",
"children": []
},
{
"level": 2,
"id": "fast-to-market-reliable-and-secure",
"permalink": "https://www.fpcomplete.com/blog/devops-security-and-privacy-strategies/#fast-to-market-reliable-and-secure",
"title": "Fast to market, reliable, and secure",
"children": []
},
{
"level": 2,
"id": "need-help-with-devops-security-and-privacy",
"permalink": "https://www.fpcomplete.com/blog/devops-security-and-privacy-strategies/#need-help-with-devops-security-and-privacy",
"title": "Need help with DevOps Security and Privacy?",
"children": []
}
],
"word_count": 2012,
"reading_time": 11,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/rapid-devops-success.md",
"content": "<p>Continuous integration and deployment, monitoring and logging, and security and privacy—FP Complete’s comprehensive, easy to understand guide designed to help you learn why those three DevOps strategies collectively create an environment where high-quality software can be developed quicker and more efficiently than ever before.</p>\n<p>Aaron Contorer, founder and chairman of FP Complete, presented the following webinar. Read below for a transcript of the video.</p>\n<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/5U11unR_py0\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n<h2 id=\"introducing-aaron\">Introducing Aaron</h2>\n<p>I’m the Founder and Chairman of <a href=\"https://www.fpcomplete.com/\">FP Complete</a>,\nwhere we help companies use\nstate-of-the-art tools and techniques to\nproduce secure, lightning-fast,\nfeature-rich software, faster and more\noften.</p>\n<p>Before founding FP Complete, I was an\nexecutive at Microsoft, where I served as\nprogram manager for distributed systems,\nand general manager of Visual C++, the\nleading software development tool at that\ntime. Also, I architected MSN’s move\nto Internet-based server software, served\nas the full-time technology adviser to\nBill Gates, and I founded and ran the\ncompany’s Productivity Tools Team for\ncomplex software engineering\nprojects.</p>\n<p>Okay, so enough about me. Let’s begin\nthis presentation by stating the\nobvious:</p>\n<h2 id=\"software-development-is-complicated\">Software development is complicated</h2>\n<p>As information technology and software\npeople, it’s easy to recognize how things\nare changing at an astonishing speed. To\nkeep pace, we need tools and processes\nthat allow us to rapidly deploy better\ncode more frequently with fewer errors.\nIs that a high bar to reach? Yes, of\ncourse, it is. 
But it absolutely must be\nmet—that is <em>if</em> you\nwant your company to survive.</p>\n<h2 id=\"inefficiencies-are-everywhere\">Inefficiencies are everywhere</h2>\n<p>In most companies, I would argue that\nthe information technology team and the\nsoftware engineering team are not totally\ntrusted by the rest of the company.</p>\n<p>Of course, I don’t mean they’re not\ntrusted as in they’re not good, smart\npeople. What I mean is that they don’t\nmeet their deadlines, leading to sprints\nbecoming longer than initially expected,\nultimately causing everyone to feel\nrushed and end results to lack\nquality.</p>\n<h2 id=\"it-has-lost-management-s-trust\">IT has lost management’s trust</h2>\n<p>When management begins to not trust\nengineering and IT, a bad dynamic\ndevelops. No longer does the team get to\nfocus on building great things for their\nend-users. Instead, they’re forced to\nfocus on solving their struggles and\ndealing with interpersonal friction.</p>\n<p>Believe it or not, the problems we’re\nhaving aren’t people-problems. It’s not\nthat they lack good intentions or\nbrainpower.</p>\n<p>Instead, the problem is this:</p>\n<h2 id=\"modern-software-ancient-tech\">Modern software, ancient tech</h2>\n<p><strong>Modern software development can’t be performed using ancient technologies applied within simplistic workflows.</strong></p>\n<p>I often like to say…</p>\n<p><em>“The best craftsperson with a\nhandsaw cannot do woodworking as\nefficiently as a robotic cutting\ntool.”</em></p>\n<p>When we automate our work, it becomes\nfaster and easier to replicate. We don’t\nbuild in lots of mistakes. 
As a result,\nwe get to move on with our lives instead\nof going back and reworking things over\nand over again.</p>\n<p>When we automate with good tools and\nbetter processes programmed in, and we\nrepeat this same process every time,\neveryone can trust that our work will be\nperformed with quality, and our systems\nwill be safer and more secure.</p>\n<p>Sounds ideal, doesn’t it? Of course,\nit does.</p>\n<p>But how do you do it? How do you\nevolve from the environment you’re\noperating in today to the utopia that DevOps\nstrategies will allow you to live and\nwork within well into the future?</p>\n<p>Learn more about how <a href=\"https://www.fpcomplete.com/platformengineering/\">FP Complete does DevOps</a>.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/rapid-devops-success/",
"slug": "rapid-devops-success",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Webinar Review: Learn Rapid DevOps Success",
"description": "Continuous integration and deployment, monitoring and logging, and security and privacy—FP Complete’s comprehensive, easy to understand guide designed to help you learn why...",
"updated": null,
"date": "2020-05-29",
"year": 2020,
"month": 5,
"day": 29,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"insights"
]
},
"extra": {
"author": "Aaron Contorer",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/rapid-devops-success/",
"components": [
"blog",
"rapid-devops-success"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "introducing-aaron",
"permalink": "https://www.fpcomplete.com/blog/rapid-devops-success/#introducing-aaron",
"title": "Introducing Aaron",
"children": []
},
{
"level": 2,
"id": "software-development-is-complicated",
"permalink": "https://www.fpcomplete.com/blog/rapid-devops-success/#software-development-is-complicated",
"title": "Software development is complicated",
"children": []
},
{
"level": 2,
"id": "inefficiencies-are-everywhere",
"permalink": "https://www.fpcomplete.com/blog/rapid-devops-success/#inefficiencies-are-everywhere",
"title": "Inefficiencies are everywhere",
"children": []
},
{
"level": 2,
"id": "it-has-lost-management-s-trust",
"permalink": "https://www.fpcomplete.com/blog/rapid-devops-success/#it-has-lost-management-s-trust",
"title": "IT has lost management’s trust",
"children": []
},
{
"level": 2,
"id": "modern-software-ancient-tech",
"permalink": "https://www.fpcomplete.com/blog/rapid-devops-success/#modern-software-ancient-tech",
"title": "Modern software, ancient tech",
"children": []
}
],
"word_count": 592,
"reading_time": 3,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/rust-devops.md",
"content": "<p>On February 2, 2020, one of FP Complete's Lead Software Engineers—Mike McGirr—presented a webinar on using Rust for creating DevOps tooling.</p>\n<h2 id=\"webinar-outline\">Webinar Outline</h2>\n<p>FP Complete is hosting a functional programming\nwebinar on, “Learn Rapid Rust with DevOps Success\nStrategies.” A beginner’s guide including sample Rust\ndemonstration on writing your DevOps tools with Rust\nover Haskell. An introduction to Rust, with basic DevOps\nuse cases, and the library ecosystem, airing on\nFebruary 5th, 2020.</p>\n<p>The webinar will be hosted by Mike McGirr, a DevOps\nSoftware Engineer at FP Complete which will provide an\nabundance of Rust information with respect to\nfunctional programming and DevOps, featuring (safety,\nspeed and accuracy) that make it unique and contributes\nto its popularity, and its possible preference as a\nlanguage of choice for operating systems over Haskell,\nweb browsers and device drivers among others. The\nwebinar offers an interesting opportunity to learn and\nuse Rust in developing real world projects aside from\nHaskell or other functional programming languages\navailable today.</p>\n<h2 id=\"topics-covered\">Topics covered</h2>\n<p>During the webinar we will cover the following\ntopics:</p>\n<ul>\n<li>A quick intro and background into the Rust programming language</li>\n<li>Some scenarios and reasons why you would want to use Rust for writing your DevOps tooling (and some reasons why you wouldn’t)</li>\n<li>A small example of using the existing AWS libraries to create a basic DevOps tool</li>\n<li>How to Integrate FP into your Organization</li>\n</ul>\n<p>Mike Mcgirr, a Lead Software Engineer at FP\nComplete,will help us understand reasoning that\nsupports using Rust over other functional programming\nlanguages offered in the market today.</p>\n<h2 id=\"more-about-your-host\">More about your host</h2>\n<p>The webinar will be hosted by Mike McGirr, a veteran\nDevOps Software Engineer at FP 
Complete. With years of\nexperience in DevOps software development, Mike will\nwalk us through a first in a series of Rust webinars\ndiscussing why we would, and how we could utilize Rust\nas a functional programming language to build DevOps\nover other functional programming languages available\nin the market today. Mike will also share with us a\nsmall example script written in Rust showing how Rust\nmay be used.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/rust-devops/",
"slug": "rust-devops",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Rust with DevOps Success Strategies",
"description": "Wednesday Feb 5th, 2020, at 10:00 AM PST. Webinar Outline: FP Complete is hosting a functional programming webinar on, “Learn Rapid Rust with DevOps Success Strategies.” A beginner’s guide including sample Rust demonstration on writing your DevOps tools with Rust over Hasell. An introduction to Rust, with basic DevOps use cases, and the library ecosystem, […]",
"updated": null,
"date": "2020-02-05",
"year": 2020,
"month": 2,
"day": 5,
"taxonomies": {
"categories": [
"functional programming",
"devops"
],
"tags": [
"devops",
"rust",
"insights"
]
},
"extra": {
"author": "Mike McGirr",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "blog/rust-devops/",
"components": [
"blog",
"rust-devops"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "webinar-outline",
"permalink": "https://www.fpcomplete.com/blog/rust-devops/#webinar-outline",
"title": "Webinar Outline",
"children": []
},
{
"level": 2,
"id": "topics-covered",
"permalink": "https://www.fpcomplete.com/blog/rust-devops/#topics-covered",
"title": "Topics covered",
"children": []
},
{
"level": 2,
"id": "more-about-your-host",
"permalink": "https://www.fpcomplete.com/blog/rust-devops/#more-about-your-host",
"title": "More about your host",
"children": []
}
],
"word_count": 351,
"reading_time": 2,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/what_is_govcloud.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2019/05/what_is_govcloud/",
"slug": "what-is-govcloud",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "What is GovCloud?",
"description": "Devops, FedRAMP Compliance, and Making your Migration to GovCloud Successful - What is GovCloud?",
"updated": null,
"date": "2019-05-28T17:54:00Z",
"year": 2019,
"month": 5,
"day": 28,
"taxonomies": {
"tags": [
"devops",
"aws",
"govcloud"
],
"categories": [
"devops"
]
},
"extra": {
"author": "J Boyer",
"html": "hubspot-blogs/what_is_govcloud.html",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "blog/2019/05/what_is_govcloud/",
"components": [
"blog",
"2019",
"05",
"what_is_govcloud"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/deploying_haskell_apps_with_kubernetes.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/deploying_haskell_apps_with_kubernetes/",
"slug": "deploying-haskell-apps-with-kubernetes",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Deploying Haskell Apps with Kubernetes",
"description": "This webinar describes how to Deploy Haskell applications using Kubernetes. Topics to be discussed include creation of a Kube cluster using Terraform and Kops, describe pods, deployments, services, load balancers, etc., deployment of a built image using kubectl and deploy, and more.",
"updated": null,
"date": "2018-09-11T16:24:00Z",
"year": 2018,
"month": 9,
"day": 11,
"taxonomies": {
"categories": [
"functional programming",
"devops"
],
"tags": [
"haskell",
"devops"
]
},
"extra": {
"author": "Robert Bobbett",
"html": "hubspot-blogs/deploying_haskell_apps_with_kubernetes.html",
"blogimage": "/images/blog-listing/kubernetes.png"
},
"path": "blog/deploying_haskell_apps_with_kubernetes/",
"components": [
"blog",
"deploying_haskell_apps_with_kubernetes"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/devsecops.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/devsecops/",
"slug": "devsecops",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevSecOps - Putting the Sec in DevOps",
"description": "With today's tremendous security pressures, DevOps teams are moving to continuous development and integration, but continuous security is harder to integrate. To better understand how to secure your DevOps and protect your network read on.",
"updated": null,
"date": "2018-07-18T13:11:00Z",
"year": 2018,
"month": 7,
"day": 18,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops"
]
},
"extra": {
"author": "Robert Bobbett",
"html": "hubspot-blogs/devsecops.html",
"blogimage": "/images/blog-listing/network-security.png"
},
"path": "blog/devsecops/",
"components": [
"blog",
"devsecops"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/deploying-rust-with-docker-and-kubernetes.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2018/07/deploying-rust-with-docker-and-kubernetes/",
"slug": "deploying-rust-with-docker-and-kubernetes",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Deploying Rust with Docker and Kubernetes",
"description": "Using a tiny Rust app to demonstrate deploying Rust with Docker and Kubernetes.",
"updated": null,
"date": "2018-07-17T14:36:00Z",
"year": 2018,
"month": 7,
"day": 17,
"taxonomies": {
"tags": [
"rust",
"devops",
"kubernetes"
],
"categories": [
"functional programming",
"devops"
]
},
"extra": {
"author": "Chris Allen",
"html": "hubspot-blogs/deploying-rust-with-docker-and-kubernetes.html",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "blog/2018/07/deploying-rust-with-docker-and-kubernetes/",
"components": [
"blog",
"2018",
"07",
"deploying-rust-with-docker-and-kubernetes"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/devops-to-prepare-for-a-blockchain-world.md",
"content": "<h2 id=\"introduction\">Introduction</h2>\n<p>As the world adopts blockchain technologies, your IT infrastructure — and its\npredictability — become critical. Many companies lack the levels of automation\nand control needed to survive in this high-opportunity, high-threat environment.</p>\n<p>Are your software, cloud, and server systems automated and robust enough? Do you\nhave enough quality control for both your development and your online operations?\nOr will you join the list of companies bruised by huge data breaches and loss o\nf control over their own computer systems? If you are involved in blockchain, or\nany industry for that matter, these are the questions you need to ask yourself.</p>\n<p>Blockchain will require you to put more information online than ever before,\ncreating huge exposures for organizations that do not have a handle on their\nsecurity. Modern DevOps technologies, including many open-source systems, offer\npowerful solutions that can improve your systems to a level suitable for use with\nblockchain.</p>\n<h2 id=\"are-companies-really-ready-for-blockchain-technology\">Are companies REALLY ready for Blockchain technology?</h2>\n<p>The answer to it is most of the companies are NOT and those who are need to audit\nor reevaluate whether they are. The reason is BlockChain puts data to public making\nit prone to outside attacks if systems are not hardenend and updated on timely\nmanner.</p>\n<p>Big companies such as Equifax had millions of records stolen, Heartland credit\nprocessing was hacked and eventually had to pay 110 million and Airbus A400M due \nto wrong installation of manual software patch resulted in death of everyone on\non the plain. These are few of many such big companies that was hacked due to poorly\nimplemented IT technology.</p>\n<p>Once hailed as unhackable, blockchains are now getting hacked. 
According to a MIT\ntechnology review, hackers have stolen nearly $2 billion worth of cryptocurrency\nsince the beginning of 2017.</p>\n<h2 id=\"big-question-why-companies-are-getting-hacked\">Big Question: Why Companies are getting hacked ?</h2>\n<p>Blockchain itself isn't always the problem. Sometimes the blockchain is secure \nbut the IT infrastructure is not capable to supporting it. There are cases where \nopen firewalls, unencrypted data, poor testing and manual errors were reasons \nbehind the hacking.</p>\n<p>So, the question to ask is: Is the majority of your IT infrastructure secure \nand reliable enough to support Blockchain Technology ?</p>\n<h2 id=\"what-is-an-it-factory\">What is an IT Factory ?</h2>\n<p>IT factory as per <a href=\"https://www.fpcomplete.com/our-team/\">Aaron Contorer</a>, founder \nand Chariman of FP Complete is divided into 3 parts</p>\n<ol>\n<li>Development</li>\n<li>Deployment</li>\n<li>System Operations</li>\n</ol>\n<p>If IT factory is implemented properly at each stage it could result in a new and\nbetter IT services leading to a more reliable, scalable and secure environment.</p>\n<p>Deployment is a bridge that allows software running on a developer laptop all the\nway to a scalable system and running Ops for monitoring. With DevOps practice,\nwe can ensure all the three stages of IT factory implemented.</p>\n<p>But, the key to build a working IT factory is Automation that ensure each step\nin the deployment process is reliable. With microservices architecture ,building\nand testing a reliable containerized based system is much easier now compared to\nthe earlier days.</p>\n<p>The only way to ensure a reliable, reproducible system is if companies start\nautomating each step of their software life cycle journey. 
Companies that are ensuring\ngood DevOps practices have a robust IT infrastructure compared to those that are\nNOT.</p>\n<h2 id=\"devops-for-blockchain\">DevOps for Blockchain</h2>\n<p>DevOps tools helps BlockChain better as it can ensure all code is tracked, tested,\ndeployed automatically, audited and Quality Assurance tested along each stage of\nthe delivery pipeline.</p>\n<p>The other benefits of having DevOps methods implemented in BlockChain is that it \nreduces the overall operational cost to companies, speeds up the overall pace of \nsoftware development and release cycle, improves the software quality and increases\nthe productivity.</p>\n<p>The following DevOps methods, if implemented in Blockchain, can be very helpful</p>\n<p><strong>1. Engineer for Safety</strong></p>\n<ul>\n<li>With proper version control tool like GITHUB , source code can be viewed,\ntracked with proper history of all changes to the base</li>\n<li>Development tools used by developers should be of the same version, should be\ntracked and should be uniform across the project</li>\n<li>Continuous Integration (CI) pipeline must be implemented at the development\nstage to ensure nothing breaks on each commit. There are tools such as Jenkins,\nBamboo, Code Pipeline and many more that can help in setting up a proper CI .</li>\n<li>Each commit should be properly tested using test case management system with\nproper unit test cases for each commit</li>\n<li>Each Project should also have an Issue tracking system like JIRA, GITLAB etc\nto ensure all requests are properly tracked and closed.</li>\n</ul>\n<p><strong>2. 
Deploy for Safety</strong></p>\n<ul>\n<li>Continuous Deployment via DevOps tools to ensure code is automatically deployed\nto each environment</li>\n<li>Each environment (Development, Testing, DR, Production) should be a replica\nof each other</li>\n<li>Allow automation to setup all relevant infrastructure related to allow successful\ndeployment of code</li>\n<li>Setup infrastructure as code (IAC) to provision infrastructure that helps in\nreducing manual errors</li>\n<li>Sanity of each deployment by running test cases to ensure each component is\nfunctioning as expected</li>\n<li>Running Security testing after each Deployment on each environment</li>\n<li>Ensure system can be RollBack/Rollforward without any manual intervention like\nCanary/Blue-Green Deployment</li>\n<li>Use container based deployments that provide more reliability for deployments</li>\n</ul>\n<p><strong>3. Operate for Safety</strong></p>\n<ul>\n<li>Set up Continuous Automated Monitoring and Logging</li>\n<li>Set up Anomaly detection and alerting mechanism</li>\n<li>Set up Automated Response and Recovery for any failures</li>\n<li>Ensure a Highly Available and scalable system for reliability</li>\n<li>Ensure data is encrypted for all outbound and inbound communication</li>\n<li>Ensure separation of admin powers, database powers, deployment powers , user \naccess etc. The more the powers are separated the lesser the risk</li>\n</ul>\n<p><strong>4. Separate for Safety</strong></p>\n<ul>\n<li>Separate each system internally from each other by using multiple small networks.\nFor Eg: database/backend on private subnets while UI on public subnets</li>\n<li>Set Internal and MutFirewalls ensure the database systems are protected with no access</li>\n<li>Separate Responsibility and credentials for reduce risk of exposure</li>\n</ul>\n<p><strong>5. 
Human systems</strong></p>\n<p>Despite keeping hardware and software checks, most the breaking of blockchain\nsystems today has happened because of "People" or "Human Errors".</p>\n<p>Most people try hacks/workaround to get stuff working on production with no knowledge\non the impacts it could do on the system. Sometimes these stuff are not documented\nmaking it hard for the other person to fix it. Sometimes asking others to login\nto unauthorized systems by sharing credentials over calls paves a path for unsecure\nsystems</p>\n<p>To ensure companies must,</p>\n<ul>\n<li>Train people to STOP doing manual efforts to fix a broken system.</li>\n<li>Train people NOT to do "Social Engineering" like asking colleagues \nto login to systems on their behalf, sharing passwords etc.</li>\n</ul>\n<p><strong>6. Quality Assurance</strong></p>\n<ul>\n<li>Need to review the Architectural as well as best practices are ensured in the\nproduct life cycle</li>\n<li>Need to ensure the code deploy pipeline has scope for penetration Testing</li>\n<li>Need to ensure there is weekly/monthly auditing of metrics, logs , systems to\ncheck for threats to the systems</li>\n<li>Each component and patch on system should be tested and approved by QA before\nrolling out to Production</li>\n<li>Companies could also hire third parties to audit their system on their behalf</li>\n</ul>\n<h2 id=\"how-to-get-there\">How to get there ?</h2>\n<p>The good news is "IT IS POSSIBLE". There is no need for giant or all-in-one solutions.</p>\n<p>Companies that are starting fresh need to start at the early phase of development\nto building a reliable system by focussing on above 6 points mentioned above. They\nneed to start thinking on all areas in the "Plan and Design" phase itself.</p>\n<p>For companies who are already on production or nearing production does not need\nto have to start fresh . 
They can start making incremental progress but it needs\nto start TODAY.</p>\n<p>Automation is the only SCIENCE in IT that can reduce errors and help towards building \na more and more reliable system. It will in the future save money and resources that \ncan be redirected to focus on other areas.</p>\n<p>To conclude, <a href=\"https://www.fpcomplete.com\">FP Complete</a> has been a leading consultant \non providing DevOps services. We excel at what we do and if you are looking to implement \nDevOps in your BlockChain. Please feel free to reach out to us for free consultations.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/",
"slug": "devops-to-prepare-for-a-blockchain-world",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevOps to Prepare for a Blockchain World",
"description": "This webinar describes how Devops can be used to prepare any company that is interested in adopting blockchain technology. Many companies lack the level of automation and control needed to survive in this high-opportunity, high-threat environment but DevOps technologies can offer powerful solutions.",
"updated": null,
"date": "2018-06-07T08:03:00Z",
"year": 2018,
"month": 6,
"day": 7,
"taxonomies": {
"tags": [
"devops",
"blockchain"
],
"categories": [
"functional programming",
"devops"
]
},
"extra": {
"author": "FP Complete Team",
"blogimage": "/images/blog-listing/distributed-ledger.png"
},
"path": "blog/devops-to-prepare-for-a-blockchain-world/",
"components": [
"blog",
"devops-to-prepare-for-a-blockchain-world"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "introduction",
"permalink": "https://www.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#introduction",
"title": "Introduction",
"children": []
},
{
"level": 2,
"id": "are-companies-really-ready-for-blockchain-technology",
"permalink": "https://www.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#are-companies-really-ready-for-blockchain-technology",
"title": "Are companies REALLY ready for Blockchain technology?",
"children": []
},
{
"level": 2,
"id": "big-question-why-companies-are-getting-hacked",
"permalink": "https://www.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#big-question-why-companies-are-getting-hacked",
"title": "Big Question: Why Companies are getting hacked ?",
"children": []
},
{
"level": 2,
"id": "what-is-an-it-factory",
"permalink": "https://www.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#what-is-an-it-factory",
"title": "What is an IT Factory ?",
"children": []
},
{
"level": 2,
"id": "devops-for-blockchain",
"permalink": "https://www.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#devops-for-blockchain",
"title": "DevOps for Blockchain",
"children": []
},
{
"level": 2,
"id": "how-to-get-there",
"permalink": "https://www.fpcomplete.com/blog/devops-to-prepare-for-a-blockchain-world/#how-to-get-there",
"title": "How to get there ?",
"children": []
}
],
"word_count": 1354,
"reading_time": 7,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/controlling-access-to-nomad-clusters.md",
"content": "<p>In this blog post, we will learn how to control access to nomad.</p>\n<h2 id=\"introduction\">Introduction</h2>\n<p>Nomad is an application scheduler, that helps you schedule application-processes\nefficiently, across multiple servers, and keep your infrastructure costs low.\nNomad is capable of scheduling containers, virtual machines, as well as isolated\nforked processes.</p>\n<p>There are other schedulers available, such as Kubernetes, Mesos or Docker Swarm,\nbut each has different mechanisms for securing access. By following this post,\nyou will understand the main components in securing your Nomad cluster, but the\noverall idea is valid across any of the other schedulers available.</p>\n<p>One of Nomad's selling points, and why you could consider it over tools like\nKubernetes, is that you can schedule not only containers, but also QEMU\nimages, LXC, isolated <code>fork/exec</code> processes, and even Java applications in a\nchroot(!). All you need is a driver implemented for Nomad. 
On the other hand,\nits community is smaller than Kubernetes, so the tradeoffs have to be measured\non a project-by-project basis.</p>\n<p>We will start by deploying a test cluster and configuring access control lists\n(ACLs).</p>\n<h2 id=\"overview\">Overview</h2>\n<ul>\n<li>Nomad uses tokens to authenticate client requests.</li>\n<li>Each token is associated with policies.</li>\n<li>Policies are a collection of rules to allow or deny operations on resources.</li>\n</ul>\n<p>In this tutorial, we will:</p>\n<ol>\n<li>Set up our environment to run Nomad inside a Vagrant virtual machine for running experiments</li>\n<li>Generate a root/admin token (usually known as the "management" token) and activate ACLs</li>\n<li>Using the management token, add a new "non-admin" policy and create a token associated with this new policy</li>\n<li>Use the "non-admin" token to demonstrate access control.</li>\n</ol>\n<h2 id=\"setup-the-environment\">Setup the environment</h2>\n<p>Prerequisites:</p>\n<ul>\n<li>POSIX shell, such as GNU Bash</li>\n<li>Vagrant > <code>2.0.1</code></li>\n<li>Nomad demo <a href=\"https://raw.githubusercontent.com/hashicorp/nomad/master/demo/vagrant/Vagrantfile\"><code>Vagrantfile</code></a></li>\n</ul>\n<p>We will run everything from within a virtual machine with all the necessary\nconfiguration and applications. 
Execute the following commands on your shell:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>$ cd $(mktemp --directory)\n$ curl -LO https://raw.githubusercontent.com/hashicorp/nomad/master/demo/vagrant/Vagrantfile\n$ vagrant up\n ...\n lines and lines of Vagrant output\n this might take a while\n ...\n$ vagrant ssh\n ...\n Message of the day greeting from VM\n Anything after this point is being executed inside the virtual machine\n ...\nvagrant@nomad:~$ nomad version\nNomad vX.X.X\nvagrant@nomad:~$ uname -n\nnomad\n</code></pre>\n<p>Depending on your system and the version of <code>Vagrantfile</code> used, the prompt may\nbe different.</p>\n<h2 id=\"setup-nomad\">Setup Nomad</h2>\n<p>We configure nomad to execute both as server and client for convenience, as\nopposed to a production environment where the server is remote and client is\nlocal to each machine or node. Create a <code>nomad-agent.conf</code> with the following\ncontents:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>bind_addr = "0.0.0.0"\ndata_dir = "/var/lib/nomad"\nregion = "global"\nacl {\n enabled = true\n}\nserver {\n enabled = true\n bootstrap_expect = 1\n authoritative_region = "global"\n}\nclient {\n enabled = true\n}\n</code></pre>\n<p>Then, execute:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>vagrant@nomad:~$ sudo nomad agent -config=nomad-agent.conf # sudo is needed to run as a client\n</code></pre>\n<p>You should see output indicating that Nomad is running.</p>\n<blockquote>\n<p>Clients need root access to be able to execute processes, while servers only\ncommunicate to synchronize state.</p>\n</blockquote>\n<h2 id=\"acl-bootstrap\">ACL Bootstrap</h2>\n<p>On another terminal, after running <code>vagrant ssh</code> from our temporary working\ndirectory, run the following command:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>vagrant@nomad:~$ nomad acl bootstrap\n\nAccessor ID = 2f34299b-0403-074d-83e2-60511341a54c\nSecret ID = 
9fff6a06-b991-22db-7fed-55f17918e846\nName = Bootstrap Token\nType = management\nGlobal = true\nPolicies = n/a\nCreate Time = 2018-02-14 19:09:23.424119008 +0000 UTC\nCreate Index = 13\nModify Index = 13\n</code></pre>\n<p>This <code>Secret ID</code> is our <code>management</code> (admin) token. This token is valid globally\nand all operations are permitted. No policies are necessary while authenticating\nwith the management token, and so, none are configured by default.</p>\n<p>It is important to copy the <code>Accessor ID</code> and <code>Secret ID</code> to some file, for\nsafekeeping, as we will need these values later. For a production environment,\nit is safest to store these in a separate vault permanently.</p>\n<p>Once ACLs are enabled, all operations are denied <em>unless</em> a valid token is provided\nwith each request, and the operation we want is allowed by a policy associated\nwith the provided token.</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>vagrant@nomad:~$ nomad node-status\nError querying node status: Unexpected response code: 403 (Permission denied)\n\nvagrant@nomad:~$ export NOMAD_TOKEN='9fff6a06-b991-22db-7fed-55f17918e846' # Secret ID, above\nvagrant@nomad:~$ nomad node-status\n\nID DC Name Class Drain Status\n1f638a17 dc1 nomad <none> false ready\n</code></pre><h2 id=\"designing-policies\">Designing policies</h2>\n<p>Policies are a collection of (ideally, non-overlapping) roles that provide\naccess to different operations. 
The table below shows typical users of a Nomad\ncluster.</p>\n<table><thead><tr><th>Role</th><th>Namespace</th><th>Agent</th><th>Node</th><th>Remarks</th></tr></thead><tbody>\n<tr><td>Anonymous</td><td><code>deny</code></td><td><code>deny</code></td><td><code>deny</code></td><td>Unnecessary, as token-less requests are denied all operations.</td></tr>\n<tr><td>Developer</td><td><code>write</code></td><td><code>deny</code></td><td><code>read</code></td><td>Developers are permitted to debug their applications, but not to perform cluster management</td></tr>\n<tr><td>Logger</td><td><code>list-jobs</code>, <code>read-logs</code></td><td><code>deny</code></td><td><code>read</code></td><td>Automated log aggregators or analyzers that need read access to logs</td></tr>\n<tr><td>Job requester</td><td><code>submit-job</code></td><td><code>deny</code></td><td><code>deny</code></td><td>CI systems create new jobs, but don't interact with running jobs.</td></tr>\n<tr><td>Infrastructure</td><td><code>read</code></td><td><code>write</code></td><td><code>write</code></td><td>DevOps teams perform cluster management but seldom need to interact with running jobs.</td></tr>\n</tbody></table>\n<blockquote>\n<p>For namespace access, <code>read</code> is equivalent to\n<code>[read-job, list-jobs]</code>. <code>write</code> is equivalent to\n<code>[list-jobs, read-job, submit-job, read-logs, read-fs, dispatch-job]</code>.</p>\n</blockquote>\n<blockquote>\n<p>In the event that operators do need to have access to namespaces, one can\nalways create a token that has <em>both</em> Developer and Infrastructure policies\nattached. This is equivalent to having a <code>management</code> token.</p>\n</blockquote>\n<p>We have left out multi-region and multi-namespace setups here. We have assumed\neverything to be running under the <code>default</code> namespace. 
It should be noted that\non production deployments, with much larger needs, the policies could be\ndesigned per-namespace, and tracked between regions.</p>\n<h2 id=\"policy-specification\">Policy specification</h2>\n<p>Policies are expressed by a combination of rules. Note that the <code>deny</code> rule will\ntake precedence over any conflicting capability.</p>\n<p>Nomad accepts a JSON payload with the name and description of a policy, along\nwith a <em>quoted</em> JSON or HCL document with rules, like the following.</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>{\n "Description": "Agent and node management",\n "Name": "infrastructure",\n "Rules": "{\\"agent\\":{\\"policy\\":\\"write\\"},\\"node\\":{\\"policy\\":\\"write\\"}}"\n}\n</code></pre>\n<p>This policy matches what we have in the table above.\nCreate an <code>infrastructure.json</code> with the content above for use in the next step.</p>\n<blockquote>\n<p>TIP:</p>\n<p>To avoid error-prone quoting, one could write the policies in YAML:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>Name: infrastructure\nDescription: Agent and node management\nRules:\n agent:\n policy: write\n node:\n policy: write\n</code></pre>\n<p>And then, convert them to JSON with the necessary quoting, by:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>$ yaml2json < infrastructure.yaml | jq '.Rules = (.Rules | @text)' > infrastructure.json\n</code></pre></blockquote>\n<h2 id=\"adding-a-policy\">Adding a policy</h2>\n<p>To add the policy, simply make an HTTP POST request to the server. 
The\n<code>NOMAD_TOKEN</code> below is the "management" token that we first created.</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>vagrant@nomad:~$ curl \\\n --request POST \\\n --data @infrastructure.json \\\n --header "X-Nomad-Token: ${NOMAD_TOKEN}" \\\n http://127.0.0.1:4646/v1/acl/policy/infrastructure\n\nvagrant@nomad:~$ nomad acl policy list\nName Description\ninfrastructure Agent and node management\n\nvagrant@nomad:~$ nomad acl policy info infrastructure\nName = infrastructure\nDescription = Agent and node management\nRules = {"agent":{"policy":"write"},"node":{"policy":"write"}}\nCreateIndex = 425\nModifyIndex = 425\n</code></pre><h2 id=\"creating-a-token-for-a-policy\">Creating a token for a policy</h2>\n<p>We now create a token for the <code>infrastructure</code> policy, and attempt a few operations\nwith it:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>vagrant@nomad:~$ nomad acl token create \\\n -name='devops-team' \\\n -type='client' \\\n -global='true' \\\n -policy='infrastructure'\n\nAccessor ID = 927ea7a4-e689-037f-be89-54a2cdbd338c\nSecret ID = 26832c8d-9315-c1ef-aabf-2058c8632da8\nName = devops-team\nType = client\nGlobal = true\nPolicies = [infrastructure]\nCreate Time = 2018-02-15 19:53:59.97900843 +0000 UTC\nCreate Index = 432\nModify Index = 432\n\nvagrant@nomad:~$ export NOMAD_TOKEN='26832c8d-9315-c1ef-aabf-2058c8632da8' # change the token to the new one with the "infrastructure" policy attached\nvagrant@nomad:~$ nomad status\nError querying jobs: Unexpected response code: 403 (Permission denied)\n\nvagrant@nomad:~$ nomad node-status\nID DC Name Class Drain Status\n1f638a17 dc1 nomad <none> false ready\n</code></pre>\n<p>As you can see, anyone with the <code>devops-team</code> token will be allowed to\nrun operations on nodes, but not on jobs -- i.e. on namespace resources.</p>\n<h2 id=\"where-to-go-next\">Where to go next</h2>\n<p>The example above demonstrates adding one of the policies from our list at the\nbeginning. 
Adding the rest of them and trying different commands could be a\ngood exercise.</p>\n<p>As a reference, the FP Complete team maintains a\n<a href=\"https://github.com/fpco/nomad-acl-policies\">repository</a> with\npolicies ready for use.</p>\n<h4 id=\"related-articles\">Related articles</h4>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/blog/2016/11/devops-best-practices-immutability/\">DevOps best practices: immutability</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/\">How to implement containers to streamline your DevOps workflow</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/2016/05/stack-security-gnupg-keys/\">Stack security: GnuPG keys</a></li>\n</ul>\n",
"permalink": "https://www.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/",
"slug": "controlling-access-to-nomad-clusters",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Controlling access to Nomad clusters",
"description": "Learn how to control access to your Nomad clusters on a per-role basis. This will get you the benefits of application schedulers such as Nomad and Kubernetes with all the security guarantees your services need but without the complex and lengthy setup that some other popular tools demand.",
"updated": null,
"date": "2018-05-17T13:21:00Z",
"year": 2018,
"month": 5,
"day": 17,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops"
]
},
"extra": {
"author": "FP Complete Team",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/2018/05/controlling-access-to-nomad-clusters/",
"components": [
"blog",
"2018",
"05",
"controlling-access-to-nomad-clusters"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "introduction",
"permalink": "https://www.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#introduction",
"title": "Introduction",
"children": []
},
{
"level": 2,
"id": "overview",
"permalink": "https://www.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#overview",
"title": "Overview",
"children": []
},
{
"level": 2,
"id": "setup-the-environment",
"permalink": "https://www.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#setup-the-environment",
"title": "Setup the environment",
"children": []
},
{
"level": 2,
"id": "setup-nomad",
"permalink": "https://www.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#setup-nomad",
"title": "Setup Nomad",
"children": []
},
{
"level": 2,
"id": "acl-bootstrap",
"permalink": "https://www.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#acl-bootstrap",
"title": "ACL Bootstrap",
"children": []
},
{
"level": 2,
"id": "designing-policies",
"permalink": "https://www.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#designing-policies",
"title": "Designing policies",
"children": []
},
{
"level": 2,
"id": "policy-specification",
"permalink": "https://www.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#policy-specification",
"title": "Policy specification",
"children": []
},
{
"level": 2,
"id": "adding-a-policy",
"permalink": "https://www.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#adding-a-policy",
"title": "Adding a policy",
"children": []
},
{
"level": 2,
"id": "creating-a-token-for-a-policy",
"permalink": "https://www.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#creating-a-token-for-a-policy",
"title": "Creating a token for a policy",
"children": []
},
{
"level": 2,
"id": "where-to-go-next",
"permalink": "https://www.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#where-to-go-next",
"title": "Where to go next",
"children": [
{
"level": 4,
"id": "related-articles",
"permalink": "https://www.fpcomplete.com/blog/2018/05/controlling-access-to-nomad-clusters/#related-articles",
"title": "Related articles",
"children": []
}
]
}
],
"word_count": 1393,
"reading_time": 7,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/continuous-integration-delivery-best-practices.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/continuous-integration-delivery-best-practices/",
"slug": "continuous-integration-delivery-best-practices",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Best practices when implementing continuous integration and delivery",
"description": "Although there are countless reasons to ditch the old ways of development and adopt DevOps practices, the change from one to the other can be an intimidating task. Use these best practices to ensure your company succeeds during these transitions.",
"updated": null,
"date": "2018-04-11T12:49:00Z",
"year": 2018,
"month": 4,
"day": 11,
"taxonomies": {
"categories": [
"devops",
"kube360"
],
"tags": [
"devops"
]
},
"extra": {
"author": "Deni Bertovic",
"html": "hubspot-blogs/continuous-integration-delivery-best-practices.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/continuous-integration-delivery-best-practices/",
"components": [
"blog",
"continuous-integration-delivery-best-practices"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/fintech-best-practices-devops-priorities-for-financial-technology-applications.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/fintech-best-practices-devops-priorities-for-financial-technology-applications/",
"slug": "fintech-best-practices-devops-priorities-for-financial-technology-applications",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "FinTech best practices: DevOps Priorities for Financial Technology Applications",
"description": "Modern software development is complicated, but developing software for the FinTech industry adds a whole new dimension of complexity. Adopting modern DevOps principles will ensure your software adheres to FinTech best practices. This blog explains how you can get started and be successful.",
"updated": null,
"date": "2018-04-05T12:21:00Z",
"year": 2018,
"month": 4,
"day": 5,
"taxonomies": {
"categories": [
"devops",
"kube360"
],
"tags": [
"devops",
"fintech"
]
},
"extra": {
"author": "Aaron Contorer",
"html": "hubspot-blogs/fintech-best-practices-devops-priorities-for-financial-technology-applications.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/fintech-best-practices-devops-priorities-for-financial-technology-applications/",
"components": [
"blog",
"fintech-best-practices-devops-priorities-for-financial-technology-applications"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/recover-your-elasticsearch.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2018/04/recover-your-elasticsearch/",
"slug": "recover-your-elasticsearch",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Recover your Elasticsearch",
"description": "When using Elasticsearch you may run into cluster problems that could lose data because of a corrupt index. All is not lost, because there are ways to recover your Elasticsearch. Find out how to bring the cluster to a healthy state with minimal or no data loss in such a situation.",
"updated": null,
"date": "2018-04-03T13:42:00Z",
"year": 2018,
"month": 4,
"day": 3,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Alexey Kuleshevich",
"html": "hubspot-blogs/recover-your-elasticsearch.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/2018/04/recover-your-elasticsearch/",
"components": [
"blog",
"2018",
"04",
"recover-your-elasticsearch"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/without-performance-tests-we-will-have-a-bad-time-forever.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/without-performance-tests-we-will-have-a-bad-time-forever/",
"slug": "without-performance-tests-we-will-have-a-bad-time-forever",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Without performance tests, we will have a bad time, forever",
"description": "When writing Haskell software, you cannot assume performance is optimized. You must rely on automated performance tests rather than human inspection, or performance regressions will slip through and you will have a bad time.",
"updated": null,
"date": "2018-03-15T11:36:00Z",
"year": 2018,
"month": 3,
"day": 15,
"taxonomies": {
"tags": [
"haskell"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Niklas Hambüchen",
"html": "hubspot-blogs/without-performance-tests-we-will-have-a-bad-time-forever.html",
"blogimage": "/images/blog-listing/qa.png"
},
"path": "blog/without-performance-tests-we-will-have-a-bad-time-forever/",
"components": [
"blog",
"without-performance-tests-we-will-have-a-bad-time-forever"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/how-to-implement-containers-to-streamline-your-devops-workflow.md",
"content": "<h1 id=\"what-are-docker-containers\">What are Docker Containers?</h1>\n<p>Docker containers are a form of "lightweight" virtualization. They allow a\nprocess or process group to run in an environment with its own file system,\nsomewhat like <code>chroot</code> jails, and also with its own process table, users and\ngroups and, optionally, virtual network and resource limits. For most purposes,\nthe processes in a container think they have an entire OS to themselves and do\nnot have access to anything outside the container (unless explicitly granted).\nThis lets you precisely control the environment in which your processes run,\nallows multiple processes on the same (virtual) machine that have completely\ndifferent (even conflicting) requirements, and significantly increases isolation\nand container security.</p>\n<p>In addition to containers, Docker makes it easy to build and distribute images\nthat wrap up an application with its complete runtime environment.</p>\n<p>For more information, see \n<a href=\"https://www.cio.com/article/2924995/software/what-are-containers-and-why-do-you-need-them.html\">What are containers and why do you need them?</a> \nand \n<a href=\"https://containerjournal.com/2017/01/11/containers-devops-anyway/\">What Do Containers Have to Do with DevOps, Anyway?</a>.</p>\n<h1 id=\"containers-vs-virtual-machines-vms\">Containers vs Virtual Machines (VMs)</h1>\n<p>The difference between the "lightweight" virtualization of containers and\n"heavyweight" virtualization of VMs boils down to this: for the former, the\nvirtualization happens at the kernel level, while for the latter it happens at\nthe hypervisor level. 
In other words, all the containers on a machine share the\nsame kernel, and code in the kernel isolates the containers from each other\nwhereas each VM acts like separate hardware and has its own kernel.</p>\n<img alt=\"Docker Carrying Haskell.jpg\" sizes=\"(max-width: 320px) 100vw, 320px\" src=\"/images/hubspot/4536576cadee37e3ea1e0a35a83a97a55015af6773242ecda5f919a7f1628cc5.jpeg\" srcset=\"/images/hubspot/04a7b5b957c890331f8535859d7c8528eadf4d83c82ae65e86ea28fea6f82898.jpeg 160w, /images/hubspot/4536576cadee37e3ea1e0a35a83a97a55015af6773242ecda5f919a7f1628cc5.jpeg 320w, /images/hubspot/1652b04e09bee96b23e47adb5830543a1feac5a48d5488b22602cec12a1b131d.jpeg 480w, /images/hubspot/4a5e5498d817ee00db5fdc27b5827a41a41d07253d95f73e093809cd27d6ea45.jpeg 640w, /images/hubspot/d77567fd61f4146be574d81e707b90ca7f80f3005770d6ef527ff656eb9b913d.jpeg 800w, /images/hubspot/f9971781a2d67ed9b0b30a5798652fdc1975985603d9fde0b60bf89de73faa7a.jpeg 960w\" style=\"width: 320px; margin: 0px 0px 10px 10px; letter-spacing: -0.08px; float: right;\" width=\"320\">\n<p>Containers are much less resource intensive than VMs because they do not need\nto be allocated exclusive memory and file system space or have the overhead of\nrunning an entire operating system. This makes it possible to run many more\ncontainers on a machine than you would VMs. Containers start nearly as fast as\nregular processes (you don't have to wait for the OS to boot), and parts of the\nhost's file system can be easily "mounted" into the container's file system\nwithout any additional overhead of network file system protocols.</p>\n<p>On the other hand, isolation is less guaranteed. If not careful, you can\noversubscribe a machine by running containers that need more resources than the\nmachine has available (this can be mitigated by setting appropriate resource\nlimits on containers). 
While container security is an improvement over normal\nprocesses, the shared kernel means the attack surface is greater and there is\nmore risk of leakage between containers than there is between VMs.</p>\n<p>For more information, see <a href=\"https://blog.netapp.com/blogs/containers-vs-vms/\">Docker containers vs. virtual machines: What's the\ndifference?</a> and <a href=\"https://www.fpcomplete.com/blog/2016/11/devops-best-practices-immutability/\">DevOps Best\nPractices: Immutability</a>.</p>\n<h1 id=\"how-docker-containers-enhance-continuous-delivery-pipelines\">How Docker Containers Enhance Continuous Delivery Pipelines</h1>\n<p>There are, broadly, two areas where containers fit into your devops\nworkflow: for builds, and for deployment. They are often used together,\nbut do not have to be.</p>\n<h3 id=\"builds\">Builds</h3>\n<ul>\n<li>\n<p><strong>Synchronizing build environments:</strong> It can be difficult to keep\nbuild environments synchronized between developers and CI/CD\nservers, which can lead to unexpected build failures or changes in\nbehaviour. Docker images let you specify <em>exactly</em> the build tools,\nlibraries, and other dependencies (including their versions)\nrequired without needing to install them on individual machines, and\ndistribute those images easily. This way you can be sure that\neveryone is using exactly the same build environment.</p>\n</li>\n<li>\n<p><strong>Managing changes to build environments:</strong> Managing changes to\nbuild environments can also be difficult, since you need to roll\nthose out to all developers and build servers at the right time.\nThis can be especially tricky when there are multiple branches of\ndevelopment, some of which may need older or newer environments than\neach other. 
With Docker, you can specify a particular version of the\nbuild image along with the source code, which means a particular\nrevision of the source code will always build in the right\nenvironment.</p>\n</li>\n<li>\n<p><strong>Isolating build environments:</strong> One CI/CD server may have to build\nmultiple projects, which may have conflicting requirements for build\ntools, libraries, and other dependencies. By running each build in\nits own ephemeral container created from potentially different\nDocker images, you can be certain that these build environments\nwill not interfere with each other.</p>\n</li>\n</ul>\n<h3 id=\"deployment\">Deployment</h3>\n<ul>\n<li>\n<p><strong>Runtime environment bundled with application:</strong> The CD system\nbuilds a complete Docker image which bundles the application's\nenvironment with the application itself and then deploys the whole\nimage as one "atomic" step. There is no chance for configuration\nmanagement scripts to fail at deployment time, and no risk of the\nsystem configuration being out of sync.</p>\n</li>\n<li>\n<p><strong>Preventing malicious changes:</strong> Container security is improved by\nusing immutable SHA digests to identify Docker images, which makes it\nmuch harder for a malicious actor to inject malware into your\napplication or its environment.</p>\n</li>\n<li>\n<p><strong>Easily roll back to a previous version:</strong> All it takes to roll\nback is to deploy a previous version of the Docker image. 
There is\nno worrying about system configuration changes needing to be\nmanually rolled back.</p>\n</li>\n<li>\n<p><strong>Zero downtime rollouts:</strong> In conjunction with container\norchestration tools like Kubernetes, it is easy to roll out new\nimage versions with zero downtime.</p>\n</li>\n<li>\n<p><strong>High availability and horizontal scaling:</strong> Container\norchestration tools like Kubernetes make it easy to distribute the\nsame image to containers on multiple servers, and add/remove\nreplicas at will or automatically.</p>\n</li>\n<li>\n<p><strong>Sharing a server between multiple applications:</strong> Multiple\napplications, or multiple versions of the same application (e.g. a\ndev and qa deployment), can run on the same server even if they have\nconflicting dependencies, since their runtime environments are\ncompletely separate.</p>\n</li>\n<li>\n<p><strong>Isolating applications:</strong> When multiple applications are deployed\nto a server in containers, they are isolated from one another.\nContainer security means each has its own file system, processes,\nand users, so there is less risk that they interfere with each other,\nwhether accidentally or intentionally. 
When data <em>does</em> need to be shared between\napplications, parts of the host file system can be mounted into\nmultiple containers, but this is something you have full control\nover.</p>\n</li>\n</ul>\n<p>For more information, see:</p>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/blog/2017/03/continuous-integration/\">Continuous Integration: An Overview</a></li>\n<li><a href=\"https://docs.microsoft.com/en-us/dotnet/standard/containerized-lifecycle-architecture/docker-application-lifecycle/containers-foundation-for-devops-collaboration\">Containers as the foundation for DevOps collaboration</a></li>\n<li><a href=\"https://www.sumologic.com/blog/devops/how-containerization-enables-devops/\">Docker and DevOps -- Enabling DevOps Teams Through Containerization</a>.</li>\n</ul>\n<h1 id=\"implementing-containers-into-your-devops-workflow\">Implementing Containers into Your DevOps Workflow</h1>\n<p>Containers can be integrated into your DevOps toolchain incrementally.\nOften it makes sense to start with the build environment, and then move\non to the deployment environment. This is a very broad overview of the\nsteps for a simple approach, without delving into the technical details\nvery much or covering all the possible variations.</p>\n<h3 id=\"requirements\">Requirements</h3>\n<ul>\n<li>Docker Engine installed on build servers and/or application servers</li>\n<li>Access to a Docker Registry. This is where Docker images are stored\nand pulled. 
There are numerous services that provide registries, and\nit's also easy to run your own.</li>\n</ul>\n<h3 id=\"containerizing-the-build-environment\">Containerizing the build environment</h3>\n<p>Many CI/CD systems now include built-in Docker support or easily enable\nit through plugins, but <code>docker</code> is a command-line application which\ncan be called from any build script even if your CI/CD system does not\nhave explicit support.</p>\n<ol>\n<li>\n<p>Determine your build environment requirements and write\na <code>Dockerfile</code>, the specification used to build an image for build\ncontainers, basing it on an existing Docker image. If you\nalready use a configuration management tool, you can use it within\nthe Dockerfile. Always specify precise versions of base images and\ninstalled packages so that image builds are consistent and upgrades\nare deliberate.</p>\n</li>\n<li>\n<p>Build the image using <code>docker build</code> and push it to the Docker\nregistry using <code>docker push</code>.</p>\n</li>\n<li>\n<p>Create a <code>Dockerfile</code> for the application that is based on the build\nimage (specify the exact version of the base build image). This file\nbuilds the application, adds any required runtime dependencies that\naren't in the build image, and tests the application. A multi-stage\n<code>Dockerfile</code> can be used if you don't want the application\ndeployment image to include all the build dependencies.</p>\n</li>\n<li>\n<p>Modify CI build scripts to build the application image and push it\nto the Docker registry. 
The image should be tagged with the build\nnumber, and possibly additional information such as the name of the\nbranch.</p>\n</li>\n<li>\n<p>If you are not yet ready to deploy with Docker, you can extract the\nbuild artifacts from the resulting Docker image.</p>\n</li>\n</ol>\n<p>It is best to <em>also</em> integrate building the build image itself into your\ndevops automation tools.</p>\n<h3 id=\"containerizing-deployment\">Containerizing deployment</h3>\n<p>This can be easier if your CD tool has support for Docker, but that is\nby no means necessary. We also recommend deploying to a container\norchestration system such as Kubernetes in most cases.</p>\n<p>Half the work has already been done, since the build process creates and\npushes an image containing the application and its environment.</p>\n<ul>\n<li>\n<p>If using Docker directly, now it's a matter of updating deployment\nscripts to use <code>docker run</code> on the application server with the\nimage and tag that was pushed in the previous section (after\nstopping any existing container). Ideally your application accepts\nits configuration via environment variables, in which case you use\nthe <code>-e</code> argument to specify those values depending on which\nstage is being deployed. If a configuration file is used, write it\nto the host file system and then use the <code>-v</code> argument to mount\nit to the correct path in the container.</p>\n</li>\n<li>\n<p>If using a container orchestration system such as Kubernetes, you\nwill typically have the deployment script connect to the\norchestration API endpoint to trigger an image update (e.g. 
using\n<code>kubectl set image</code>, a Helm chart, or, better yet, a\n<code>kustomization</code>).</p>\n</li>\n</ul>\n<p>Once deployed, tools such as Prometheus are well suited to Docker\ncontainer monitoring and alerting, but this can be plugged into existing\nmonitoring systems as well.</p>\n<p>FP Complete has implemented this kind of DevOps workflow, and\nsignificantly more complex ones, for many clients and would love to\ncount you among them! See our <a href=\"https://www.fpcomplete.com/platformengineering/\">DevOps Services</a> page.</p>\n<p>For more information, see <a href=\"https://techbeacon.com/how-secure-container-lifecycle\">How to secure the container\nlifecycle</a> and <a href=\"https://www.fpcomplete.com/blog/2017/01/containerize-legacy-app/\">Containerizing\na legacy application: an\noverview</a>.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/",
"slug": "how-to-implement-containers-to-streamline-your-devops-workflow",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "How to Implement Containers to Streamline Your DevOps Workflow",
"description": "Many technology companies have been rapidly implementing Docker Containers to enhance their continuous delivery pipeline. However, implementing containers into your DevOps workflow can be difficult. Learn how to execute this process efficiently and securely here. ",
"updated": null,
"date": "2018-01-31T08:00:00Z",
"year": 2018,
"month": 1,
"day": 31,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"docker"
]
},
"extra": {
"author": "Emanuel Borsboom",
"blogimage": "/images/blog-listing/container.png"
},
"path": "blog/how-to-implement-containers-to-streamline-your-devops-workflow/",
"components": [
"blog",
"how-to-implement-containers-to-streamline-your-devops-workflow"
],
"summary": null,
"toc": [
{
"level": 1,
"id": "what-are-docker-containers",
"permalink": "https://www.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#what-are-docker-containers",
"title": "What are Docker Containers?",
"children": []
},
{
"level": 1,
"id": "containers-vs-virtual-machines-vms",
"permalink": "https://www.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#containers-vs-virtual-machines-vms",
"title": "Containers vs Virtual Machines (VMs)",
"children": []
},
{
"level": 1,
"id": "how-docker-containers-enhance-continuous-delivery-pipelines",
"permalink": "https://www.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#how-docker-containers-enhance-continuous-delivery-pipelines",
"title": "How Docker Containers Enhance Continuous Delivery Pipelines",
"children": [
{
"level": 3,
"id": "builds",
"permalink": "https://www.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#builds",
"title": "Builds",
"children": []
},
{
"level": 3,
"id": "deployment",
"permalink": "https://www.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#deployment",
"title": "Deployment",
"children": []
}
]
},
{
"level": 1,
"id": "implementing-containers-into-your-devops-workflow",
"permalink": "https://www.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#implementing-containers-into-your-devops-workflow",
"title": "Implementing Containers into Your DevOps Workflow",
"children": [
{
"level": 3,
"id": "requirements",
"permalink": "https://www.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#requirements",
"title": "Requirements",
"children": []
},
{
"level": 3,
"id": "containerizing-the-build-environment",
"permalink": "https://www.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#containerizing-the-build-environment",
"title": "Containerizing the build environment",
"children": []
},
{
"level": 3,
"id": "containerizing-deployment",
"permalink": "https://www.fpcomplete.com/blog/how-to-implement-containers-to-streamline-your-devops-workflow/#containerizing-deployment",
"title": "Containerizing deployment",
"children": []
}
]
}
],
"word_count": 1752,
"reading_time": 9,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/signs-your-business-needs-a-devops-consultant.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/signs-your-business-needs-a-devops-consultant/",
"slug": "signs-your-business-needs-a-devops-consultant",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Signs Your Business Needs a DevOps Consultant",
"description": "Today’s business challenges cause issues with traditional deployment models. Find out why a DevOps consultant may be right for you. ",
"updated": null,
"date": "2018-01-18T15:06:00Z",
"year": 2018,
"month": 1,
"day": 18,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"insights",
"devops"
]
},
"extra": {
"author": "Aaron Contorer",
"html": "hubspot-blogs/signs-your-business-needs-a-devops-consultant.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/signs-your-business-needs-a-devops-consultant/",
"components": [
"blog",
"signs-your-business-needs-a-devops-consultant"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/devops-value-how-to-measure-the-success-of-devops.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/devops-value-how-to-measure-the-success-of-devops/",
"slug": "devops-value-how-to-measure-the-success-of-devops",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "DevOps Value: How to Measure the Success of DevOps",
"description": "Faster time to market and lower failure rate are the beginning of the many benefits DevOps offers companies. Discover the measurable metrics and KPIs, as well as the true business value DevOps offers.",
"updated": null,
"date": "2018-01-04T13:51:00Z",
"year": 2018,
"month": 1,
"day": 4,
"taxonomies": {
"categories": [
"devops",
"insights"
],
"tags": [
"devops"
]
},
"extra": {
"author": "Robert Bobbett",
"html": "hubspot-blogs/devops-value-how-to-measure-the-success-of-devops.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/devops-value-how-to-measure-the-success-of-devops/",
"components": [
"blog",
"devops-value-how-to-measure-the-success-of-devops"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/nat-gateways-in-amazon-govcloud.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/nat-gateways-in-amazon-govcloud/",
"slug": "nat-gateways-in-amazon-govcloud",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "NAT Gateways in Amazon GovCloud",
"description": "Since AWS GovCloud has no managed NAT gateways, this task is left for you to set up. This post is the third in a series to explain how you can make it work.",
"updated": null,
"date": "2017-11-30T14:25:00Z",
"year": 2017,
"month": 11,
"day": 30,
"taxonomies": {
"tags": [
"devops",
"aws",
"govcloud"
],
"categories": [
"devops",
"kube360"
]
},
"extra": {
"author": "Yghor Kerscher",
"html": "hubspot-blogs/nat-gateways-in-amazon-govcloud.html",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "blog/nat-gateways-in-amazon-govcloud/",
"components": [
"blog",
"nat-gateways-in-amazon-govcloud"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager/",
"slug": "my-devops-journey-and-how-i-became-a-recovering-it-operations-manager",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "My DevOps Journey and How I Became a Recovering IT Operations Manager",
"description": "Learn how containerization and automated deployments laid the groundwork for what would become known as DevOps for a Fortune 500 IT company.",
"updated": null,
"date": "2017-11-15T13:30:00Z",
"year": 2017,
"month": 11,
"day": 15,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops",
"insights"
]
},
"extra": {
"author": "Steve Bogdan",
"html": "hubspot-blogs/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/my-devops-journey-and-how-i-became-a-recovering-it-operations-manager/",
"components": [
"blog",
"my-devops-journey-and-how-i-became-a-recovering-it-operations-manager"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/amazon-govcloud-has-no-route53-how-to-solve-this.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/amazon-govcloud-has-no-route53-how-to-solve-this/",
"slug": "amazon-govcloud-has-no-route53-how-to-solve-this",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Amazon GovCloud has no Route53! How to solve this?",
"description": "Since Route53 is not yet available on Amazon GovCloud, you need to find a different way to create custom DNS records for your services. We tell you how.",
"updated": null,
"date": "2017-11-08T14:12:00Z",
"year": 2017,
"month": 11,
"day": 8,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"aws",
"govcloud"
]
},
"extra": {
"author": "Yghor Kerscher",
"html": "hubspot-blogs/amazon-govcloud-has-no-route53-how-to-solve-this.html",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "blog/amazon-govcloud-has-no-route53-how-to-solve-this/",
"components": [
"blog",
"amazon-govcloud-has-no-route53-how-to-solve-this"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/intro-to-devops-on-govcloud.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/intro-to-devops-on-govcloud/",
"slug": "intro-to-devops-on-govcloud",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Intro to Devops on GovCloud",
"description": "If you have strict compliance criteria that require you to use AWS GovCloud, there are some obstacles you will encounter that we will help you address.",
"updated": null,
"date": "2017-10-26T11:02:00Z",
"year": 2017,
"month": 10,
"day": 26,
"taxonomies": {
"tags": [
"devops",
"govcloud"
],
"categories": [
"devops"
]
},
"extra": {
"author": "J Boyer",
"html": "hubspot-blogs/intro-to-devops-on-govcloud.html",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "blog/intro-to-devops-on-govcloud/",
"components": [
"blog",
"intro-to-devops-on-govcloud"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/credstash.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2017/08/credstash/",
"slug": "credstash",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Manage Secrets on AWS with credstash and terraform",
"description": "Managing secrets is hard. Moving them around securely is even harder. Learn how to get secrets to the Cloud with terraform and credstash.",
"updated": null,
"date": "2017-08-28T15:00:00Z",
"year": 2017,
"month": 8,
"day": 28,
"taxonomies": {
"tags": [
"devops",
"aws"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Alexey Kuleshevich",
"html": "hubspot-blogs/credstash.html",
"blogimage": "/images/blog-listing/aws.png"
},
"path": "blog/2017/08/credstash/",
"components": [
"blog",
"2017",
"08",
"credstash"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/functional-programming-and-modern-devops.md",
"content": "<p>In this presentation, Aaron Contorer presents on how modern tools can\nbe used to reach the Engineering sweet spot.</p>\n<iframe width=\"100%\" height=\"315\"\nsrc=\"https://www.youtube.com/embed/ybSBCVhVWs8\" frameborder=\"0\"\nallow=\"accelerometer; autoplay; encrypted-media; gyroscope;\npicture-in-picture\" allowfullscreen></iframe>\n<br>\n<br>\n<h2 id=\"do-you-know-fp-complete\">Do you know FP Complete</h2>\n<p>At FP Complete, we do so many things to help companies it’s hard to\nencapsulate our impact in a few words. They say a picture is worth a\nthousand words, so a video has to be worth 10,000 words (at\nleast). Therefore, to tell all we can in as little time as possible,\ncheck out our explainer video. It’s only 108 seconds to get the full\nstory of FP Complete.</p>\n<iframe allowfullscreen=\n \"allowfullscreen\" height=\"315\" src=\n \"https://www.youtube.com/embed/JCcuSn_lFKs\"\n target=\"_blank\" width=\n \"100%\"></iframe>\n<br>\n<br>\n<p>Reach us to on <a href=\"mailto:sales@fpcomplete.com\">sales@fpcomplete.com</a> if you have suggestions or if\nyou would like to learn more about FP Complete and the services we\noffer.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/functional-programming-and-modern-devops/",
"slug": "functional-programming-and-modern-devops",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Functional Programming and Modern DevOps",
"description": "In this presentation, Aaron Contorer presents on how modern tools can be used to reach the Engineering sweet spot.",
"updated": null,
"date": "2017-08-11",
"year": 2017,
"month": 8,
"day": 11,
"taxonomies": {
"categories": [
"functional programming",
"devops"
],
"tags": [
"devops",
"haskell",
"insights"
]
},
"extra": {
"author": "Aaron Contorer",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/functional-programming-and-modern-devops/",
"components": [
"blog",
"functional-programming-and-modern-devops"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "do-you-know-fp-complete",
"permalink": "https://www.fpcomplete.com/blog/functional-programming-and-modern-devops/#do-you-know-fp-complete",
"title": "Do you know FP Complete",
"children": []
}
],
"word_count": 162,
"reading_time": 1,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/continuous-integration.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2017/03/continuous-integration/",
"slug": "continuous-integration",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Continuous Integration: an overview",
"description": "Continuous integration makes development teams more productive and releases less stressful. Catch regressions quickly and deploy applications automatically.",
"updated": null,
"date": "2017-03-03T17:11:00Z",
"year": 2017,
"month": 3,
"day": 3,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops"
]
},
"extra": {
"author": "Emanuel Borsboom",
"html": "hubspot-blogs/continuous-integration.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/2017/03/continuous-integration/",
"components": [
"blog",
"2017",
"03",
"continuous-integration"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/immutability-docker-haskells-st-type.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2017/02/immutability-docker-haskells-st-type/",
"slug": "immutability-docker-haskells-st-type",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Immutability, Docker, and Haskell's ST type",
"description": "Immutability in software development is a well known constant in functional programming but is relatively new in modern devops and the parallels are worth examining.",
"updated": null,
"date": "2017-02-13T15:24:00Z",
"year": 2017,
"month": 2,
"day": 13,
"taxonomies": {
"tags": [
"haskell",
"docker",
"devops"
],
"categories": [
"functional programming",
"devops"
]
},
"extra": {
"author": "Michael Snoyman",
"html": "hubspot-blogs/immutability-docker-haskells-st-type.html",
"blogimage": "/images/blog-listing/docker.png"
},
"path": "blog/2017/02/immutability-docker-haskells-st-type/",
"components": [
"blog",
"2017",
"02",
"immutability-docker-haskells-st-type"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/quickcheck.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2017/01/quickcheck/",
"slug": "quickcheck",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "QuickCheck and Magic of Testing",
"description": "Discover the power of random testing in Haskell with QuickCheck. Learn how to use function properties and software specification to write bug-free software.",
"updated": null,
"date": "2017-01-24T14:24:00Z",
"year": 2017,
"month": 1,
"day": 24,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"haskell"
]
},
"extra": {
"author": "Alexey Kuleshevich",
"html": "hubspot-blogs/quickcheck.html",
"blogimage": "/images/blog-listing/qa.png"
},
"path": "blog/2017/01/quickcheck/",
"components": [
"blog",
"2017",
"01",
"quickcheck"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/containerize-legacy-app.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2017/01/containerize-legacy-app/",
"slug": "containerize-legacy-app",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Containerizing a legacy application: an overview",
"description": "Running your legacy apps in Docker containers takes the pain out of deployment and puts you on a path to modern practices. Learn what is involved in containerizing your app.",
"updated": null,
"date": "2017-01-12T15:45:00Z",
"year": 2017,
"month": 1,
"day": 12,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Emanuel Borsboom",
"html": "hubspot-blogs/containerize-legacy-app.html",
"blogimage": "/images/blog-listing/container.png"
},
"path": "blog/2017/01/containerize-legacy-app/",
"components": [
"blog",
"2017",
"01",
"containerize-legacy-app"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/devops-best-practices-multifaceted-testing.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2016/11/devops-best-practices-multifaceted-testing/",
"slug": "devops-best-practices-multifaceted-testing",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Devops best practices: Multifaceted Testing",
"description": ".",
"updated": null,
"date": "2016-11-28T18:00:00Z",
"year": 2016,
"month": 11,
"day": 28,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Aaron Contorer",
"html": "hubspot-blogs/devops-best-practices-multifaceted-testing.html",
"blogimage": "/images/blog-listing/qa.png"
},
"path": "blog/2016/11/devops-best-practices-multifaceted-testing/",
"components": [
"blog",
"2016",
"11",
"devops-best-practices-multifaceted-testing"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/devops-best-practices-immutability.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2016/11/devops-best-practices-immutability/",
"slug": "devops-best-practices-immutability",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Devops best practices: Immutability",
"description": ".",
"updated": null,
"date": "2016-11-13T18:00:00Z",
"year": 2016,
"month": 11,
"day": 13,
"taxonomies": {
"tags": [
"devops"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Aaron Contorer",
"html": "hubspot-blogs/devops-best-practices-immutability.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/2016/11/devops-best-practices-immutability/",
"components": [
"blog",
"2016",
"11",
"devops-best-practices-immutability"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/docker-demons-pid1-orphans-zombies-signals.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2016/10/docker-demons-pid1-orphans-zombies-signals/",
"slug": "docker-demons-pid1-orphans-zombies-signals",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Docker demons: PID-1, orphans, zombies, and signals",
"description": ".",
"updated": null,
"date": "2016-10-05T02:00:00Z",
"year": 2016,
"month": 10,
"day": 5,
"taxonomies": {
"categories": [
"devops"
],
"tags": [
"devops",
"docker"
]
},
"extra": {
"author": "Michael Snoyman",
"html": "hubspot-blogs/docker-demons-pid1-orphans-zombies-signals.html",
"blogimage": "/images/blog-listing/docker.png"
},
"path": "blog/2016/10/docker-demons-pid1-orphans-zombies-signals/",
"components": [
"blog",
"2016",
"10",
"docker-demons-pid1-orphans-zombies-signals"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/docker-split-images.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2015/12/docker-split-images/",
"slug": "docker-split-images",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "The split-image approach to building minimal runtime Docker images",
"description": ".",
"updated": null,
"date": "2015-12-15T00:00:00Z",
"year": 2015,
"month": 12,
"day": 15,
"taxonomies": {
"tags": [
"devops",
"docker"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Emanuel Borsboom",
"html": "hubspot-blogs/docker-split-images.html",
"blogimage": "/images/blog-listing/docker.png"
},
"path": "blog/2015/12/docker-split-images/",
"components": [
"blog",
"2015",
"12",
"docker-split-images"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/kubernetes.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2015/11/kubernetes/",
"slug": "kubernetes",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Kubernetes for Haskell Services",
"description": ".",
"updated": null,
"date": "2015-11-19T19:00:00Z",
"year": 2015,
"month": 11,
"day": 19,
"taxonomies": {
"tags": [
"haskell",
"kubernetes"
],
"categories": [
"devops"
]
},
"extra": {
"author": "Tim Dysinger",
"html": "hubspot-blogs/kubernetes.html",
"blogimage": "/images/blog-listing/kubernetes.png"
},
"path": "blog/2015/11/kubernetes/",
"components": [
"blog",
"2015",
"11",
"kubernetes"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/distributing-packages-without-sysadmin.md",
"content": "",
"permalink": "https://www.fpcomplete.com/blog/2015/05/distributing-packages-without-sysadmin/",
"slug": "distributing-packages-without-sysadmin",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Distributing our packages without a sysadmin",
"description": ".",
"updated": null,
"date": "2015-05-13T00:00:00Z",
"year": 2015,
"month": 5,
"day": 13,
"taxonomies": {
"categories": [
"insights",
"devops"
],
"tags": [
"devops"
]
},
"extra": {
"author": "Michael Snoyman",
"html": "hubspot-blogs/distributing-packages-without-sysadmin.html",
"blogimage": "/images/blog-listing/devops.png"
},
"path": "blog/2015/05/distributing-packages-without-sysadmin/",
"components": [
"blog",
"2015",
"05",
"distributing-packages-without-sysadmin"
],
"summary": null,
"toc": [],
"word_count": 0,
"reading_time": 0,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
}
]
},
{
"name": "devsecops",
"slug": "devsecops",
"permalink": "https://www.fpcomplete.com/categories/devsecops/",
"pages": [
{
"relative_path": "blog/cloud-native.md",
"content": "<p>You hear "go Cloud-Native," but if you're like many, you wonder, "what does that mean, and how can applying a Cloud-Native strategy help my company's Dev Team be more productive?"\nAt a high level, Cloud-Native architecture means adapting to the many new possibilities—but a very different set of architectural constraints—offered by the cloud compared to traditional on-premises infrastructure.</p>\n<p>Cloud-Native architecture optimizes systems and software for the cloud. This optimization creates an efficient way to utilize the platform by streamlining the processes and workflows. This is accomplished by harnessing the cloud's inherent strengths: </p>\n<ul>\n<li>its flexibility, </li>\n<li>on-demand infrastructure; and </li>\n<li>robust managed services. </li>\n</ul>\n<p>Cloud-native computing couples these strengths with cloud-optimized technologies such as microservices, containers, and continuous delivery. Cloud-Native takes advantage of the cloud's distributed, scalable and adaptable nature. By doing this, Cloud-Native will maximize your dev team's focus on writing code, reducing operational tasks, creating business value, and keeping your customers happy by building high-impact applications faster, without compromising on quality. You might even think you can’t do cloud-native without using one of the big cloud providers- this simply isn’t true, many of the benefits of cloud-native are the approaches and emphasis on better tooling around automation.</p>\n<h2 id=\"why-move-to-cloud-native-now\">Why Move to Cloud-Native Now?</h2>\n<p><em>#1 - High-Frequency Software Release</em></p>\n<p>Faster and more frequent updates and new features releases allow your organization to respond to user needs in near real-time, increasing user retention. For example, new software versions with novel features can be released incrementally and more often as they become available. 
In addition, Cloud-native makes high-frequency software releases possible via continuous integration (CI) and continuous deployment (CD), where full version commits are no longer needed. Instead, one can modify, test, and commit just a few lines of code continuously and automatically to meet changing customer trends, thereby giving your organization an edge. </p>\n<p><em>#2 - Automatic Software Updates</em></p>\n<p>One of the most valuable Cloud-native features is automation. For example, updates are deployed automatically without interfering with core applications or the user base. Automated infrastructure redundancy can move applications between data centers as needed with little to zero human intervention. Even scalability, testing, and resource allocation can be automated. There are many automation tools available in the marketplace, such as FP Complete Corporation's widely accepted tool, <a href=\"https://www.fpcomplete.com/products/kube360/\">Kube360</a>.</p>\n<p><em>#3 - Greater Protection from Software Failures</em></p>\n<p>Isolation of containers is another important cloud-native feature. Software failures and bugs can be traced to a specific microservice version, rolled back, or fixed quickly. Software fixes can be tested in isolation without compromising the stability of the entire application. On the other hand, if there's a widespread failure, automation can restore the application to a previous stable state, minimizing downtime. Automated DevOps testing before code goes to production (for example, linting and software scrubbing) drives faster bug detection and resolution, reducing the risk of bugs in production.</p>\n<h2 id=\"wow-cloud-native-seems-perfect-what-s-the-catch\">WOW – Cloud-Native Seems Perfect – What's the Catch?</h2>\n<p>Switching over to Cloud-Native architecture requires a thorough assessment of your existing application setup. 
The biggest question you and your team need to ask before making any moves is, "should our business modernize our current applications, or should we build new applications from scratch and utilize Cloud-Native development practices?"</p>\n<p>If you choose to modernize your existing application, you will save time and money by capitalizing on the cloud's agility, flexibility, and scalability. Your dev team can retain existing application functionality and business logic, re-architect into a Cloud-Native app, and containerize to utilize the cloud platform's strengths.</p>\n<p>You can also build a net-new application using Cloud-Native development practices instead of upgrading your legacy applications. Building from scratch may make more sense from a corporate culture, risk management, and regulatory compliance standpoint. You keep running old application code unchanged while developing and phasing in the new platform. Building new applications also allows dev teams to develop applications free from prior architectural constraints, allowing developers to experiment and deliver innovation to users.</p>\n<h2 id=\"three-essential-tools-for-successful-cloud-native-architecture\">Three Essential Tools for Successful Cloud-Native Architecture</h2>\n<p>Whether you decide to create a new Cloud-Native application or modernize your existing ones, your dev team needs to use these three tools for successful implementation of Cloud-Native Architecture:</p>\n<ol>\n<li><em>Microservices Architecture</em>. </li>\n</ol>\n<p>A cloud-native microservice architecture is considered a "best practice" architectural approach for creating cloud applications because each application is composed of a set of services. Each service runs its own processes and communicates through clearly defined APIs, which provide good foundations for continuous delivery. 
With microservices, ideally each service is independently deployable. This architecture allows each service to be updated independently without interfering with another service. This results in:</p>\n<ul>\n<li>reduced downtime for users; </li>\n<li>simplified troubleshooting; and </li>\n<li>minimized disruptions even if a problem is identified, \nallowing for high-frequency updates and continuous delivery. </li>\n</ul>\n<ol start=\"2\">\n<li><em>Container-based Infrastructure Platform</em>.</li>\n</ol>\n<p>Now that your microservice architecture is broken down into individual container-based services, the next essential tool is a system to manage all those containers automatically, known as a ‘container orchestrator’. The most widely accepted platform is Kubernetes, an open-source system originally developed at Google and now maintained by the Cloud Native Computing Foundation. It runs containerized applications and automates their deployment, storage, scaling, scheduling, load balancing, and updates, and it monitors containers across clusters of hosts. Kubernetes is supported by all major public cloud providers, including Azure, AWS, Google Cloud Platform, and Oracle Cloud.</p>\n<ol start=\"3\">\n<li><em>CI/CD Pipeline</em>.</li>\n</ol>\n<p>A CI/CD Pipeline is the third essential tool for a cloud-native environment to work seamlessly. Continuous integration and continuous delivery embody a set of operating principles and a collection of practices that allow dev teams to deliver code changes more frequently and reliably. This implementation is known as the CI/CD Pipeline. By automating deployment processes, the CI/CD pipeline allows your dev team to focus on:</p>\n<ul>\n<li>meeting business requirements; </li>\n<li>code quality; and </li>\n<li>security. \nCI/CD tools preserve the environment-specific parameters that must be included with each delivery. 
CI/CD automation then performs any necessary service calls to web servers, databases, and other services that may require a restart or follow other procedures when applications are deployed.</li>\n</ul>\n<h2 id=\"cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use\">Cloud-Native Isn't Plug & Play – Is there a Comprehensive Tool that my Dev Team Can Use?</h2>\n<p>As you can probably guess, countless tools make up the cloud-native architecture. Unfortunately, these tools are complex, require separate authentication, and frequently do not interact with each other. In essence, you are expected to integrate these cloud tools yourself as a user. We at FP Complete became frustrated with this approach. So, to save time and provide a turn-key solution, we created Kube360. Kube360 puts all the necessary tools into one easy-to-use toolbox, accessed via a single sign-on, and operating as a fully integrated environment. Kube360 combines best practices, technologies, and processes into one complete package, and it has proven an effective tool across multiple customer deployments. In addition, Kube360 supports multiple cloud providers and on-premises infrastructure. Kube360 is vendor agnostic, fully customizable, and has no vendor lock-in.</p>\n<p><strong>Kube360 - Centralized Management</strong>. Kube360 employs centralized management, which increases your dev team's productivity through:</p>\n<ul>\n<li>single sign-on functionality </li>\n<li>faster installation and setup</li>\n<li>quick access to all tools</li>\n<li>automation of logs, backups, and alerts</li>\n</ul>\n<p>This simplified administration hides frequent login complexities and allows single sign-on through existing company identity management. Kube360 also streamlines tool authentication and access, eliminating many standard security holes. 
In the background, Kube360 automatically runs everyday tasks such as backups, log aggregation, and alerts.</p>\n<p><strong>Kube360 - Automated Features</strong>. Kube360's automated features include:</p>\n<ul>\n<li>automatic backups of the etcd config;</li>\n<li>log aggregation and indexing of all services; and</li>\n<li>integrated monitoring and alert framework.</li>\n</ul>\n<p><strong>Kube360 - Kubernetes Tooling Features</strong>. Kube360 simplifies Kubernetes management and allows you to take advantage of many cloud-native features such as:</p>\n<ul>\n<li>autoscaling, to stay cost efficient with growing and shrinking demands on systems;</li>\n<li>high availability;</li>\n<li>health checks; and</li>\n<li>integrated secrets management.</li>\n</ul>\n<p><strong>Kube360 - Service Mesh</strong>.</p>\n<ul>\n<li>Mutual TLS based encryption within the cluster</li>\n<li>Tracing tools</li>\n<li>Rerouting traffic</li>\n<li>Canary deployments</li>\n</ul>\n<p><strong>Kube360 - Integration</strong>.</p>\n<ul>\n<li>Integrates into existing AWS & Azure infrastructures</li>\n<li>Deploys into existing VPCs</li>\n<li>Leverages existing subnets</li>\n<li>Communicates with components outside of Kube360</li>\n<li>Supports multiple clusters per organization</li>\n<li>Installed by FP Complete team or customer</li>\n</ul>\n<p>As you can see – Kube360 is one of the most comprehensive tools you can rely on for Cloud Native architecture. Kube360 is your one-stop, fully integrated enterprise Kubernetes ecosystem. Kube360 standardizes containerization, software deployment, fault tolerance, auto-scaling, auto-healing, and security - by design. Kube360's modular, standardized architecture mitigates proprietary lock-in, high support costs, and obsolescence. In addition, Kube360 delivers a seamless deployment experience for you and your team.\nFind out how Kube360 can make your business more efficient, more reliable, and more secure, all in a fraction of the time. 
Speed up your dev team's productivity - <a href=\"https://www.fpcomplete.com/contact-us/\">Contact us today!</a></p>\n",
"permalink": "https://www.fpcomplete.com/blog/cloud-native/",
"slug": "cloud-native",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Confused about Cloud-Native? Want to speed up your dev team's productivity?",
"description": "Learn about Cloud-Native architecture.",
"updated": null,
"date": "2022-01-17",
"year": 2022,
"month": 1,
"day": 17,
"taxonomies": {
"categories": [
"devsecops",
"devops"
],
"tags": [
"kubernetes",
"cloud native"
]
},
"extra": {
"author": "FP Complete",
"keywords": "devsecops, devops",
"blogimage": "/images/blog-listing/cloud-computing.png"
},
"path": "blog/cloud-native/",
"components": [
"blog",
"cloud-native"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "why-move-to-cloud-native-now",
"permalink": "https://www.fpcomplete.com/blog/cloud-native/#why-move-to-cloud-native-now",
"title": "Why Move to Cloud-Native Now?",
"children": []
},
{
"level": 2,
"id": "wow-cloud-native-seems-perfect-what-s-the-catch",
"permalink": "https://www.fpcomplete.com/blog/cloud-native/#wow-cloud-native-seems-perfect-what-s-the-catch",
"title": "WOW – Cloud-Native Seems Perfect – What's the Catch?",
"children": []
},
{
"level": 2,
"id": "three-essential-tools-for-successful-cloud-native-architecture",
"permalink": "https://www.fpcomplete.com/blog/cloud-native/#three-essential-tools-for-successful-cloud-native-architecture",
"title": "Three Essential Tools for Successful Cloud-Native Architecture",
"children": []
},
{
"level": 2,
"id": "cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use",
"permalink": "https://www.fpcomplete.com/blog/cloud-native/#cloud-native-isn-t-plug-play-is-there-a-comprehensive-tool-that-my-dev-team-can-use",
"title": "Cloud-Native Isn't Plug & Play – Is there a Comprehensive Tool that my Dev Team Can Use?",
"children": []
}
],
"word_count": 1482,
"reading_time": 8,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
}
]
},
{
"name": "functional programming",
"slug": "functional-programming",
"permalink": "https://www.fpcomplete.com/categories/functional-programming/",
"pages": [
{
"relative_path": "blog/hidden-dangers-of-ratio.md",
"content": "<p>Here's a new Haskell <a href=\"https://www.destroyallsoftware.com/talks/wat\"><strong>WAT?!</strong></a></p>\n<p>Haskell has a type <code>Rational</code> for working with precisely-valued fractional numbers, and it models the mathematical concept of a <a href=\"https://en.wikipedia.org/wiki/Rational_number\">rational number</a>. Although it's relatively slow compared with <code>Double</code>, it doesn't suffer from the rounding that's intrinsic to floating-point arithmetic. It's very useful when writing tests because an exact result can be predicted ahead of time. For example, a computation that should produce zero will produce exactly zero rather than a small value within some range that would have to be determined.</p>\n<p><code>Rational</code> is actually a (monomorphic) specialization of the more general (polymorphic) type <code>Ratio</code> (from <a href=\"https://hackage.haskell.org/package/base/docs/Data-Ratio.html\"><code>Data.Ratio</code></a>). <code>Ratio</code> allows you to specify the underlying type used for the numerator and denominator. For example, to work with rational numbers using <code>Int</code> as the underlying type you can use <code>Ratio Int</code>. For the common case of using <code>Integer</code> as the underlying type, the type synonym <code>Rational</code> is provided:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">type </span><span style=\"color:#cb4b16;\">Rational </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">Ratio Integer\n</span></code></pre>\n<p>It's tempting to use <code>Ratio</code> with a fixed-width type like <code>Int</code> because <code>Int</code> is much faster than <code>Integer</code>. 
However, let's see what can happen if you do this:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">λ</span><span style=\"color:#859900;\">> </span><span style=\"color:#cb4b16;\">import </span><span style=\"color:#859900;\">Data.Int</span><span style=\"color:#657b83;\">\nλ</span><span style=\"color:#859900;\">> </span><span style=\"color:#cb4b16;\">import </span><span style=\"color:#859900;\">Data.Ratio</span><span style=\"color:#657b83;\">\nλ</span><span style=\"color:#859900;\">> let</span><span style=\"color:#657b83;\"> r </span><span style=\"color:#859900;\">= </span><span style=\"color:#6c71c4;\">1 </span><span style=\"color:#859900;\">% </span><span style=\"color:#6c71c4;\">12 </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Rational </span><span style=\"color:#859900;\">in</span><span style=\"color:#657b83;\"> r </span><span style=\"color:#859900;\">-</span><span style=\"color:#657b83;\"> r </span><span style=\"color:#859900;\">== </span><span style=\"color:#6c71c4;\">0\n</span><span style=\"color:#cb4b16;\">True</span><span style=\"color:#657b83;\">\nλ</span><span style=\"color:#859900;\">> let</span><span style=\"color:#657b83;\"> r </span><span style=\"color:#859900;\">= </span><span style=\"color:#6c71c4;\">1 </span><span style=\"color:#859900;\">% </span><span style=\"color:#6c71c4;\">12 </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Ratio Int8 </span><span style=\"color:#859900;\">in</span><span style=\"color:#657b83;\"> r </span><span style=\"color:#859900;\">-</span><span style=\"color:#657b83;\"> r </span><span style=\"color:#859900;\">== </span><span style=\"color:#6c71c4;\">0\n</span><span style=\"color:#cb4b16;\">False\n</span></code></pre>\n<p><strong>WAT?!</strong></p>\n<p>Let's see what those subtracted values evaluate to:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">λ</span><span style=\"color:#859900;\">> 
let</span><span style=\"color:#657b83;\"> r </span><span style=\"color:#859900;\">= </span><span style=\"color:#6c71c4;\">1 </span><span style=\"color:#859900;\">% </span><span style=\"color:#6c71c4;\">12 </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Rational </span><span style=\"color:#859900;\">in</span><span style=\"color:#657b83;\"> r </span><span style=\"color:#859900;\">-</span><span style=\"color:#657b83;\"> r\n</span><span style=\"color:#6c71c4;\">0 </span><span style=\"color:#859900;\">% </span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">\nλ</span><span style=\"color:#859900;\">> let</span><span style=\"color:#657b83;\"> r </span><span style=\"color:#859900;\">= </span><span style=\"color:#6c71c4;\">1 </span><span style=\"color:#859900;\">% </span><span style=\"color:#6c71c4;\">12 </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Ratio Int8 </span><span style=\"color:#859900;\">in</span><span style=\"color:#657b83;\"> r </span><span style=\"color:#859900;\">-</span><span style=\"color:#657b83;\"> r\n</span><span style=\"color:#6c71c4;\">0 </span><span style=\"color:#859900;\">%</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#859900;\">-</span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">)\n</span></code></pre>\n<p>Hmmm, let's see if that <code>Ratio Int8</code> value is considered equal to <code>0</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">λ</span><span style=\"color:#859900;\">> let</span><span style=\"color:#657b83;\"> r </span><span style=\"color:#859900;\">= </span><span style=\"color:#6c71c4;\">0 </span><span style=\"color:#859900;\">%</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#859900;\">-</span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">) </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Ratio 
Int8 </span><span style=\"color:#859900;\">in</span><span style=\"color:#657b83;\"> r </span><span style=\"color:#859900;\">== </span><span style=\"color:#6c71c4;\">0\n</span><span style=\"color:#cb4b16;\">True\n</span></code></pre>\n<p><strong>WAT?!</strong></p>\n<p>Let's see what those manually-entered values are:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">λ</span><span style=\"color:#859900;\">> </span><span style=\"color:#6c71c4;\">0 </span><span style=\"color:#859900;\">%</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#859900;\">-</span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">) </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Ratio Int8\n</span><span style=\"color:#6c71c4;\">0 </span><span style=\"color:#859900;\">% </span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">\nλ</span><span style=\"color:#859900;\">> </span><span style=\"color:#6c71c4;\">0 </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Ratio Int8\n</span><span style=\"color:#6c71c4;\">0 </span><span style=\"color:#859900;\">% </span><span style=\"color:#6c71c4;\">1\n</span></code></pre>\n<p>OK, so these values really are equal, but why are the values in the subtraction different? The explanation is two-fold.</p>\n<p>First, <code>0 % (-1)</code> is a denormalized state for <code>Ratio</code> and shouldn't occur. (As you've probably suspected, it arises from integer overflow. More on that in a minute.) It's not too surprising, then, that it isn't equal to <code>0</code>.</p>\n<p>But why is it equal to <code>0</code> when we enter it directly? 
It's because <code>%</code> is a function not a constructor, and it normalizes the signs of the numerator and denominator before constructing the value:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">x </span><span style=\"color:#859900;\">%</span><span style=\"color:#657b83;\"> y </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> reduce (x * signum y) (abs y)\n</span></code></pre>\n<p>The underlying assumption (the <em>invariant</em>) is that denominators will always be positive.</p>\n<p><code>reduce</code> is a function that reduces the numerator and denominator to their lowest terms, by dividing by the greatest common divisor:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">reduce x y </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> (x </span><span style=\"color:#859900;\">`quot`</span><span style=\"color:#657b83;\"> d) </span><span style=\"color:#859900;\">:%</span><span style=\"color:#657b83;\"> (y </span><span style=\"color:#859900;\">`quot`</span><span style=\"color:#657b83;\"> d)\n </span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\"> d </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> gcd x y\n</span></code></pre>\n<p>Here you can see the constructor that actually creates the values from their components, which is <code>:%</code>. It's not exported from <code>Data.Ratio</code> and the "smart constructor" <code>%</code> is used instead, to ensure that new <code>Ratio</code> values always satisfy the invariant.</p>\n<p>Second, addition and subtraction are implemented without trying to minimize the possibility of integer overflow. 
For example:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">(x </span><span style=\"color:#859900;\">:%</span><span style=\"color:#657b83;\"> y) </span><span style=\"color:#859900;\">-</span><span style=\"color:#657b83;\"> (x' </span><span style=\"color:#859900;\">:%</span><span style=\"color:#657b83;\"> y') </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> reduce (x * y' </span><span style=\"color:#859900;\">-</span><span style=\"color:#657b83;\"> x' * y) (y * y')\n</span></code></pre>\n<p>If <code>y * y'</code> overflows to a negative value, <code>reduce</code> will not normalize the signs. The result of <code>gcd</code> is always non-negative so the signs don't change and denormalized values are never renormalized. That happens only in <code>%</code> when constructing <code>Ratio</code> values.</p>\n<p>Let's look at what happens in our example:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">λ</span><span style=\"color:#859900;\">></span><span style=\"color:#657b83;\"> x </span><span style=\"color:#859900;\">= </span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">; y </span><span style=\"color:#859900;\">= </span><span style=\"color:#6c71c4;\">12</span><span style=\"color:#657b83;\">; x' </span><span style=\"color:#859900;\">= </span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">; y' </span><span style=\"color:#859900;\">= </span><span style=\"color:#6c71c4;\">12</span><span style=\"color:#657b83;\">\nλ</span><span style=\"color:#859900;\">></span><span style=\"color:#657b83;\"> x * y' </span><span style=\"color:#859900;\">-</span><span style=\"color:#657b83;\"> x' * y </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Int8\n</span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">\nλ</span><span style=\"color:#859900;\">></span><span style=\"color:#657b83;\"> y * y' 
</span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Int8\n</span><span style=\"color:#859900;\">-</span><span style=\"color:#6c71c4;\">112</span><span style=\"color:#657b83;\">\nλ</span><span style=\"color:#859900;\">></span><span style=\"color:#657b83;\"> gcd </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#859900;\">-</span><span style=\"color:#6c71c4;\">112</span><span style=\"color:#657b83;\">)\n</span><span style=\"color:#6c71c4;\">112</span><span style=\"color:#657b83;\">\nλ</span><span style=\"color:#859900;\">> </span><span style=\"color:#6c71c4;\">0 </span><span style=\"color:#859900;\">`quot` </span><span style=\"color:#6c71c4;\">112\n0</span><span style=\"color:#657b83;\">\nλ</span><span style=\"color:#859900;\">></span><span style=\"color:#657b83;\"> (</span><span style=\"color:#859900;\">-</span><span style=\"color:#6c71c4;\">112</span><span style=\"color:#657b83;\">) </span><span style=\"color:#859900;\">`quot` </span><span style=\"color:#6c71c4;\">112\n</span><span style=\"color:#859900;\">-</span><span style=\"color:#6c71c4;\">1\n</span></code></pre>\n<p>The reduced result of <code>1 % 12 - 1 % 12</code> is therefore the denormalized value <code>0 :% (-1)</code> which isn't considered equal to the normalized value <code>0 % 1</code>.</p>\n<p>Even though <code>12</code> is much less than <code>maxBound :: Int8</code>, when squared it results in integer overflow. 
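</p>
<p>If you do need a small component type, one possible workaround (a hypothetical helper of mine, not something <code>Data.Ratio</code> provides) is to widen to <code>Rational</code>, do the arithmetic at unbounded precision, and narrow back only when the reduced result actually fits:</p>

```haskell
import Data.Int (Int8)
import Data.Ratio (Ratio, (%), numerator, denominator)

-- Hypothetical helper (not part of Data.Ratio): widen to Rational, subtract
-- there (no overflow possible), then narrow back only if the reduced
-- numerator and denominator both fit in Int8.
safeSub :: Ratio Int8 -> Ratio Int8 -> Maybe (Ratio Int8)
safeSub a b
  | fits n && fits d = Just (fromInteger n % fromInteger d)
  | otherwise        = Nothing
  where
    widen r = toInteger (numerator r) % toInteger (denominator r) :: Rational
    res     = widen a - widen b
    n       = numerator res
    d       = denominator res
    fits x  = x >= toInteger (minBound :: Int8)
           && x <= toInteger (maxBound :: Int8)

main :: IO ()
main = do
  print (safeSub (1 % 12) (1 % 12))  -- Just (0 % 1), a properly normalized zero
  print (safeSub (1 % 100) (1 % 99)) -- Nothing: -1/9900 doesn't fit in Int8
```

<p>Here <code>safeSub (1 % 12) (1 % 12)</code> gives <code>Just (0 % 1)</code> instead of the denormalized <code>0 :% (-1)</code>, and a result whose denominator exceeds <code>maxBound :: Int8</code> is rejected outright rather than silently wrapping.</p>
<p>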
The implementation of <code>Num</code> for <code>Ratio</code> is <em>not</em> designed to avoid overflows and they can happen very easily with numerators and denominators that are much less than the <code>maxBound</code> for the type.</p>\n<p>The implementation could have used a slightly different approach:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">(x </span><span style=\"color:#859900;\">:%</span><span style=\"color:#657b83;\"> y) </span><span style=\"color:#859900;\">-</span><span style=\"color:#657b83;\"> (x' </span><span style=\"color:#859900;\">:%</span><span style=\"color:#657b83;\"> y') </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> reduce (x * z' </span><span style=\"color:#859900;\">-</span><span style=\"color:#657b83;\"> x' * z) (y * z')\n </span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\"> z </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> y </span><span style=\"color:#859900;\">`quot`</span><span style=\"color:#657b83;\"> d\n z' </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> y' </span><span style=\"color:#859900;\">`quot`</span><span style=\"color:#657b83;\"> d\n d </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> gcd y y'\n</span></code></pre>\n<p>However, the use of <code>reduce</code> is still necessary (consider <code>3 % 10 - 2 % 15</code>) so this requires two more divisions and a <code>gcd</code> compared with the actual implementation.</p>\n<p>Using a type as small as <code>Int8</code> might seem a little unrealistic, but the problem can occur with any fixed-width integral type and I used <code>Int8</code> for the illustration because it's easier to understand the problem when working with small values. I originally encountered it when using <code>Ratio Int</code> even though <code>Int</code> has a very large <code>maxBound</code>. 
I was writing property tests using QuickCheck for some polymorphic arithmetic code that was supposed to produce a zero sum as a result. The test succeeded with <code>Rational</code> and failed with <code>Ratio Int</code> and I couldn't understand why because the random values being generated by the test framework had numerators and denominators far less than <code>maxBound :: Int</code>. However, they <em>were</em> greater than its square root.</p>\n<p>The documentation for <code>Ratio</code> says:</p>\n<blockquote>\n<p>Note that Ratio's instances inherit the deficiencies from the type parameter's.\nFor example, <code>Ratio Natural</code>'s <code>Num</code> instance has similar problems to <code>Natural</code>'s.</p>\n</blockquote>\n<p>However, that doesn't really prepare you for what might happen with other type parameters! The moral of this story is that <code>Ratio</code> isn't much use on its own and you should always use <code>Rational</code> unless you <em>really</em> understand what you're getting into.</p>\n<h2 id=\"further-reading\">Further reading</h2>\n<p>Like that blog post? Check out the <a href=\"https://www.fpcomplete.com/haskell/\">Haskell section</a> of our site with tutorials and other blog posts. You can also check out all <a href=\"/tags/haskell/\">Haskell tagged blog posts</a>.</p>\n<p><strong>We're hiring.</strong> Interested in working with our team on solving these kinds of WAT issues? Check out our <a href=\"https://www.fpcomplete.com/jobs/\">jobs page</a> for more information.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/hidden-dangers-of-ratio/",
"slug": "hidden-dangers-of-ratio",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "The Hidden Dangers of Haskell's Ratio Type",
"description": "Haskell's Rational datatype is useful, but using a type other than Integer with Ratio leads to surprises.",
"updated": null,
"date": "2022-04-27",
"year": 2022,
"month": 4,
"day": 27,
"taxonomies": {
"categories": [
"functional programming"
],
"tags": [
"haskell"
]
},
"extra": {
"author": "Neil Mayhew",
"author_avatar": "/images/staff/neil-mayhew.png",
"blogimage": "/images/blog-listing/functional.png"
},
"path": "blog/hidden-dangers-of-ratio/",
"components": [
"blog",
"hidden-dangers-of-ratio"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "further-reading",
"permalink": "https://www.fpcomplete.com/blog/hidden-dangers-of-ratio/#further-reading",
"title": "Further reading",
"children": []
}
],
"word_count": 1001,
"reading_time": 6,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/hiring-haskell-developers.md",
"content": "<p>FP Complete is actively seeking multiple engineers to work with our globally distributed team of software engineers. This blog post is to announce a new job opening for a developer role focused on Haskell development. At all times you can find an <a href=\"https://www.fpcomplete.com/jobs/\">up-to-date listing of job openings on our jobs page</a>. Below is the job information on the new Haskell position.</p>\n<hr />\n<h2 id=\"senior-haskell-engineer\">Senior Haskell Engineer</h2>\n<p>FP Complete is an engineering consulting firm specializing in reliable, automated server-side systems. Our customers span the globe and cover such diverse industries as FinTech, life sciences, academia, and blockchain. Our software, systems, and DevOps engineers are a remote-first team who love to solve complicated problems well, delivering elegant and robust solutions to complex problems.</p>\n<p>We're seeking to expand our team of Haskell developers with at least one additional team member. The focus of this role is to augment our existing team working on customer-facing projects. Our goal is to improve stability and performance of the codebases while adding additional features and integration points.</p>\n<p>If you're looking to work on interesting projects with a team of experienced Haskell engineers, keep reading for more details, and be sure to send us your CV at <a href=\"mailto:jobs@fpcomplete.com\">jobs@fpcomplete.com</a>.</p>\n<p><strong>Location</strong>: Fully remote<br />\n<strong>Type of engagement</strong>: Preference for full time, though part time positions may be available for the right candidate.</p>\n<h3 id=\"requirements\">Requirements</h3>\n<p>We are looking for software developers with professional development experience. Developers with significant Haskell knowledge but no prior Haskell professional work experience are welcome to apply. 
We strive to create an environment where theoretical Haskell skills can be applied to real-world codebases.</p>\n<ul>\n<li>No specific location requirements, work from anywhere. You just need a good internet connection and the ability to communicate well in English, both in writing and orally.</li>\n<li>4+ years professional software development experience</li>\n<li>2+ years experience with Haskell. Professional experience or open source contributions are ideal, though demonstrable knowledge through personal projects will work as well.</li>\n<li>Passion to learn and hone new skills</li>\n<li>Ability to communicate clearly and consistently with a remote team, including coworkers and customers</li>\n<li>Experience with FP Complete approaches to Haskell is a plus, such as the RIO library and exception handling best practices.</li>\n<li>Experience working with SQL databases, and ideally Haskell libraries for working with SQL such as <code>persistent</code>.</li>\n</ul>\n<p>Additionally, the following skills are a huge plus:</p>\n<ul>\n<li>Experience with CI/CD management, ideally for Haskell projects, but general experience is helpful too</li>\n<li>Infrastructure management, especially cloud</li>\n<li>Server software development and debugging</li>\n<li>Skillsets matching other FP Complete job postings, such as DevOps, Rust, Scala, or frontend development</li>\n</ul>\n<h3 id=\"why-fp-complete\">Why FP Complete</h3>\n<p>FP Complete is an engineer-driven organization. We strive to foster an environment where engineers can create excellent solutions that they’re proud of. You will have an opportunity to work with, learn from, and mentor other engineers across the globe with a variety of different skill sets, including DevOps engineers, web developers, high performance computing experts, and compiler authors. We try to give every team member opportunities to learn, grow, and thrive. 
This includes cross training on projects, as well as regular internal collaboration and training meetings on general engineering topics, Haskell, Rust, and DevOps.</p>\n<p>For our entire ten-year history, FP Complete has been a <strong>remote-first</strong> company, with no central office. We offer flexible work hours and location. You don’t need to worry about missing the in-office discussions, as the entire team communicates exclusively remotely.</p>\n<p>We service a wide range of industries with customers of various sizes and differing tech stacks. While the work can be challenging, it offers great opportunities to get a broad view of the industry in general.</p>\n<p>We are also strong proponents of open-source software. As a company, and as individuals on our team, we maintain a large swath of open-source projects, including many critical pieces of Haskell infrastructure, plus Rust and DevOps projects as well. Our approach to DevOps always follows a strong OSS bias.</p>\n<p>Learn more about what we do at <a href=\"https://www.fpcomplete.com/\">https://www.fpcomplete.com/</a>.</p>\n<h3 id=\"how-to-apply\">How to apply</h3>\n<p>To apply for this position, please send a cover letter and CV/resume to <a href=\"mailto:jobs@fpcomplete.com\">jobs@fpcomplete.com</a>.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/hiring-haskell-developers/",
"slug": "hiring-haskell-developers",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Hiring Haskell Developers",
"description": "FP Complete has multiple engineering roles open, including several Haskell developer positions",
"updated": null,
"date": "2022-04-04",
"year": 2022,
"month": 4,
"day": 4,
"taxonomies": {
"categories": [
"functional programming"
],
"tags": [
"haskell"
]
},
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"blogimage": "/images/blog-listing/functional.png"
},
"path": "blog/hiring-haskell-developers/",
"components": [
"blog",
"hiring-haskell-developers"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "senior-haskell-engineer",
"permalink": "https://www.fpcomplete.com/blog/hiring-haskell-developers/#senior-haskell-engineer",
"title": "Senior Haskell Engineer",
"children": [
{
"level": 3,
"id": "requirements",
"permalink": "https://www.fpcomplete.com/blog/hiring-haskell-developers/#requirements",
"title": "Requirements",
"children": []
},
{
"level": 3,
"id": "why-fp-complete",
"permalink": "https://www.fpcomplete.com/blog/hiring-haskell-developers/#why-fp-complete",
"title": "Why FP Complete",
"children": []
},
{
"level": 3,
"id": "how-to-apply",
"permalink": "https://www.fpcomplete.com/blog/hiring-haskell-developers/#how-to-apply",
"title": "How to apply",
"children": []
}
]
}
],
"word_count": 690,
"reading_time": 4,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/axum-hyper-tonic-tower-part4.md",
"content": "<p>This is the fourth and final post in a series on combining web and gRPC services into a single service using Tower, Hyper, Axum, and Tonic. The full four parts are:</p>\n<ol>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/\">Overview of Tower</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/\">Understanding Hyper, and first experiences with Axum</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/\">Demonstration of Tonic for a gRPC client/server</a></li>\n<li>Today's post: How to combine Axum and Tonic services into a single service</li>\n</ol>\n<h2 id=\"single-port-two-protocols\">Single port, two protocols</h2>\n<p>That heading is a lie. Both an Axum web application and a gRPC server speak the same protocol: HTTP/2. It may be more fair to say they speak different dialects of it. But importantly, it's trivially easy to look at a request and determine whether it wants to talk to the gRPC server or not. gRPC requests will all include the header <code>Content-Type: application/grpc</code>. So our final step today is to write something that can accept both a gRPC <code>Service</code> and a normal <code>Service</code>, and return one unified service. Let's do it! 
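</p>
<p>In other words, the dispatch rule is a single header test. As a standalone sketch (the function name is mine; the real code in the repo works on hyper's request and header types), the predicate looks like this. I've used a prefix check here, since gRPC content types can carry a suffix such as <code>application/grpc+proto</code>:</p>

```rust
// Route a request to the gRPC service iff its Content-Type starts with
// "application/grpc"; everything else falls through to the web service.
fn is_grpc(content_type: Option<&str>) -> bool {
    content_type.map_or(false, |ct| ct.starts_with("application/grpc"))
}

fn main() {
    assert!(is_grpc(Some("application/grpc")));
    assert!(is_grpc(Some("application/grpc+proto")));
    assert!(!is_grpc(Some("text/html")));
    assert!(!is_grpc(None));
    println!("routing predicate behaves as expected");
}
```

<p>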
For reference, complete code is in <a href=\"https://github.com/snoyberg/tonic-example/blob/master/src/bin/server-hybrid.rs\"><code>src/bin/server-hybrid.rs</code></a>.</p>\n<p>Let's start off with our <code>main</code> function, and demonstrate what we want this thing to look like:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">#[</span><span style=\"color:#268bd2;\">tokio</span><span style=\"color:#657b83;\">::</span><span style=\"color:#268bd2;\">main</span><span style=\"color:#657b83;\">]\nasync </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> addr = SocketAddr::from(([</span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">], </span><span style=\"color:#6c71c4;\">3000</span><span style=\"color:#657b83;\">));\n\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> axum_make_service = axum::Router::new()\n .</span><span style=\"color:#859900;\">route</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">/</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, axum::handler::get(|| async { </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Hello world!</span><span style=\"color:#839496;\">" </span><span style=\"color:#657b83;\">}))\n .</span><span style=\"color:#859900;\">into_make_service</span><span style=\"color:#657b83;\">();\n\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> grpc_service = tonic::transport::Server::builder()\n .</span><span 
style=\"color:#859900;\">add_service</span><span style=\"color:#657b83;\">(EchoServer::new(MyEcho))\n .</span><span style=\"color:#859900;\">into_service</span><span style=\"color:#657b83;\">();\n\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> hybrid_make_service = </span><span style=\"color:#859900;\">hybrid</span><span style=\"color:#657b83;\">(axum_make_service, grpc_service);\n\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> server = hyper::Server::bind(</span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">addr).</span><span style=\"color:#859900;\">serve</span><span style=\"color:#657b83;\">(hybrid_make_service);\n\n </span><span style=\"color:#859900;\">if </span><span style=\"color:#268bd2;\">let </span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e) = server.await {\n </span><span style=\"color:#859900;\">eprintln!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">server error: </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, e);\n }\n}\n</span></code></pre>\n<p>We set up simplistic <code>axum_make_service</code> and <code>grpc_service</code> values, and then use the <code>hybrid</code> function to combine them into a single service. Notice the difference in those names, and the fact that we called <code>into_make_service</code> for the former and <code>into_service</code> for the latter. Believe it or not, that's going to cause us a lot of pain very shortly.</p>\n<p>Anyway, with that yet-to-be-explained <code>hybrid</code> function, spinning up a hybrid server is a piece of cake. But the devil's in the details!</p>\n<p>Also: there are simpler ways of going about the code below using trait objects. 
I avoided any type erasure techniques, since (1) I thought the code was a bit clearer this way, and (2) it turns into a nicer tutorial in my opinion. The one exception is that I <em>am</em> using a trait object for errors, since Hyper itself does so, and it simplifies the code significantly to use the same error representation across services.</p>\n<h1 id=\"defining-hybrid\">Defining <code>hybrid</code></h1>\n<p>Our <code>hybrid</code> function is going to return a <code>HybridMakeService</code> value:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">hybrid</span><span style=\"color:#657b83;\"><MakeWeb, Grpc>(</span><span style=\"color:#268bd2;\">make_web</span><span style=\"color:#657b83;\">: MakeWeb, </span><span style=\"color:#268bd2;\">grpc</span><span style=\"color:#657b83;\">: Grpc) -> HybridMakeService<MakeWeb, Grpc> {\n HybridMakeService { make_web, grpc }\n}\n\n</span><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">HybridMakeService</span><span style=\"color:#657b83;\"><MakeWeb, Grpc> {\n </span><span style=\"color:#268bd2;\">make_web</span><span style=\"color:#657b83;\">: MakeWeb,\n </span><span style=\"color:#268bd2;\">grpc</span><span style=\"color:#657b83;\">: Grpc,\n}\n</span></code></pre>\n<p>I'm going to be consistent and verbose with the type variable names throughout. Here, we have the type variables <code>MakeWeb</code> and <code>Grpc</code>. This reflects the difference between what Axum and Tonic provide from an API perspective. We'll need to provide Axum's <code>MakeWeb</code> with connection information in order to get the request-handling <code>Service</code>. 
With <code>Grpc</code>, we won't have to do that.</p>\n<p>In any event, we're ready to implement our <code>Service</code> for <code>HybridMakeService</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">impl</span><span style=\"color:#657b83;\"><ConnInfo, MakeWeb, Grpc> Service<ConnInfo> </span><span style=\"color:#859900;\">for </span><span style=\"color:#b58900;\">HybridMakeService</span><span style=\"color:#657b83;\"><MakeWeb, Grpc>\n</span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n MakeWeb: Service<ConnInfo>,\n Grpc: Clone,\n{\n </span><span style=\"color:#93a1a1;\">// ...\n</span><span style=\"color:#657b83;\">}\n</span></code></pre>\n<p>We have the two expected type variables <code>MakeWeb</code> and <code>Grpc</code>, as well as <code>ConnInfo</code>, to represent whatever connection information we're given. <code>Grpc</code> won't care about that at all, but the <code>ConnInfo</code> must match up with what <code>MakeWeb</code> is receiving. Therefore, we have the bound <code>MakeWeb: Service<ConnInfo></code>. The <code>Grpc: Clone</code> bound will make sense shortly.</p>\n<p>When we receive an incoming connection, we'll need to do two things:</p>\n<ul>\n<li>Get a new <code>Service</code> from <code>MakeWeb</code>. Doing this may happen asynchronously, and may fail with an error.\n<ul>\n<li><strong>SIDE NOTE</strong> If you remember the actual implementation of Axum, we know for a fact that neither of these is true. Getting a <code>Service</code> from an Axum <code>IntoMakeService</code> will always succeed, and never does any async work. 
But there are no APIs in Axum exposing this fact, so we're stuck behind the <code>Service</code> API.</li>\n</ul>\n</li>\n<li>Clone the <code>Grpc</code> we already have.</li>\n</ul>\n<p>Once we have the new <code>Web</code> <code>Service</code> and the cloned <code>Grpc</code>, we'll wrap these up into a new <code>struct</code>, <code>HybridService</code>. We're also going to need some help to perform the necessary async actions, so we'll create a new helper <code>Future</code> type. This all looks like:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Response </span><span style=\"color:#657b83;\">= HybridService<</span><span style=\"color:#268bd2;\">MakeWeb::</span><span style=\"color:#657b83;\">Response, Grpc>;\n</span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Error </span><span style=\"color:#657b83;\">= MakeWeb::Error;\n</span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Future </span><span style=\"color:#657b83;\">= HybridMakeServiceFuture<</span><span style=\"color:#268bd2;\">MakeWeb::</span><span style=\"color:#657b83;\">Future, Grpc>;\n\n</span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">poll_ready</span><span style=\"color:#657b83;\">(\n </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">,\n </span><span style=\"color:#268bd2;\">cx</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#657b83;\">std::task::Context,\n) -> std::task::Poll<</span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><(), </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>> {\n </span><span style=\"color:#d33682;\">self</span><span 
style=\"color:#657b83;\">.make_web.</span><span style=\"color:#859900;\">poll_ready</span><span style=\"color:#657b83;\">(cx)\n}\n\n</span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">call</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">conn_info</span><span style=\"color:#657b83;\">: ConnInfo) -> </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Future {\n HybridMakeServiceFuture {\n web_future: </span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.make_web.</span><span style=\"color:#859900;\">call</span><span style=\"color:#657b83;\">(conn_info),\n grpc: </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.grpc.</span><span style=\"color:#859900;\">clone</span><span style=\"color:#657b83;\">()),\n }\n}\n</span></code></pre>\n<p>Note that we're deferring to <code>self.make_web</code> to say it's ready and passing along its errors. 
Let's tie this piece off by looking at <code>HybridMakeServiceFuture</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">#[</span><span style=\"color:#268bd2;\">pin_project</span><span style=\"color:#657b83;\">]\n</span><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">HybridMakeServiceFuture</span><span style=\"color:#657b83;\"><WebFuture, Grpc> {\n #[</span><span style=\"color:#268bd2;\">pin</span><span style=\"color:#657b83;\">]\n </span><span style=\"color:#268bd2;\">web_future</span><span style=\"color:#657b83;\">: WebFuture,\n </span><span style=\"color:#268bd2;\">grpc</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><Grpc>,\n}\n\n</span><span style=\"color:#268bd2;\">impl</span><span style=\"color:#657b83;\"><WebFuture, Web, WebError, Grpc> Future </span><span style=\"color:#859900;\">for </span><span style=\"color:#b58900;\">HybridMakeServiceFuture</span><span style=\"color:#657b83;\"><WebFuture, Grpc>\n</span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n WebFuture: Future<Output = </span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><Web, WebError>>,\n{\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Output </span><span style=\"color:#657b83;\">= </span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><HybridService<Web, Grpc>, WebError>;\n\n </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">poll</span><span style=\"color:#657b83;\">(</span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">: Pin<</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">Self</span><span style=\"color:#657b83;\">>, </span><span style=\"color:#268bd2;\">cx</span><span style=\"color:#657b83;\">: </span><span 
style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#657b83;\">std::task::Context) -> Poll<</span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Output> {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> this = </span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">project</span><span style=\"color:#657b83;\">();\n </span><span style=\"color:#859900;\">match</span><span style=\"color:#657b83;\"> this.web_future.</span><span style=\"color:#859900;\">poll</span><span style=\"color:#657b83;\">(cx) {\n Poll::Pending </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Pending,\n Poll::Ready(</span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e)) </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Ready(</span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e)),\n Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(web)) </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(HybridService {\n web,\n grpc: this.grpc.</span><span style=\"color:#859900;\">take</span><span style=\"color:#657b83;\">().</span><span style=\"color:#859900;\">expect</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Cannot poll twice!</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">),\n })),\n }\n }\n}\n</span></code></pre>\n<p>We need to pull in <a href=\"https://lib.rs/crates/pin-project\"><code>pin_project</code></a> to allow us to project the pinned web future inside our <code>poll</code> implementation. 
(If you're not familiar with <code>pin_project</code>, don't worry, we'll describe things later on with <code>HybridFuture</code>.) When we poll <code>web_future</code>, we could end up in one of three states:</p>\n<ul>\n<li><code>Pending</code>: the <code>MakeWeb</code> isn't ready, so we aren't ready either</li>\n<li><code>Ready(Err(e))</code>: the <code>MakeWeb</code> failed, so we pass along the error</li>\n<li><code>Ready(Ok(web))</code>: the <code>MakeWeb</code> is successful, so package up the new <code>web</code> value with the <code>grpc</code> value</li>\n</ul>\n<p>There's some funny business with that <code>this.grpc.take()</code> to get the cloned <code>Grpc</code> value out of the <code>Option</code>. <code>Future</code>s have an invariant that, once they return <code>Ready</code>, they cannot be polled again. Therefore, it's safe to assume that <code>take</code> will only ever be called once. But all of this pain could be avoided if Axum exposed an <code>into_service</code> method instead.</p>\n<h2 id=\"hybridservice\"><code>HybridService</code></h2>\n<p>The previous types will ultimately produce a <code>HybridService</code>. 
Let's look at what that is:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">HybridService</span><span style=\"color:#657b83;\"><Web, Grpc> {\n </span><span style=\"color:#268bd2;\">web</span><span style=\"color:#657b83;\">: Web,\n </span><span style=\"color:#268bd2;\">grpc</span><span style=\"color:#657b83;\">: Grpc,\n}\n\n</span><span style=\"color:#268bd2;\">impl</span><span style=\"color:#657b83;\"><Web, Grpc, WebBody, GrpcBody> Service<Request<Body>> </span><span style=\"color:#859900;\">for </span><span style=\"color:#b58900;\">HybridService</span><span style=\"color:#657b83;\"><Web, Grpc>\n</span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n Web: Service<Request<Body>, Response = Response<WebBody>>,\n Grpc: Service<Request<Body>, Response = Response<GrpcBody>>,\n </span><span style=\"color:#268bd2;\">Web::</span><span style=\"color:#657b83;\">Error: </span><span style=\"color:#859900;\">Into</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn std::error::Error </span><span style=\"color:#859900;\">+ Send + Sync + </span><span style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">>>,\n </span><span style=\"color:#268bd2;\">Grpc::</span><span style=\"color:#657b83;\">Error: </span><span style=\"color:#859900;\">Into</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn std::error::Error </span><span style=\"color:#859900;\">+ Send + Sync + </span><span style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">>>,\n{\n </span><span style=\"color:#93a1a1;\">// ...\n</span><span style=\"color:#657b83;\">}\n</span></code></pre>\n<p>This <code>HybridService</code> will take <code>Request<Body></code> as input. 
The underlying <code>Web</code> and <code>Grpc</code> will also take <code>Request<Body></code> as input, but they'll produce slightly different output: either <code>Response<WebBody></code> or <code>Response<GrpcBody></code>. We're going to need to somehow unify those body representations. As mentioned above, we're going to use trait objects for error handling, so no unification there is necessary.</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Response </span><span style=\"color:#657b83;\">= Response<HybridBody<WebBody, GrpcBody>>;\n</span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Error </span><span style=\"color:#657b83;\">= </span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn std::error::Error </span><span style=\"color:#859900;\">+ Send + Sync + </span><span style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">>;\n</span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Future </span><span style=\"color:#657b83;\">= HybridFuture<</span><span style=\"color:#268bd2;\">Web::</span><span style=\"color:#657b83;\">Future, </span><span style=\"color:#268bd2;\">Grpc::</span><span style=\"color:#657b83;\">Future>;\n</span></code></pre>\n<p>The associated <code>Response</code> type is going to be a <code>Response<...></code> as well, but its body is going to be the <code>HybridBody<WebBody, GrpcBody></code> type. We'll get to that later. Similarly, we have two different <code>Future</code>s that may get called, depending on the kind of request. We need to unify over that with a <code>HybridFuture</code> type.</p>\n<p>Next, let's look at <code>poll_ready</code>. We need to check for both <code>Web</code> and <code>Grpc</code> being ready for a new request. And each check can result in one of three cases: <code>Pending</code>, <code>Ready(Err)</code>, or <code>Ready(Ok)</code>. 
This function is all about pattern matching and unifying the error representation using <code>.into()</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">poll_ready</span><span style=\"color:#657b83;\">(\n </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">,\n </span><span style=\"color:#268bd2;\">cx</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#657b83;\">std::task::Context<'</span><span style=\"color:#859900;\">_</span><span style=\"color:#657b83;\">>,\n) -> std::task::Poll<</span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><(), </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>> {\n </span><span style=\"color:#859900;\">match </span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.web.</span><span style=\"color:#859900;\">poll_ready</span><span style=\"color:#657b83;\">(cx) {\n Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(())) </span><span style=\"color:#859900;\">=> match </span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.grpc.</span><span style=\"color:#859900;\">poll_ready</span><span style=\"color:#657b83;\">(cx) {\n Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(())) </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(())),\n Poll::Ready(</span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e)) </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Ready(</span><span 
style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e.</span><span style=\"color:#859900;\">into</span><span style=\"color:#657b83;\">())),\n Poll::Pending </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Pending,\n },\n Poll::Ready(</span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e)) </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Ready(</span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e.</span><span style=\"color:#859900;\">into</span><span style=\"color:#657b83;\">())),\n Poll::Pending </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Pending,\n }\n}\n</span></code></pre>\n<p>And finally, we can see <code>call</code>, where the real logic we're trying to accomplish lives. This is where we get to look at the request and determine where to route it:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">call</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">req</span><span style=\"color:#657b83;\">: Request<Body>) -> </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Future {\n </span><span style=\"color:#859900;\">if</span><span style=\"color:#657b83;\"> req.</span><span style=\"color:#859900;\">headers</span><span style=\"color:#657b83;\">().</span><span style=\"color:#859900;\">get</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">content-type</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">).</span><span style=\"color:#859900;\">map</span><span style=\"color:#657b83;\">(|</span><span 
style=\"color:#268bd2;\">x</span><span style=\"color:#657b83;\">| x.</span><span style=\"color:#859900;\">as_bytes</span><span style=\"color:#657b83;\">()) == </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#268bd2;\">b</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">application/grpc</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">) {\n HybridFuture::Grpc(</span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.grpc.</span><span style=\"color:#859900;\">call</span><span style=\"color:#657b83;\">(req))\n } </span><span style=\"color:#859900;\">else </span><span style=\"color:#657b83;\">{\n HybridFuture::Web(</span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.web.</span><span style=\"color:#859900;\">call</span><span style=\"color:#657b83;\">(req))\n }\n}\n</span></code></pre>\n<p>Amazing. All of this work for essentially 5 lines of meaningful code!</p>\n<h2 id=\"hybridfuture\"><code>HybridFuture</code></h2>\n<p>That's it, we're at the end! The final type we're going to analyze in this series is <code>HybridFuture</code>. (There's also a <code>HybridBody</code> type, but it's similar enough to <code>HybridFuture</code> that it doesn't warrant its own explanation.) 
The <code>struct</code>'s definition is:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">#[</span><span style=\"color:#268bd2;\">pin_project</span><span style=\"color:#657b83;\">(project </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> HybridFutureProj)]\n</span><span style=\"color:#268bd2;\">enum </span><span style=\"color:#b58900;\">HybridFuture</span><span style=\"color:#657b83;\"><WebFuture, GrpcFuture> {\n Web(#[</span><span style=\"color:#268bd2;\">pin</span><span style=\"color:#657b83;\">] WebFuture),\n Grpc(#[</span><span style=\"color:#268bd2;\">pin</span><span style=\"color:#657b83;\">] GrpcFuture),\n}\n</span></code></pre>\n<p>Like before, we're using <code>pin_project</code>. This time, let's explore why. The interface for the <code>Future</code> trait requires pinned pointers in memory. Specifically, the first argument to <code>poll</code> is <code>self: Pin<&mut Self></code>. Rust itself never gives any guarantees about object permanence, and that's absolutely critical to writing an async runtime system.</p>\n<p>The <code>poll</code> method on <code>HybridFuture</code> is therefore going to receive an argument of type <code>Pin<&mut HybridFuture></code>. The problem is that we need to call the <code>poll</code> method on the underlying <code>WebBody</code> or <code>GrpcBody</code>. Assuming we have the <code>Web</code> variant, the problem we face is that pattern matching on <code>HybridFuture</code> will give us a <code>&WebFuture</code> or <code>&mut WebFuture</code>. It won't give us a <code>Pin<&mut WebFuture></code>, which is what we need!</p>\n<p><code>pin_project</code> makes a projected data type, and provides a method <code>.project()</code> on the original that gives us those pinned mutable references instead. 
This allows us to implement the <code>Future</code> trait for <code>HybridFuture</code> correctly, like so:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">impl</span><span style=\"color:#657b83;\"><WebFuture, GrpcFuture, WebBody, GrpcBody, WebError, GrpcError> Future\n </span><span style=\"color:#859900;\">for </span><span style=\"color:#b58900;\">HybridFuture</span><span style=\"color:#657b83;\"><WebFuture, GrpcFuture>\n</span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n WebFuture: Future<Output = </span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><Response<WebBody>, WebError>>,\n GrpcFuture: Future<Output = </span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><Response<GrpcBody>, GrpcError>>,\n WebError: </span><span style=\"color:#859900;\">Into</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn std::error::Error </span><span style=\"color:#859900;\">+ Send + Sync + </span><span style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">>>,\n GrpcError: </span><span style=\"color:#859900;\">Into</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn std::error::Error </span><span style=\"color:#859900;\">+ Send + Sync + </span><span style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">>>,\n{\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Output </span><span style=\"color:#657b83;\">= </span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><\n Response<HybridBody<WebBody, GrpcBody>>,\n </span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn std::error::Error </span><span style=\"color:#859900;\">+ Send + Sync + </span><span style=\"color:#586e75;\">'static</span><span 
style=\"color:#657b83;\">>,\n >;\n\n </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">poll</span><span style=\"color:#657b83;\">(</span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">: Pin<</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">Self</span><span style=\"color:#657b83;\">>, </span><span style=\"color:#268bd2;\">cx</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#657b83;\">std::task::Context) -> Poll<</span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Output> {\n </span><span style=\"color:#859900;\">match </span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">project</span><span style=\"color:#657b83;\">() {\n HybridFutureProj::Web(a) </span><span style=\"color:#859900;\">=> match</span><span style=\"color:#657b83;\"> a.</span><span style=\"color:#859900;\">poll</span><span style=\"color:#657b83;\">(cx) {\n Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(res)) </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(res.</span><span style=\"color:#859900;\">map</span><span style=\"color:#657b83;\">(HybridBody::Web))),\n Poll::Ready(</span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e)) </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Ready(</span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e.</span><span style=\"color:#859900;\">into</span><span style=\"color:#657b83;\">())),\n Poll::Pending </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Pending,\n },\n 
HybridFutureProj::Grpc(b) </span><span style=\"color:#859900;\">=> match</span><span style=\"color:#657b83;\"> b.</span><span style=\"color:#859900;\">poll</span><span style=\"color:#657b83;\">(cx) {\n Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(res)) </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(res.</span><span style=\"color:#859900;\">map</span><span style=\"color:#657b83;\">(HybridBody::Grpc))),\n Poll::Ready(</span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e)) </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Ready(</span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e.</span><span style=\"color:#859900;\">into</span><span style=\"color:#657b83;\">())),\n Poll::Pending </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">Poll::Pending,\n },\n }\n }\n}\n</span></code></pre>\n<p>We unify together the successful response bodies with the <code>HybridBody</code> <code>enum</code> and use a trait object for error handling. And now we're presenting a single unified type for both types of requests. Huzzah!</p>\n<h2 id=\"conclusions\">Conclusions</h2>\n<p>Thank you dear reader for getting through these posts. I hope it was helpful. I definitely felt more comfortable with the Tower/Hyper ecosystem after diving into these details like this. 
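</p>\n<p>To recap the heart of the series: the entire routing decision between web and gRPC traffic reduces to a single header check. As a hypothetical distillation (this helper does not appear in the repository; it merely restates the condition from <code>HybridService::call</code>):</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">// Hypothetical helper restating the routing rule from HybridService::call.\nfn is_grpc(req: &Request<Body>) -> bool {\n    req.headers().get(&quot;content-type&quot;).map(|x| x.as_bytes())\n        == Some(b&quot;application/grpc&quot;)\n}\n</span></code></pre>\n<p>Everything else in the series exists to satisfy the type checker around that one conditional. 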
Let's sum up some highlights from this series:</p>\n<ul>\n<li>Tower provides a Rusty interface called <code>Service</code> for async functions from inputs to outputs, or requests to responses, which may fail\n<ul>\n<li>Don't forget, there are two levels of async behavior in this interface: checking whether the <code>Service</code> is ready and then waiting for it to complete processing</li>\n</ul>\n</li>\n<li>HTTP itself necessitates two levels of async functions: a <code>type InnerService = Request -> IO Response</code> for individual requests, and <code>type OuterService = ConnectionInfo -> IO InnerService</code> for the overall connection</li>\n<li>Hyper provides a concrete server implementation that can accept things that look like <code>OuterService</code> and run them\n<ul>\n<li>It uses a lot of traits, some of which are not publicly exposed, to generalize</li>\n<li>It provides significant flexibility in the request and response body representation</li>\n<li>The helper functions <code>service_fn</code> and <code>make_service_fn</code> are a common way to create the two levels of <code>Service</code> necessary</li>\n</ul>\n</li>\n<li>Axum is a lightweight framework sitting on top of Hyper, and exposing a lot of its interface</li>\n<li>gRPC is an HTTP/2-based protocol which can be hosted via Hyper using the Tonic library</li>\n<li>Dispatching between an Axum service and gRPC is conceptually easy: just check the <code>content-type</code> header to see if something is a gRPC request</li>\n<li>But to make that happen, we need a bunch of helper &quot;hybrid&quot; types to unify the different types between Axum and Tonic</li>\n<li>A lot of the time, you can get away with trait objects to enable type erasure, but hybrid <code>Either</code>-style <code>enum</code>s work as well\n<ul>\n<li>While they're more verbose, they may also be clearer</li>\n<li>There's also a potential performance gain by avoiding dynamic dispatch</li>\n</ul>\n</li>\n</ul>\n<p>If you want to review 
it, remember that a complete project is available on GitHub at <a href=\"https://github.com/snoyberg/tonic-example\">https://github.com/snoyberg/tonic-example</a>.</p>\n<p>Finally, some more subjective takeaways from me:</p>\n<ul>\n<li>I'm overall liking Axum, and I'm already using it for a new client project.</li>\n<li>I do wish it was a little higher level, and that the type errors weren't quite as intimidating. I think there may be some room in this space for more aggressive type erasure-focused frameworks, exchanging a bit of runtime performance for significantly simpler ergonomics.</li>\n<li>I'm also looking at rewriting our Zehut product to leverage Axum. So far, it's gone pretty well, but other responsibilities have taken me off of that work for the foreseeable future. And there are some <a href=\"https://github.com/tokio-rs/axum/issues/200\">painful compilation issues</a> to be aware of.\n<ul>\n<li><strong>UPDATE January 23, 2022</strong> As <a href=\"https://twitter.com/rbtcollins/status/1484559351490744330?s=21\">pointed out on Twitter</a>, Axum has fixed this issue in newer versions. I've actually already used this improvement in other projects since then, but forgot to update the blog post. Thanks for the reminder Robert!</li>\n</ul>\n</li>\n<li>I do miss strongly typed routes, but overall I'd rather use something like Axum than push farther with <code>routetype</code>. In the future, though, I may look into providing some <code>routetype</code>/<code>axum</code> bridge.</li>\n</ul>\n<p>If this kind of content was helpful, and you're interested in more in the future, please consider <a href=\"https://blogtrottr.com/?subscribe=https://www.fpcomplete.com/feed/\">subscribing to our blog</a>. 
Let me know (<a href=\"https://twitter.com/snoyberg\">on Twitter</a> or elsewhere) if you have any requests for additional content like this.</p>\n<p>If you're looking for more Rust content, check out:</p>\n<ul>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n</ul>\n",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/",
"slug": "axum-hyper-tonic-tower-part4",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 4",
"description": "Part 4 of a blog post series examining the Hyper/Tower web ecosystem in Rust, and specifically combining the Axum framework and Tonic gRPC servers.",
"updated": null,
"date": "2021-09-20",
"year": 2021,
"month": 9,
"day": 20,
"taxonomies": {
"tags": [
"rust"
],
"categories": [
"functional programming"
]
},
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/axum-hyper-tonic-tower-part4.png",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "blog/axum-hyper-tonic-tower-part4/",
"components": [
"blog",
"axum-hyper-tonic-tower-part4"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "single-port-two-protocols",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#single-port-two-protocols",
"title": "Single port, two protocols",
"children": []
},
{
"level": 1,
"id": "defining-hybrid",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#defining-hybrid",
"title": "Defining hybrid",
"children": [
{
"level": 2,
"id": "hybridservice",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#hybridservice",
"title": "HybridService",
"children": []
},
{
"level": 2,
"id": "hybridfuture",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#hybridfuture",
"title": "HybridFuture",
"children": []
},
{
"level": 2,
"id": "conclusions",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/#conclusions",
"title": "Conclusions",
"children": []
}
]
}
],
"word_count": 2427,
"reading_time": 13,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/axum-hyper-tonic-tower-part3.md",
"content": "<p>This is the third of four posts in a series on combining web and gRPC services into a single service using Tower, Hyper, Axum, and Tonic. The full four parts are:</p>\n<ol>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/\">Overview of Tower</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/\">Understanding Hyper, and first experiences with Axum</a></li>\n<li>Today's post: Demonstration of Tonic for a gRPC client/server</li>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/\">How to combine Axum and Tonic services into a single service</a></li>\n</ol>\n<h2 id=\"tonic-and-grpc\">Tonic and gRPC</h2>\n<p>Tonic is a gRPC client and server library. gRPC is a protocol that sits on top of HTTP/2, and therefore Tonic is built on top of Hyper (and Tower). I already mentioned at the beginning of this series that my ultimate goal is to be able to serve hybrid web/gRPC services over a single port. But for now, let's get comfortable with a standard Tonic client/server application. We're going to create an echo server, which provides an endpoint that will repeat back whatever message you send it.</p>\n<p>The full code for this is <a href=\"https://github.com/snoyberg/tonic-example\">available on GitHub</a>. 
The repository is structured as a single package with three different crates:</p>\n<ul>\n<li>A library crate providing the protobuf definitions and Tonic-generated server and client items</li>\n<li>A binary crate providing a simple client tool</li>\n<li>A binary crate providing the server executable</li>\n</ul>\n<p>The first file we'll look at is the protobuf definition of our service, located in <code>proto/echo.proto</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">syntax = "proto3";\n\npackage echo;\n\nservice Echo {\n rpc Echo (EchoRequest) returns (EchoReply) {}\n}\n\nmessage EchoRequest {\n string message = 1;\n}\n\nmessage EchoReply {\n string message = 1;\n}\n</span></code></pre>\n<p>Even if you're not familiar with protobuf, hopefully the example above is fairly self-explanatory. We need a <code>build.rs</code> file to use <code>tonic_build</code> to compile this file:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n tonic_build::configure()\n .</span><span style=\"color:#859900;\">compile</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">[</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">proto/echo.proto</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">], </span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">[</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">proto</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">])\n .</span><span style=\"color:#859900;\">unwrap</span><span style=\"color:#657b83;\">();\n}\n</span></code></pre>\n<p>And finally, we have our mammoth <code>src/lib.rs</code> providing all the items we'll need for implementing our client and server:</p>\n<pre 
style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">tonic::include_proto</span><span style=\"color:#859900;\">!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">echo</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">);\n</span></code></pre>\n<p>There's nothing terribly interesting about the client. It's a typical <code>clap</code>-based CLI tool that uses Tokio and Tonic. You can <a href=\"https://github.com/snoyberg/tonic-example/blob/master/src/bin/client.rs\">read the source on GitHub</a>.</p>\n<p>Let's move onto the important part: the server.</p>\n<h2 id=\"the-server\">The server</h2>\n<p>The Tonic code we put into our library crate generates an <code>Echo</code> trait. We need to implement that trait on some type to make our gRPC service. This isn't directly related to our topic today. It's also fairly straightforward Rust code. I've so far found the experience of writing client/server apps with Tonic to be a real pleasure, specifically because of how easy these kinds of implementations are:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">tonic_example::echo_server::{Echo, EchoServer};\n</span><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">tonic_example::{EchoReply, EchoRequest};\n\n</span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">MyEcho</span><span style=\"color:#657b83;\">;\n\n#[</span><span style=\"color:#268bd2;\">async_trait</span><span style=\"color:#657b83;\">]\n</span><span style=\"color:#268bd2;\">impl </span><span style=\"color:#657b83;\">Echo </span><span style=\"color:#859900;\">for </span><span style=\"color:#b58900;\">MyEcho </span><span style=\"color:#657b83;\">{\n async </span><span style=\"color:#268bd2;\">fn </span><span 
style=\"color:#b58900;\">echo</span><span style=\"color:#657b83;\">(\n </span><span style=\"color:#859900;\">&</span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">,\n </span><span style=\"color:#268bd2;\">request</span><span style=\"color:#657b83;\">: tonic::Request<EchoRequest>,\n ) -> </span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><tonic::Response<EchoReply>, tonic::Status> {\n </span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(tonic::Response::new(EchoReply {\n message: </span><span style=\"color:#859900;\">format!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Echoing back: </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, request.</span><span style=\"color:#859900;\">get_ref</span><span style=\"color:#657b83;\">().message),\n }))\n }\n}\n</span></code></pre>\n<p>If you look in the <a href=\"https://github.com/snoyberg/tonic-example/blob/master/src/bin/server.rs\">source on GitHub</a>, there are two different implementations of <code>main</code>, one of them commented out. 
That one's the more straightforward approach, so let's start with that:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">#[</span><span style=\"color:#268bd2;\">tokio</span><span style=\"color:#657b83;\">::</span><span style=\"color:#268bd2;\">main</span><span style=\"color:#657b83;\">]\nasync </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() -> anyhow::</span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><()> {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> addr = ([</span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">], </span><span style=\"color:#6c71c4;\">3000</span><span style=\"color:#657b83;\">).</span><span style=\"color:#859900;\">into</span><span style=\"color:#657b83;\">();\n\n tonic::transport::Server::builder()\n .</span><span style=\"color:#859900;\">add_service</span><span style=\"color:#657b83;\">(EchoServer::new(MyEcho))\n .</span><span style=\"color:#859900;\">serve</span><span style=\"color:#657b83;\">(addr)\n .await</span><span style=\"color:#859900;\">?</span><span style=\"color:#657b83;\">;\n\n </span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(())\n}\n</span></code></pre>\n<p>This uses Tonic's <code>Server::builder</code> to create a new <code>Server</code> value. 
It then calls <code>add_service</code>, which looks like this:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">impl</span><span style=\"color:#657b83;\"><L> </span><span style=\"color:#b58900;\">Server</span><span style=\"color:#657b83;\"><L> {\n </span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">add_service</span><span style=\"color:#657b83;\"><S>(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">svc</span><span style=\"color:#657b83;\">: S) -> Router<S, Unimplemented, L>\n </span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n S: Service<Request<Body>, Response = Response<BoxBody>>\n + NamedService\n + Clone\n + Send\n + </span><span style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">,\n </span><span style=\"color:#268bd2;\">S::</span><span style=\"color:#657b83;\">Future: Send + </span><span style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">,\n </span><span style=\"color:#268bd2;\">S::</span><span style=\"color:#657b83;\">Error: </span><span style=\"color:#859900;\">Into</span><span style=\"color:#657b83;\"><crate::Error> + Send,\n L: Clone\n}\n</span></code></pre>\n<p>We've got another <code>Router</code>. This works like in Axum, but it's for routing gRPC calls to the appropriate named service. Let's talk through the type parameters and traits here:</p>\n<ul>\n<li><code>L</code> represents the <em>layer</em>, or the middlewares added to this server. 
It will default to <a href=\"https://docs.rs/tower/0.4.8/tower/layer/util/struct.Identity.html\"><code>Identity</code></a>, to represent the no middleware case.</li>\n<li><code>S</code> is the new service we're trying to add, which in our case is an <code>EchoServer</code>.</li>\n<li>Our service needs to accept the ever-familiar <code>Request<Body></code> type, and respond with a <code>Response<BoxBody></code>. (We'll discuss <code>BoxBody</code> on its own below.) It also needs to be <a href=\"https://docs.rs/tonic/0.5.2/tonic/transport/trait.NamedService.html\"><code>NamedService</code></a> (for routing).</li>\n<li>As usual, there are a bunch of <code>Clone</code>, <code>Send</code>, and <code>'static</code> bounds too, and requirements on the error representation.</li>\n</ul>\n<p>As complicated as all of that appears, the nice thing is that we don't really need to deal with those details in a simple Tonic application. Instead, we simply call the <code>serve</code> method and everything works like magic.</p>\n<p>But we're trying to go off the beaten path and get a better understanding of how this interacts with Hyper. So let's go deeper!</p>\n<h2 id=\"into-service\"><code>into_service</code></h2>\n<p>In addition to the <code>serve</code> method, Tonic's <code>Router</code> type also provides an <a href=\"https://docs.rs/tonic/0.5.2/tonic/transport/server/struct.Router.html#method.into_service\"><code>into_service</code> method</a>. I'm not going to go into all of its glory here, since it doesn't add much to the discussion but adds a lot to the reading you'll have to do. Instead, suffice it to say that</p>\n<ul>\n<li><code>into_service</code> returns a <code>RouterService<S></code> value</li>\n<li><code>S</code> must implement <code>Service<Request<Body>, Response = Response<ResBody>></code></li>\n<li><code>ResBody</code> is a type that Hyper can use for response bodies</li>\n</ul>\n<p>OK, cool? 
Now we can write our slightly more long-winded <code>main</code> function. First we create our <code>RouterService</code> value:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> grpc_service = tonic::transport::Server::builder()\n .</span><span style=\"color:#859900;\">add_service</span><span style=\"color:#657b83;\">(EchoServer::new(MyEcho))\n .</span><span style=\"color:#859900;\">into_service</span><span style=\"color:#657b83;\">();\n</span></code></pre>\n<p>But now we have a bit of a problem. Hyper expects a "make service" or an "app factory", and instead we just have a request handling service. So we need to go back to Hyper and use <code>make_service_fn</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> make_grpc_service = </span><span style=\"color:#859900;\">make_service_fn</span><span style=\"color:#657b83;\">(</span><span style=\"color:#586e75;\">move </span><span style=\"color:#859900;\">|</span><span style=\"color:#657b83;\">_conn</span><span style=\"color:#859900;\">| </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> grpc_service = grpc_service.</span><span style=\"color:#859900;\">clone</span><span style=\"color:#657b83;\">();\n async { </span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">::<</span><span style=\"color:#859900;\">_</span><span style=\"color:#657b83;\">, Infallible>(grpc_service) }\n});\n</span></code></pre>\n<p>Notice that we need to clone a new copy of the <code>grpc_service</code>, and we need to play all the games with splitting up the closure and the async block, plus <code>Infallible</code>, that we saw before. 
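</p>\n<p>The clone-per-connection dance is easier to see with Hyper stripped away. Below is a dependency-free sketch (the <code>FakeService</code> type and its <code>handle</code> method are invented for illustration): a factory closure hands each connection its own clone of the service, while shared state lives behind an <code>Arc</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>use std::sync::Arc;\nuse std::sync::atomic::{AtomicUsize, Ordering};\n\n// Stand-in for a request-handling service; cloning is cheap because the\n// shared state sits behind an Arc, much like Tonic's generated services.\n#[derive(Clone)]\nstruct FakeService {\n    counter: Arc<AtomicUsize>,\n}\n\nimpl FakeService {\n    fn handle(&self) -> usize {\n        self.counter.fetch_add(1, Ordering::SeqCst)\n    }\n}\n\nfn main() {\n    let svc = FakeService { counter: Arc::new(AtomicUsize::new(0)) };\n\n    // The "make service" closure: each connection gets its own clone,\n    // taken *before* it is handed back, so `svc` stays usable for the\n    // next connection.\n    let make_svc = move |_conn: ()| svc.clone();\n\n    let conn_a = make_svc(());\n    let conn_b = make_svc(());\n    assert_eq!(conn_a.handle(), 0);\n    assert_eq!(conn_b.handle(), 1); // the clones share the counter\n}\n</code></pre>\n<p>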
But now, with <em>that</em> in place, we can launch our gRPC service:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> server = hyper::Server::bind(</span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">addr).</span><span style=\"color:#859900;\">serve</span><span style=\"color:#657b83;\">(make_grpc_service);\n\n</span><span style=\"color:#859900;\">if </span><span style=\"color:#268bd2;\">let </span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e) = server.await {\n </span><span style=\"color:#859900;\">eprintln!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">server error: </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, e);\n}\n</span></code></pre>\n<p>If you want to play with this, you can clone <a href=\"https://github.com/snoyberg/tonic-example\">the tonic-example repo</a> and then:</p>\n<ul>\n<li>Run <code>cargo run --bin server</code> in one terminal</li>\n<li>Run <code>cargo run --bin client "Hello world!"</code> in another</li>\n</ul>\n<p>However, trying to open up http://localhost:3000 in your browser isn't going to work out too well. This server will only handle gRPC connections, not standard web browser requests, RESTful APIs, etc. We've got one final step now: writing something that can handle both Axum and Tonic services and route to them appropriately.</p>\n<h2 id=\"boxbody\"><code>BoxBody</code></h2>\n<p>Let's look into that <code>BoxBody</code> type in a little more detail. 
We're using the <code>tonic::body::BoxBody</code> <code>struct</code>, which is defined as:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">BoxBody </span><span style=\"color:#657b83;\">= http_body::combinators::BoxBody<bytes::Bytes, crate::Status>;\n</span></code></pre>\n<p><code>http_body</code> itself provides its own <code>BoxBody</code>, which is parameterized over the <em>data</em> and <em>error</em>. Tonic uses the <code>Status</code> type for errors, and represents the different status codes a gRPC service can return. For those not familiar with <code>Bytes</code>, here's a quick excerpt from <a href=\"https://docs.rs/bytes/1.1.0/bytes/\">the docs</a></p>\n<blockquote>\n<p><code>Bytes</code> is an efficient container for storing and operating on contiguous slices of memory. It is intended for use primarily in networking code, but could have applications elsewhere as well.</p>\n<p><code>Bytes</code> values facilitate zero-copy network programming by allowing multiple <code>Bytes</code> objects to point to the same underlying memory. This is managed by using a reference count to track when the memory is no longer needed and can be freed.</p>\n</blockquote>\n<p>When you see <code>Bytes</code>, you can semantically think of it as a byte slice or byte vector. The underlying <code>BoxBody</code> from the <code>http_body</code> crate represents some kind of implementation of the <a href=\"https://docs.rs/http-body/0.4.3/http_body/trait.Body.html\"><code>http_body::Body</code></a> trait. 
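</p>\n<p>To get a feel for the shape of such a trait, here's a toy, dependency-free analogue (the <code>Chunked</code> trait and <code>Once</code> type are invented for illustration, not part of <code>http_body</code>). Boxing hides the concrete type, but the associated type still shows up in the trait object's type:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>// Toy analogue of a streaming-body trait: an associated type describes\n// the chunks of data it yields. (Synchronous, to stay dependency-free.)\ntrait Chunked {\n    type Data;\n    fn next_chunk(&mut self) -> Option<Self::Data>;\n}\n\n// A body that yields a single chunk and is then exhausted.\nstruct Once(Option<String>);\n\nimpl Chunked for Once {\n    type Data = String;\n    fn next_chunk(&mut self) -> Option<String> {\n        self.0.take()\n    }\n}\n\n// Boxing erases the concrete type (Once), but the associated type must\n// still be spelled out, so boxed bodies with different Data types are\n// not interchangeable.\ntype BoxChunked<D> = Box<dyn Chunked<Data = D>>;\n\nfn main() {\n    let mut body: BoxChunked<String> = Box::new(Once(Some("hello".to_owned())));\n    assert_eq!(body.next_chunk().as_deref(), Some("hello"));\n    assert!(body.next_chunk().is_none());\n}\n</code></pre>\n<p>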
The <code>Body</code> trait represents a streaming HTTP body, and contains:</p>\n<ul>\n<li>Associated types for <code>Data</code> and <code>Error</code>, corresponding to the type parameters to <code>BoxBody</code></li>\n<li><code>poll_data</code> for asynchronously reading more data from the body</li>\n<li>Helper <code>map_data</code> and <code>map_err</code> methods for manipulating the <code>Data</code> and <code>Error</code> associated types</li>\n<li>A <code>boxed</code> method for some type erasure, allowing us to get back a <code>BoxBody</code></li>\n<li>A few other helper methods around size hints and HTTP/2 trailing data</li>\n</ul>\n<p>The important thing to note for our purposes is that "type erasure" here isn't really complete type erasure. When we use <code>boxed</code> to get a trait object representing the body, we still have type parameters to represent the <code>Data</code> and <code>Error</code>. Therefore, if we end up with two different representations of <code>Data</code> or <code>Error</code>, they won't be compatible with each other. And let me ask you: do you think Axum will use the same <code>Status</code> error type to represent errors that Tonic does? (Hint: it doesn't.) So when we get to it next time, we'll have some footwork to do around unifying error types.</p>\n<h2 id=\"almost-there\">Almost there!</h2>\n<p>We'll tie up next week with the final post in this series, tying together all the different things we've seen so far.</p>\n<p class=\"text-center\"><a class=\"btn btn-info\" href=\"/blog/axum-hyper-tonic-tower-part4\">Read part 4 now</a></p>\n<p>If you're looking for more Rust content, check out:</p>\n<ul>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n</ul>\n",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/",
"slug": "axum-hyper-tonic-tower-part3",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 3",
"description": "Part 3 of a blog post series examining the Hyper/Tower web ecosystem in Rust, and specifically combining the Axum framework and Tonic gRPC servers.",
"updated": null,
"date": "2021-09-13",
"year": 2021,
"month": 9,
"day": 13,
"taxonomies": {
"categories": [
"functional programming"
],
"tags": [
"rust"
]
},
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/axum-hyper-tonic-tower-part3.png",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "blog/axum-hyper-tonic-tower-part3/",
"components": [
"blog",
"axum-hyper-tonic-tower-part3"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "tonic-and-grpc",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#tonic-and-grpc",
"title": "Tonic and gRPC",
"children": []
},
{
"level": 2,
"id": "the-server",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#the-server",
"title": "The server",
"children": []
},
{
"level": 2,
"id": "into-service",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#into-service",
"title": "into_service",
"children": []
},
{
"level": 2,
"id": "boxbody",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#boxbody",
"title": "BoxBody",
"children": []
},
{
"level": 2,
"id": "almost-there",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/#almost-there",
"title": "Almost there!",
"children": []
}
],
"word_count": 1583,
"reading_time": 8,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/axum-hyper-tonic-tower-part2.md",
"content": "<p>This is the second of four posts in a series on combining web and gRPC services into a single service using Tower, Hyper, Axum, and Tonic. The full four parts are:</p>\n<ol>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/\">Overview of Tower</a></li>\n<li>Today's post: Understanding Hyper, and first experiences with Axum</li>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/\">Demonstration of Tonic for a gRPC client/server</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/\">How to combine Axum and Tonic services into a single service</a></li>\n</ol>\n<p>I recommend checking out the first post in the series if you haven't already.</p>\n<p class=\"text-center\" style=\"border: 1px solid #000;border-radius:1rem;padding:1rem;background-color:#f1f1f1\">\n <a class=\"btn btn-primary\" href=\"https://blogtrottr.com/?subscribe=https://www.fpcomplete.com/feed/\" target=\"_blank\">\n Subscribe to our blog via email\n </a>\n <br>\n <small>Email subscriptions come from our <a target=\"_blank\" href=\"/feed/\">Atom feed</a> and are handled by <a target=\"_blank\" href=\"https://blogtrottr.com\">Blogtrottr</a>. 
You will only receive notifications of blog posts, and can unsubscribe any time.</small>\n</p>\n<h2 id=\"quick-recap\">Quick recap</h2>\n<ul>\n<li>Tower provides a <code>Service</code> trait, which is basically an asynchronous function from requests to responses</li>\n<li><code>Service</code> is parameterized on the request type, and has an associated type for <code>Response</code></li>\n<li>It also has an associated <code>Error</code> type, and an associated <code>Future</code> type</li>\n<li><code>Service</code> allows async behavior in both checking whether the service is ready to accept a request, and for handling the request</li>\n<li>A web application ends up having two sets of async request/response behavior\n<ul>\n<li>Inner: a service that accepts HTTP requests and returns HTTP responses</li>\n<li>Outer: a service that accepts the incoming network connections and returns an inner service</li>\n</ul>\n</li>\n</ul>\n<p>With that in mind, let's look at Hyper.</p>\n<h2 id=\"services-in-hyper\">Services in Hyper</h2>\n<p>Now that we've got Tower under our belts a bit, it's time to dive into the specific world of Hyper. Much of what we saw above will apply directly to Hyper. But Hyper has a few additional curveballs to deal with:</p>\n<ul>\n<li>Both the <code>Request</code> and <code>Response</code> types are parameterized over the representation of the request/response bodies</li>\n<li>There are a bunch of additional traits and type parameterized in the public API, some not appearing in the docs at all, and many that are unclear</li>\n</ul>\n<p>In place of the <code>run</code> function we had in our previous fake server example, Hyper follows a builder pattern for initializing HTTP servers. After providing configuration values, you create an active <code>Server</code> value from your <code>Builder</code> with the <a href=\"https://docs.rs/hyper/0.14.12/hyper/server/struct.Builder.html#method.serve\"><code>serve</code></a> method. 
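</p>\n<p>If the builder pattern is new to you, here's a minimal, dependency-free sketch of the idea (the types and methods below are invented for illustration, not Hyper's actual API): configuration methods consume and return the builder, and a final method consumes it to produce the running value:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>// Toy builder: each configuration method takes `self` by value and\n// returns it, allowing method chaining.\nstruct Builder {\n    port: u16,\n    http2_only: bool,\n}\n\nstruct Server {\n    addr: String,\n    http2_only: bool,\n}\n\nimpl Builder {\n    fn new() -> Self {\n        Builder { port: 3000, http2_only: false }\n    }\n    fn port(mut self, port: u16) -> Self {\n        self.port = port;\n        self\n    }\n    fn http2_only(mut self, value: bool) -> Self {\n        self.http2_only = value;\n        self\n    }\n    // Analogous in spirit to Hyper's Builder::serve: consume the\n    // configuration and produce a live value.\n    fn serve(self) -> Server {\n        Server {\n            addr: format!("0.0.0.0:{}", self.port),\n            http2_only: self.http2_only,\n        }\n    }\n}\n\nfn main() {\n    let server = Builder::new().port(8080).http2_only(true).serve();\n    assert_eq!(server.addr, "0.0.0.0:8080");\n    assert!(server.http2_only);\n}\n</code></pre>\n<p>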
Just to get it out of the way now, this is the type signature of <code>serve</code> from the public docs:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">serve</span><span style=\"color:#657b83;\"><S, B>(</span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">new_service</span><span style=\"color:#657b83;\">: S) -> Server<I, S, E>\n</span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n I: Accept,\n </span><span style=\"color:#268bd2;\">I::</span><span style=\"color:#657b83;\">Error: </span><span style=\"color:#859900;\">Into</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn StdError </span><span style=\"color:#859900;\">+ Send + Sync</span><span style=\"color:#657b83;\">>>,\n </span><span style=\"color:#268bd2;\">I::</span><span style=\"color:#657b83;\">Conn: AsyncRead + AsyncWrite + Unpin + Send + </span><span style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">,\n S: MakeServiceRef<</span><span style=\"color:#268bd2;\">I::</span><span style=\"color:#657b83;\">Conn, Body, ResBody = B>,\n </span><span style=\"color:#268bd2;\">S::</span><span style=\"color:#657b83;\">Error: </span><span style=\"color:#859900;\">Into</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn StdError </span><span style=\"color:#859900;\">+ Send + Sync</span><span style=\"color:#657b83;\">>>,\n B: HttpBody + </span><span style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">,\n </span><span style=\"color:#268bd2;\">B::</span><span style=\"color:#657b83;\">Error: </span><span style=\"color:#859900;\">Into</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">Box</span><span 
style=\"color:#657b83;\"><dyn StdError </span><span style=\"color:#859900;\">+ Send + Sync</span><span style=\"color:#657b83;\">>>,\n E: NewSvcExec<</span><span style=\"color:#268bd2;\">I::</span><span style=\"color:#657b83;\">Conn, </span><span style=\"color:#268bd2;\">S::</span><span style=\"color:#657b83;\">Future, </span><span style=\"color:#268bd2;\">S::</span><span style=\"color:#657b83;\">Service, E, NoopWatcher>,\n E: ConnStreamExec<<</span><span style=\"color:#268bd2;\">S::</span><span style=\"color:#657b83;\">Service </span><span style=\"color:#859900;\">as </span><span style=\"color:#657b83;\">HttpService<Body>>::Future, B>,\n</span></code></pre>\n<p>That's a lot of requirements, and not all of them are clear from the docs. Hopefully we can bring some clarity to this. But for now, let's start off with something simpler: the "Hello world" example from <a href=\"https://hyper.rs\">the Hyper homepage</a>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">std::{convert::Infallible, net::SocketAddr};\n</span><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">hyper::{Body, Request, Response, Server};\n</span><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">hyper::service::{make_service_fn, service_fn};\n\nasync </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">handle</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">_</span><span style=\"color:#657b83;\">: Request<Body>) -> </span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><Response<Body>, Infallible> {\n </span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(Response::new(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Hello, World!</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span 
style=\"color:#859900;\">into</span><span style=\"color:#657b83;\">()))\n}\n\n#[</span><span style=\"color:#268bd2;\">tokio</span><span style=\"color:#657b83;\">::</span><span style=\"color:#268bd2;\">main</span><span style=\"color:#657b83;\">]\nasync </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> addr = SocketAddr::from(([</span><span style=\"color:#6c71c4;\">127</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">], </span><span style=\"color:#6c71c4;\">3000</span><span style=\"color:#657b83;\">));\n\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> make_svc = </span><span style=\"color:#859900;\">make_service_fn</span><span style=\"color:#657b83;\">(|</span><span style=\"color:#268bd2;\">_conn</span><span style=\"color:#657b83;\">| async {\n </span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">::<</span><span style=\"color:#859900;\">_</span><span style=\"color:#657b83;\">, Infallible>(</span><span style=\"color:#859900;\">service_fn</span><span style=\"color:#657b83;\">(handle))\n });\n\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> server = Server::bind(</span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">addr).</span><span style=\"color:#859900;\">serve</span><span style=\"color:#657b83;\">(make_svc);\n\n </span><span style=\"color:#859900;\">if </span><span style=\"color:#268bd2;\">let </span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e) = server.await {\n </span><span style=\"color:#859900;\">eprintln!</span><span 
style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">server error: </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, e);\n }\n}\n</span></code></pre>\n<p>This follows the same pattern we established above:</p>\n<ul>\n<li><code>handle</code> is an async function from a <code>Request</code> to a <code>Response</code>, which may fail with an <code>Infallible</code> value.\n<ul>\n<li>Both <code>Request</code> and <code>Response</code> are parameterized with <code>Body</code>, a default HTTP body representation.</li>\n</ul>\n</li>\n<li><code>handle</code> gets wrapped up in <code>service_fn</code> to produce a <code>Service<Request<Body>></code>. This is like <code>app_fn</code> above.</li>\n<li>We use <code>make_service_fn</code>, like <code>app_factory_fn</code> above, to produce the <code>Service<&AddrStream></code> (we'll get to that <code>&AddrStream</code> shortly).\n<ul>\n<li>We don't care about the <code>&AddrStream</code> value, so we ignore it</li>\n<li>The return value from the function inside <code>make_service_fn</code> must be a <code>Future</code>, so we wrap with <code>async</code></li>\n<li>The output of that <code>Future</code> must be a <code>Result</code>, so we wrap with an <code>Ok</code></li>\n<li>We need to help the compiler out a bit and provide a type annotation of <code>Infallible</code>, otherwise it won't know the type of the <code>Ok(service_fn(handle))</code> expression</li>\n</ul>\n</li>\n</ul>\n<p>Using this level of abstraction for writing a normal web app is painful for (at least) three different reasons:</p>\n<ul>\n<li>Managing all of these <code>Service</code> pieces manually is a pain</li>\n<li>There's very little in the way high level helpers, like "parse the request body as a JSON value"</li>\n<li>Any kind of mistake in your types may lead to very large, non-local error messages that are difficult to 
diagnose</li>\n</ul>\n<p>So we'll be more than happy to move on from Hyper to Axum a bit later. But for now, let's continue exploring things at the Hyper layer.</p>\n<h2 id=\"bypassing-service-fn-and-make-service-fn\">Bypassing <code>service_fn</code> and <code>make_service_fn</code></h2>\n<p>What I found most helpful when trying to grok Hyper was implementing a simple app without <code>service_fn</code> and <code>make_service_fn</code>. So let's go through that ourselves here. We're going to create a simple counter app (I'm nothing if not predictable). We'll need two different data types: one for the "app factory", and one for the app itself. Let's start with the app itself:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">DemoApp </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">counter</span><span style=\"color:#657b83;\">: Arc<AtomicUsize>,\n}\n\n</span><span style=\"color:#268bd2;\">impl </span><span style=\"color:#657b83;\">Service<Request<Body>> </span><span style=\"color:#859900;\">for </span><span style=\"color:#b58900;\">DemoApp </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Response </span><span style=\"color:#657b83;\">= Response<Body>;\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Error </span><span style=\"color:#657b83;\">= hyper::http::Error;\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Future </span><span style=\"color:#657b83;\">= Ready<</span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><</span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Response, </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>>;\n\n </span><span style=\"color:#268bd2;\">fn </span><span 
style=\"color:#b58900;\">poll_ready</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">_cx</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#657b83;\">std::task::Context) -> Poll<</span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><(), </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>> {\n Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(()))\n }\n\n </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">call</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">_req</span><span style=\"color:#657b83;\">: Request<Body>) -> </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Future {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = </span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.counter.</span><span style=\"color:#859900;\">fetch_add</span><span style=\"color:#657b83;\">(</span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">, std::sync::atomic::Ordering::SeqCst);\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> res = Response::builder()\n .</span><span style=\"color:#859900;\">status</span><span style=\"color:#657b83;\">(</span><span style=\"color:#6c71c4;\">200</span><span style=\"color:#657b83;\">)\n .</span><span style=\"color:#859900;\">header</span><span style=\"color:#657b83;\">(</span><span 
style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Content-Type</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">text/plain; charset=utf-8</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">)\n .</span><span style=\"color:#859900;\">body</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">format!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Counter is at: </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, counter).</span><span style=\"color:#859900;\">into</span><span style=\"color:#657b83;\">());\n std::future::ready(res)\n }\n}\n</span></code></pre>\n<p>This implementation uses the <code>std::future::Ready</code> struct to create a <code>Future</code> which is immediately ready. In other words, our application doesn't perform any async actions. I've set the <code>Error</code> associated type to <code>hyper::http::Error</code>. This error would be generated if, for example, you provided invalid strings to the <code>header</code> method call, such as non-ASCII characters. 
As we've seen multiple times, <code>poll_ready</code> just advertises that it's always ready to handle another request.</p>\n<p>The implementation of <code>DemoAppFactory</code> isn't terribly different:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">DemoAppFactory </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">counter</span><span style=\"color:#657b83;\">: Arc<AtomicUsize>,\n}\n\n</span><span style=\"color:#268bd2;\">impl </span><span style=\"color:#657b83;\">Service<</span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">AddrStream> </span><span style=\"color:#859900;\">for </span><span style=\"color:#b58900;\">DemoAppFactory </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Response </span><span style=\"color:#657b83;\">= DemoApp;\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Error </span><span style=\"color:#657b83;\">= Infallible;\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Future </span><span style=\"color:#657b83;\">= Ready<</span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><</span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Response, </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>>;\n\n </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">poll_ready</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">_cx</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span 
style=\"color:#657b83;\">std::task::Context) -> Poll<</span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><(), </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>> {\n Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(()))\n }\n\n </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">call</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">conn</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">AddrStream) -> </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Future {\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Accepting a new connection from </span><span style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, conn);\n std::future::ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(DemoApp {\n counter: </span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.counter.</span><span style=\"color:#859900;\">clone</span><span style=\"color:#657b83;\">()\n }))\n }\n}\n</span></code></pre>\n<p>We have a different parameter to <code>Service</code>, this time <code>&AddrStream</code>. I did initially find the naming here confusing. In Tower, a <code>Service</code> takes some <code>Request</code>. And with our <code>DemoApp</code>, the <code>Request</code> it takes is a Hyper <code>Request<Body></code>. But in the case of <code>DemoAppFactory</code>, the <code>Request</code> it's taking is a <code>&AddrStream</code>. 
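The two-level arrangement may click faster with the async machinery stripped away. Here is a toy, synchronous analogue of my own (not Tower's actual trait, which returns a Future): both levels implement the same interface and differ only in their input and output types:

```rust
use std::net::SocketAddr;

// A toy, synchronous stand-in for Tower's Service trait: a fallible
// function from some input to some output.
trait SimpleService<In> {
    type Out;
    type Err;
    fn call(&mut self, input: In) -> Result<Self::Out, Self::Err>;
}

// Level one, the "app": request in, response out.
#[derive(Clone)]
struct App {
    greeting: String,
}

impl<'a> SimpleService<&'a str> for App {
    type Out = String;
    type Err = String;
    fn call(&mut self, path: &'a str) -> Result<String, String> {
        match path {
            "/" => Ok(format!("{}, world!", self.greeting)),
            _ => Err(format!("not found: {}", path)),
        }
    }
}

// Level two, the "app factory": connection info in, a fresh App out.
struct AppFactory;

impl SimpleService<SocketAddr> for AppFactory {
    type Out = App;
    type Err = String;
    fn call(&mut self, _conn: SocketAddr) -> Result<App, String> {
        Ok(App { greeting: "Hello".to_string() })
    }
}

fn main() {
    let mut factory = AppFactory;
    let addr: SocketAddr = "127.0.0.1:3000".parse().unwrap();
    let mut app = factory.call(addr).unwrap();
    println!("{}", app.call("/").unwrap()); // Hello, world!
}
```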
Keep in mind that a <code>Service</code> is really just a generalization of fallible, async functions from input to output. The input may be a <code>Request<Body></code>, or may be a <code>&AddrStream</code>, or something else entirely.</p>\n<p>Similarly, the "response" here isn't an HTTP response, but a <code>DemoApp</code>. I again find it easier to use the terms "input" and "output" to avoid the name overloading of request and response.</p>\n<p>Finally, our <code>main</code> function looks much the same as the original from the "Hello world" example:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">#[</span><span style=\"color:#268bd2;\">tokio</span><span style=\"color:#657b83;\">::</span><span style=\"color:#268bd2;\">main</span><span style=\"color:#657b83;\">]\nasync </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> addr = SocketAddr::from(([</span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">], </span><span style=\"color:#6c71c4;\">3000</span><span style=\"color:#657b83;\">));\n\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> factory = DemoAppFactory {\n counter: Arc::new(AtomicUsize::new(</span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">)),\n };\n\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> server = Server::bind(</span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">addr).</span><span style=\"color:#859900;\">serve</span><span style=\"color:#657b83;\">(factory);\n\n </span><span
style=\"color:#859900;\">if </span><span style=\"color:#268bd2;\">let </span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e) = server.await {\n </span><span style=\"color:#859900;\">eprintln!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">server error: </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, e);\n }\n}\n</span></code></pre>\n<p>If you're looking to extend your understanding here, I'd recommend extending this example to perform some async actions within the app. How would you modify <code>Future</code>? If you use a trait object, how exactly do you pin?</p>\n<p>But now it's time to take a dive into a topic I've avoided for a while.</p>\n<h2 id=\"understanding-the-traits\">Understanding the traits</h2>\n<p>Let's refresh our memory from above on the signature of <code>serve</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">serve</span><span style=\"color:#657b83;\"><S, B>(</span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">new_service</span><span style=\"color:#657b83;\">: S) -> Server<I, S, E>\n</span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n I: Accept,\n </span><span style=\"color:#268bd2;\">I::</span><span style=\"color:#657b83;\">Error: </span><span style=\"color:#859900;\">Into</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn StdError </span><span style=\"color:#859900;\">+ Send + Sync</span><span style=\"color:#657b83;\">>>,\n </span><span style=\"color:#268bd2;\">I::</span><span style=\"color:#657b83;\">Conn: AsyncRead + AsyncWrite + Unpin + Send + </span><span 
style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">,\n S: MakeServiceRef<</span><span style=\"color:#268bd2;\">I::</span><span style=\"color:#657b83;\">Conn, Body, ResBody = B>,\n </span><span style=\"color:#268bd2;\">S::</span><span style=\"color:#657b83;\">Error: </span><span style=\"color:#859900;\">Into</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn StdError </span><span style=\"color:#859900;\">+ Send + Sync</span><span style=\"color:#657b83;\">>>,\n B: HttpBody + </span><span style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">,\n </span><span style=\"color:#268bd2;\">B::</span><span style=\"color:#657b83;\">Error: </span><span style=\"color:#859900;\">Into</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn StdError </span><span style=\"color:#859900;\">+ Send + Sync</span><span style=\"color:#657b83;\">>>,\n E: NewSvcExec<</span><span style=\"color:#268bd2;\">I::</span><span style=\"color:#657b83;\">Conn, </span><span style=\"color:#268bd2;\">S::</span><span style=\"color:#657b83;\">Future, </span><span style=\"color:#268bd2;\">S::</span><span style=\"color:#657b83;\">Service, E, NoopWatcher>,\n E: ConnStreamExec<<</span><span style=\"color:#268bd2;\">S::</span><span style=\"color:#657b83;\">Service </span><span style=\"color:#859900;\">as </span><span style=\"color:#657b83;\">HttpService<Body>>::Future, B>,\n</span></code></pre>\n<p>Up until preparing this blog post, I have never tried to take a deep dive into understanding all of these bounds. So this will be an adventure for us all! (And perhaps it should end up with some documentation PRs by me...) Let's start off with the type variables. 
Altogether, we have four: two on the <code>impl</code> block itself, and two on this method:</p>\n<ul>\n<li><code>I</code> represents the incoming stream of connections.</li>\n<li><code>E</code> represents the executor.</li>\n<li><code>S</code> is the service we're going to run. Using our terminology from above, this would be the "app factory." Using Tower/Hyper terminology, this is the "make service."</li>\n<li><code>B</code> is the choice of response body the service returns (the "app", not the "app factory", using nomenclature above).</li>\n</ul>\n<h3 id=\"i-accept\"><code>I: Accept</code></h3>\n<p><code>I</code> needs to implement the <a href=\"https://docs.rs/hyper/0.14.12/hyper/server/accept/trait.Accept.html\"><code>Accept</code></a> trait, which represents the ability to accept a new connection from some source. The only implementation out of the box is for <a href=\"https://docs.rs/hyper/0.14.12/hyper/server/conn/struct.AddrIncoming.html\"><code>AddrIncoming</code></a>, which can be created from a <code>SocketAddr</code>. And in fact, that's exactly what <a href=\"https://docs.rs/hyper/0.14.12/src/hyper/server/server.rs.html#66-71\"><code>Server::bind</code> does</a>.</p>\n<p><code>Accept</code> has two associated types. <code>Error</code> must be something that can be converted into an error object, or <code>Into<Box<dyn StdError + Send + Sync>></code>. This is the requirement of (almost?) every associated error type we look at, so from now on I'll just skip over them. We need to be able to convert whatever error happened into a uniform representation.</p>\n<p>The <code>Conn</code> associated type represents an individual connection. In the case of <code>AddrIncoming</code>, the associated type is <a href=\"https://docs.rs/hyper/0.14.12/hyper/server/conn/struct.AddrStream.html\"><code>AddrStream</code></a>. 
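To make the shape of the trait concrete, here is a sketch of my own: a paraphrase of the Accept interface defined locally (not imported from hyper), together with a toy implementation that yields a single in-memory "connection" (a String standing in for a socket) and a no-op waker so we can poll it outside any executor:

```rust
use std::convert::Infallible;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Paraphrase of the Accept trait's shape (defined locally for illustration).
trait Accept {
    type Conn;
    type Error;
    fn poll_accept(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Option<Result<Self::Conn, Self::Error>>>;
}

// A toy acceptor: yields one queued "connection", then returns None to say
// the stream of incoming connections is finished.
struct OneShotAcceptor {
    conn: Option<String>,
}

impl Accept for OneShotAcceptor {
    type Conn = String;
    type Error = Infallible;
    fn poll_accept(
        mut self: Pin<&mut Self>,
        _cx: &mut Context<'_>,
    ) -> Poll<Option<Result<String, Infallible>>> {
        Poll::Ready(self.conn.take().map(Ok))
    }
}

// A do-nothing waker so we can poll by hand outside an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let mut acceptor = OneShotAcceptor {
        conn: Some("127.0.0.1:54321".to_string()),
    };
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // First poll hands out the connection, second poll reports exhaustion.
    println!("{:?}", Pin::new(&mut acceptor).poll_accept(&mut cx));
    println!("{:?}", Pin::new(&mut acceptor).poll_accept(&mut cx));
}
```

Hyper's own AddrIncoming does essentially this with real sockets, yielding AddrStream connections as TCP clients arrive.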
This type must implement <code>AsyncRead</code> and <code>AsyncWrite</code> for communication, <code>Send</code> and <code>'static</code> so it can be sent to different threads, and <code>Unpin</code>. The requirement for <code>Unpin</code> bubbles up from deeper in the stack, and I honestly don't know what drives it.</p>\n<h3 id=\"s-makeserviceref\"><code>S: MakeServiceRef</code></h3>\n<p><code>MakeServiceRef</code> is one of those traits that doesn't appear in the public documentation. This seems to be intentional. Reading the source:</p>\n<blockquote>\n<p>Just a sort-of "trait alias" of <code>MakeService</code>, not to be implemented by anyone, only used as bounds.</p>\n</blockquote>\n<p>Were you confused as to why we were receiving a reference with <code>&AddrStream</code>? This is the trait that powers that transformation. Overall, the trait bound <code>S: MakeServiceRef<I::Conn, Body, ResBody = B></code> means:</p>\n<ul>\n<li><code>S</code> must be a <code>Service</code></li>\n<li><code>S</code> will accept input of type <code>&I::Conn</code></li>\n<li>It will in turn produce a <em>new</em> <code>Service</code> as output</li>\n<li>That new service will accept <code>Request<Body></code> as input, and produce <code>Response<ResBody></code> as output</li>\n</ul>\n<p>And while we're talking about it: that <code>ResBody</code> has the restriction that it must implement <a href=\"https://docs.rs/hyper/0.14.12/hyper/body/trait.HttpBody.html\"><code>HttpBody</code></a>. As you might guess, the <code>Body</code> struct mentioned above implements <code>HttpBody</code>. There are a number of implementations too. When we get to Tonic and gRPC, we'll see that there are, in fact, other response bodies we have to deal with.</p>\n<h3 id=\"newsvcexec-and-connstreamexec\"><code>NewSvcExec</code> and <code>ConnStreamExec</code></h3>\n<p>The default value for the <code>E</code> parameter is <code>Exec</code>, which does not appear in the generated docs. 
But of course you can find it <a href=\"https://docs.rs/crate/hyper/0.14.12/source/src/common/exec.rs\">in the source</a>. The concept of <code>Exec</code> is to specify how tasks are spawned off. By default, it leverages <code>tokio::spawn</code>.</p>\n<p>I'm not entirely certain of how all of this plays out, but I believe the two traits in the heading allow for different handling of spawning for the connection service (app factory) versus the request service (app).</p>\n<h2 id=\"using-axum\">Using Axum</h2>\n<p>Axum is the new web framework that kicked off this whole blog post. Instead of dealing directly with Hyper like we did above, let's reimplement our counter web service using Axum. We'll be using <code>axum = "0.2"</code>. The <a href=\"https://docs.rs/axum/0.2.3/axum/index.html\">crate docs</a> provide a great overview of Axum, and I'm not going to try to replicate that information here. Instead, here's my rewritten code. We'll analyze a few key pieces below:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">axum::extract::Extension;\n</span><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">axum::handler::get;\n</span><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">axum::{AddExtensionLayer, Router};\n</span><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">hyper::{HeaderMap, Server, StatusCode};\n</span><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">std::net::SocketAddr;\n</span><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">std::sync::atomic::AtomicUsize;\n</span><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">std::sync::Arc;\n\n#[</span><span style=\"color:#268bd2;\">derive</span><span style=\"color:#657b83;\">(Clone, Default)]\n</span><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">AppState </span><span
style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">counter</span><span style=\"color:#657b83;\">: Arc<AtomicUsize>,\n}\n\n#[</span><span style=\"color:#268bd2;\">tokio</span><span style=\"color:#657b83;\">::</span><span style=\"color:#268bd2;\">main</span><span style=\"color:#657b83;\">]\nasync </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> addr = SocketAddr::from(([</span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">, </span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">], </span><span style=\"color:#6c71c4;\">3000</span><span style=\"color:#657b83;\">));\n\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> app = Router::new()\n .</span><span style=\"color:#859900;\">route</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">/</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, </span><span style=\"color:#859900;\">get</span><span style=\"color:#657b83;\">(home))\n .</span><span style=\"color:#859900;\">layer</span><span style=\"color:#657b83;\">(AddExtensionLayer::new(AppState::default()));\n\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> server = Server::bind(</span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">addr).</span><span style=\"color:#859900;\">serve</span><span style=\"color:#657b83;\">(app.</span><span style=\"color:#859900;\">into_make_service</span><span style=\"color:#657b83;\">());\n\n </span><span style=\"color:#859900;\">if </span><span style=\"color:#268bd2;\">let </span><span 
style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e) = server.await {\n </span><span style=\"color:#859900;\">eprintln!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">server error: </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, e);\n }\n}\n\nasync </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">home</span><span style=\"color:#657b83;\">(</span><span style=\"color:#268bd2;\">state</span><span style=\"color:#657b83;\">: Extension<AppState>) -> (StatusCode, HeaderMap, String) {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = state\n .counter\n .</span><span style=\"color:#859900;\">fetch_add</span><span style=\"color:#657b83;\">(</span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">, std::sync::atomic::Ordering::SeqCst);\n </span><span style=\"color:#268bd2;\">let </span><span style=\"color:#586e75;\">mut</span><span style=\"color:#657b83;\"> headers = HeaderMap::new();\n headers.</span><span style=\"color:#859900;\">insert</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Content-Type</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">text/plain; charset=utf-8</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">parse</span><span style=\"color:#657b83;\">().</span><span style=\"color:#859900;\">unwrap</span><span style=\"color:#657b83;\">());\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> body = </span><span style=\"color:#859900;\">format!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span 
style=\"color:#2aa198;\">Counter is at: </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, counter);\n (StatusCode::</span><span style=\"color:#cb4b16;\">OK</span><span style=\"color:#657b83;\">, headers, body)\n}\n</span></code></pre>\n<p>The first thing I'd like to get out of the way is this whole <code>AddExtensionLayer</code>/<code>Extension</code> bit. This is how we're managing shared state within our application. It's not directly relevant to our overall analysis of Tower and Hyper, so I'll suffice with a <a href=\"https://docs.rs/axum/0.2.3/axum/index.html#sharing-state-with-handlers\">link to the docs demonstrating how this works</a>. Interestingly, you may notice that this implementation relies on middlewares, which does in fact leverage Tower, so it's not completely separate.</p>\n<p>Anyway, back to our point at hand. Within our <code>main</code> function, we're now using this <code>Router</code> concept to build up our application:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> app = Router::new()\n .</span><span style=\"color:#859900;\">route</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">/</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, </span><span style=\"color:#859900;\">get</span><span style=\"color:#657b83;\">(home))\n .</span><span style=\"color:#859900;\">layer</span><span style=\"color:#657b83;\">(AddExtensionLayer::new(AppState::default()));\n</span></code></pre>\n<p>This says, essentially, "please call the <code>home</code> function when you receive a request for <code>/</code>, and add a middleware that does that whole extension thing." 
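As a mental model, a router is little more than a map from path to handler plus some shared state. This is a toy sketch of my own, not Axum's implementation, using only the standard library:

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// Shared state, mirroring AppState from the example above.
#[derive(Clone, Default)]
struct AppState {
    counter: Arc<AtomicUsize>,
}

type Handler = fn(&AppState) -> String;

// A toy router: just a map from path to handler. Real routers also match on
// method, parse path parameters, and wrap handlers in middleware layers.
struct ToyRouter {
    routes: HashMap<String, Handler>,
    state: AppState,
}

impl ToyRouter {
    fn new(state: AppState) -> Self {
        ToyRouter { routes: HashMap::new(), state }
    }

    fn route(mut self, path: &str, handler: Handler) -> Self {
        self.routes.insert(path.to_string(), handler);
        self
    }

    fn handle(&self, path: &str) -> String {
        match self.routes.get(path) {
            Some(handler) => handler(&self.state),
            None => "404 not found".to_string(),
        }
    }
}

fn home(state: &AppState) -> String {
    // fetch_add returns the previous value, so the first response shows 0.
    let counter = state.counter.fetch_add(1, Ordering::SeqCst);
    format!("Counter is at: {}", counter)
}

fn main() {
    let router = ToyRouter::new(AppState::default()).route("/", home);
    println!("{}", router.handle("/")); // Counter is at: 0
    println!("{}", router.handle("/")); // Counter is at: 1
}
```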
The <code>home</code> function uses an extractor to get the <code>AppState</code>, and returns a value of type <code>(StatusCode, HeaderMap, String)</code> to represent the response. In Axum, any implementation of the appropriately named <a href=\"https://docs.rs/axum/0.2.3/axum/response/trait.IntoResponse.html\"><code>IntoResponse</code> trait</a> can be returned from handler functions.</p>\n<p>Anyway, our <code>app</code> value is now a <code>Router</code>. But a <code>Router</code> cannot be directly run by Hyper. Instead, we need to convert it into a <code>MakeService</code> (a.k.a. an app factory). Fortunately, that's easy: we call <code>app.into_make_service()</code>. Let's look at that method's signature:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">impl</span><span style=\"color:#657b83;\"><S> </span><span style=\"color:#b58900;\">Router</span><span style=\"color:#657b83;\"><S> {\n </span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">into_make_service</span><span style=\"color:#657b83;\">(</span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">) -> IntoMakeService<S>\n </span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n S: Clone;\n}\n</span></code></pre>\n<p>And going down the rabbit hole a bit further:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">IntoMakeService</span><span style=\"color:#657b83;\"><S> { </span><span style=\"color:#93a1a1;\">/* fields omitted */ </span><span style=\"color:#657b83;\">}\n\n</span><span style=\"color:#268bd2;\">impl</span><span style=\"color:#657b83;\"><S: </span><span style=\"color:#859900;\">Clone</span><span style=\"color:#657b83;\">, T> Service<T> </span><span style=\"color:#859900;\">for </span><span 
style=\"color:#b58900;\">IntoMakeService</span><span style=\"color:#657b83;\"><S> {\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Response </span><span style=\"color:#657b83;\">= S;\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Error </span><span style=\"color:#657b83;\">= Infallible;\n </span><span style=\"color:#93a1a1;\">// other stuff omitted\n</span><span style=\"color:#657b83;\">}\n</span></code></pre>\n<p>The type <code>Router<S></code> is a value that can produce a service of type <code>S</code>. <code>IntoMakeService<S></code> will take some kind of connection info, <code>T</code>, and produce that service <code>S</code> asynchronously. And since <code>Error</code> is <code>Infallible</code>, we know it can't fail. But as much as we say "asynchronously", looking at the implementation of <code>Service</code> for <code>IntoMakeService</code>, we see a familiar pattern:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">poll_ready</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">_cx</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#657b83;\">Context<'</span><span style=\"color:#859900;\">_</span><span style=\"color:#657b83;\">>) -> Poll<</span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><(), </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>> {\n Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(()))\n}\n\n</span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">call</span><span 
style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">_target</span><span style=\"color:#657b83;\">: T) -> </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Future {\n future::MakeRouteServiceFuture {\n future: </span><span style=\"color:#859900;\">ready</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(</span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.service.</span><span style=\"color:#859900;\">clone</span><span style=\"color:#657b83;\">())),\n }\n}\n</span></code></pre>\n<p>Also, notice how that <code>T</code> value for connection info doesn't actually have any bounds or other information. <code>IntoMakeService</code> just throws away the connection information. (If you need it for some reason, see <a href=\"https://docs.rs/axum/0.2.3/axum/routing/struct.Router.html#method.into_make_service_with_connect_info\"><code>into_make_service_with_connect_info</code></a>.) In other words:</p>\n<ul>\n<li><code>Router<S></code> is a type that lets us add routes and middleware layers</li>\n<li>You can convert a <code>Router<S></code> into an <code>IntoMakeService<S></code></li>\n<li>But <code>IntoMakeService<S></code> is really just a fancy wrapper around an <code>S</code> to appease the Hyper requirements around app factories</li>\n<li>So the real workhorse here is just <code>S</code></li>\n</ul>\n<p>So where does that <code>S</code> type come from? It's built up by all the <code>route</code> and <code>layer</code> calls you make. 
For example, check out the <code>get</code> function's signature:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">get</span><span style=\"color:#657b83;\"><H, B, T>(</span><span style=\"color:#268bd2;\">handler</span><span style=\"color:#657b83;\">: H) -> OnMethod<H, B, T, EmptyRouter>\n</span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n H: Handler<B, T>,\n\npub struct OnMethod<H, B, T, F> { </span><span style=\"color:#93a1a1;\">/* fields omitted */ </span><span style=\"color:#657b83;\">}\n\n</span><span style=\"color:#268bd2;\">impl</span><span style=\"color:#657b83;\"><H, B, T, F> Service<Request<B>> </span><span style=\"color:#859900;\">for </span><span style=\"color:#b58900;\">OnMethod</span><span style=\"color:#657b83;\"><H, B, T, F>\n</span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n H: Handler<B, T>,\n F: Service<Request<B>, Response = Response<BoxBody>, Error = Infallible> + Clone,\n B: Send + </span><span style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">,\n{\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Response </span><span style=\"color:#657b83;\">= Response<BoxBody>;\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Error </span><span style=\"color:#657b83;\">= Infallible;\n </span><span style=\"color:#93a1a1;\">// and more stuff\n</span><span style=\"color:#657b83;\">}\n</span></code></pre>\n<p><code>get</code> returns an <code>OnMethod</code> value. And <code>OnMethod</code> is a <code>Service</code> that takes a <code>Request<B></code> and returns a <code>Response<BoxBody></code>. There's some funny business at play regarding the representations of bodies, which we'll eventually dive into a bit more. 
But with our newfound understanding of Tower and Hyper, the types at play here are no longer inscrutable. In fact, they may even be scrutable!</p>\n<p>And one final note on the example above. Axum works directly with a lot of the Hyper machinery. And that includes the <code>Server</code> type. While the <code>axum</code> crate reexports many things from Hyper, you can use those types directly from Hyper instead if so desired. In other words, Axum is pretty close to the underlying libraries, simply providing some convenience on top. It's one of the reasons I'm pretty excited to get a bit deeper into my experiments with Axum.</p>\n<p>So to sum up at this point:</p>\n<ul>\n<li>Tower provides an abstraction for asynchronous functions from input to output, which may fail. This is called a service.</li>\n<li>HTTP servers have two levels of services. The lower level is a service from HTTP requests to HTTP responses. The upper level is a service from connection information to the lower level service.</li>\n<li>Hyper has a lot of additional traits floating around, some visible, some invisible, which allow for more generality, and also make things a bit more complicated to understand.</li>\n<li>Axum sits on top of Hyper and provides an easier to use interface for many common cases. It does this by providing the same kind of services that Hyper is expecting to see. And it seems to be doing a bunch of fancy footwork around HTTP body representations.</li>\n</ul>\n<p>Next step on our journey: let's look at another library for building Hyper services. 
We'll follow up on this in our next post.</p>\n<p class=\"text-center\"><a class=\"btn btn-info\" href=\"/blog/axum-hyper-tonic-tower-part3\">Read part 3 now</a></p>\n<p>If you're looking for more Rust content from FP Complete, check out:</p>\n<ul>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n</ul>\n",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/",
"slug": "axum-hyper-tonic-tower-part2",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 2",
"description": "Part 2 of a blog post series examining the Hyper/Tower web ecosystem in Rust, and specifically combining the Axum framework and Tonic gRPC servers.",
"updated": null,
"date": "2021-09-06",
"year": 2021,
"month": 9,
"day": 6,
"taxonomies": {
"tags": [
"rust"
],
"categories": [
"functional programming"
]
},
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/axum-hyper-tonic-tower-part2.png",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "blog/axum-hyper-tonic-tower-part2/",
"components": [
"blog",
"axum-hyper-tonic-tower-part2"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "quick-recap",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#quick-recap",
"title": "Quick recap",
"children": []
},
{
"level": 2,
"id": "services-in-hyper",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#services-in-hyper",
"title": "Services in Hyper",
"children": []
},
{
"level": 2,
"id": "bypassing-service-fn-and-make-service-fn",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#bypassing-service-fn-and-make-service-fn",
"title": "Bypassing service_fn and make_service_fn",
"children": []
},
{
"level": 2,
"id": "understanding-the-traits",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#understanding-the-traits",
"title": "Understanding the traits",
"children": [
{
"level": 3,
"id": "i-accept",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#i-accept",
"title": "I: Accept",
"children": []
},
{
"level": 3,
"id": "s-makeserviceref",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#s-makeserviceref",
"title": "S: MakeServiceRef",
"children": []
},
{
"level": 3,
"id": "newsvcexec-and-connstreamexec",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#newsvcexec-and-connstreamexec",
"title": "NewSvcExec and ConnStreamExec",
"children": []
}
]
},
{
"level": 2,
"id": "using-axum",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/#using-axum",
"title": "Using Axum",
"children": []
}
],
"word_count": 3119,
"reading_time": 16,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/axum-hyper-tonic-tower-part1.md",
"content": "<p>I've played around with various web server libraries and frameworks in Rust, and found various strengths and weaknesses with them. Most recently, I put together an FP Complete solution called Zehut (which I'll blog about another time) that needed to combine a web frontend and gRPC server. I used Hyper, Tonic, and a minimal library I put together called <a href=\"https://github.com/snoyberg/routetype-rs\">routetype</a>. It worked, but I was left underwhelmed. Working directly with Hyper, even with the minimal <code>routetype</code> layer, felt too ad-hoc.</p>\n<p>When I recently saw the release of <a href=\"https://lib.rs/crates/axum\">Axum</a>, it seemed to be speaking to many of the needs I had, especially calling out Tonic support. I decided to make an experiment of replacing the direct Hyper+<code>routetype</code> usage I'd used with Axum. Overall the approach works, but (like the <code>routetype</code> work I'd already done) involved some hairy business around the Hyper and Tower APIs.</p>\n<p>I've been meaning to write some blog post/tutorial/experience report for Hyper+Tower for a while now. So I decided to take this opportunity to step through these four libraries (Tower, Hyper, Axum, and Tonic), with the specific goal in mind of creating hybrid web/gRPC apps. It turned out that there was more information here than I'd anticipated. 
To make for easier reading, I've split this up into a four part blog post series:</p>\n<ol>\n<li>Today's post: overview of Tower</li>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/\">Understanding Hyper, and first experiences with Axum</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part3/\">Demonstration of Tonic for a gRPC client/server</a></li>\n<li><a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part4/\">How to combine Axum and Tonic services into a single service</a></li>\n</ol>\n<p>Let's dive in!</p>\n<p class=\"text-center\" style=\"border: 1px solid #000;border-radius:1rem;padding:1rem;background-color:#f1f1f1\">\n <a class=\"btn btn-primary\" href=\"https://blogtrottr.com/?subscribe=https://www.fpcomplete.com/feed/\" target=\"_blank\">\n Subscribe to our blog via email\n </a>\n <br>\n <small>Email subscriptions come from our <a target=\"_blank\" href=\"/feed/\">Atom feed</a> and are handled by <a target=\"_blank\" href=\"https://blogtrottr.com\">Blogtrottr</a>. You will only receive notifications of blog posts, and can unsubscribe any time.</small>\n</p>\n<h2 id=\"what-is-tower\">What is Tower?</h2>\n<p>The first stop on our journey is the <a href=\"https://lib.rs/crates/tower\">tower crate</a>. To quote the docs, which state this succinctly:</p>\n<blockquote>\n<p>Tower provides a simple core abstraction, the <code>Service</code> trait, which represents an asynchronous function taking a request and returning either a response or an error. This abstraction can be used to model both clients and servers.</p>\n</blockquote>\n<p>This sounds fairly straightforward. To express it in Haskell syntax, I'd probably say <code>Request -> IO Response</code>, leveraging the fact that <code>IO</code> handles both error handling and asynchronous I/O. 
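</p>
<p>In Rust terms, that simplified signature is roughly an async function from a request to a response. The sketch below is std-only and invented for illustration (the <code>Request</code>/<code>Response</code> structs and the hand-rolled no-op waker are not from Hyper or Tower, and a real server would use an executor such as Tokio rather than polling by hand):</p>

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Stand-in types, invented for illustration.
struct Request { path: String }
struct Response { body: String }

// Roughly the Haskell shape `Request -> IO Response`: `async` covers the
// asynchrony; in real code the return type would usually also be a Result
// to cover the error side.
async fn handle(req: Request) -> Response {
    Response { body: format!("you asked for {}", req.path) }
}

// Minimal no-op waker so we can poll once without an executor; `handle`
// has no await points, so a single poll always completes it.
fn noop_raw() -> RawWaker {
    fn clone(_: *const ()) -> RawWaker { noop_raw() }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

fn main() {
    let waker = unsafe { Waker::from_raw(noop_raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(handle(Request { path: "/".to_owned() }));
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(res) => println!("{}", res.body), // prints: you asked for /
        Poll::Pending => unreachable!("no await points"),
    }
}
```

<p>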
But the <code>Service</code> trait is necessarily more complex than that simplified signature:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">trait </span><span style=\"color:#b58900;\">Service</span><span style=\"color:#657b83;\"><Request> {\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Response</span><span style=\"color:#657b83;\">;\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Error</span><span style=\"color:#657b83;\">;\n\n </span><span style=\"color:#93a1a1;\">// This is what it says in the generated docs\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Future</span><span style=\"color:#657b83;\">: Future;\n\n </span><span style=\"color:#93a1a1;\">// But this more informative piece is in the actual source code\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Future</span><span style=\"color:#657b83;\">: Future<Output = </span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><</span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Response, </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>>;\n\n </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">poll_ready</span><span style=\"color:#657b83;\">(\n </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">,\n </span><span style=\"color:#268bd2;\">cx</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#657b83;\">Context<'</span><span style=\"color:#859900;\">_</span><span style=\"color:#657b83;\">>\n ) -> Poll<</span><span style=\"color:#859900;\">Result</span><span 
style=\"color:#657b83;\"><(), </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>>;\n </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">call</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">req</span><span style=\"color:#657b83;\">: Request) -> </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Future;\n}\n</span></code></pre>\n<p><code>Service</code> is a trait, parameterized on the types of <code>Request</code>s it can handle. There's nothing specific about HTTP in Tower, so <code>Request</code>s may be lots of different things. And even within Hyper, an HTTP library leveraging Tower, we'll see that there are at least two different types of <code>Request</code> we care about.</p>\n<p>Anyway, two of the associated types here are straightforward: <code>Response</code> and <code>Error</code>. Combining the parameterized <code>Request</code> with <code>Response</code> and <code>Error</code>, we basically have all the information we care about for a <code>Service</code>.</p>\n<p>But it's <em>not</em> all the information Rust cares about. To allow for asynchronous calls, we need to provide a <code>Future</code>. And the compiler needs to know the type of the <code>Future</code> we'll be returning. This isn't really useful information to us as programmers, but there are <a href=\"https://lib.rs/crates/async-trait\">plenty of pain points already</a> around <code>async</code> code in traits.</p>\n<p>And finally, what about those last two methods? They are there to allow the <code>Service</code> itself to be asynchronous. It took me quite a while to fully wrap my head around this. 
We have two different components of async behavior going on here:</p>\n<ul>\n<li>The <code>Service</code> may not be immediately ready to handle a new incoming request. For example (coming from <a href=\"https://docs.rs/tower-service/0.3.1/src/tower_service/lib.rs.html#244-257\">the docs on <code>poll_ready</code></a>), the server may currently be at capacity. You need to check <code>poll_ready</code> to find out whether the <code>Service</code> is ready to accept a new request. Then, when it's ready, you use <code>call</code> to initiate handling of a new <code>Request</code>.</li>\n<li>The handling of the request itself is <em>also</em> async, returning a <code>Future</code>, which can be polled/awaited.</li>\n</ul>\n<p>Some of this complexity can be hidden away. For example, instead of giving a concrete type for <code>Future</code>, you can use a trait object (a.k.a. type erasure). Stealing again from the docs, the following is a perfectly valid associated type for <code>Future</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Future </span><span style=\"color:#657b83;\">= Pin<</span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn Future<Output = </span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><</span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Response, </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>>>>;\n</span></code></pre>\n<p>However, this incurs some overhead for dynamic dispatch.</p>\n<p>Finally, these two layers of async behavior are often unnecessary. Many times, our server is <em>always</em> ready to handle a new incoming <code>Request</code>. In the wild, you'll often see code that hard-codes the idea that a service is always ready. 
To quote from those docs for the final time in this section:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">poll_ready</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">cx</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#657b83;\">Context<'</span><span style=\"color:#859900;\">_</span><span style=\"color:#657b83;\">>) -> Poll<</span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><(), </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>> {\n Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(()))\n}\n</span></code></pre>\n<p>This isn't saying that request handling is synchronous in our <code>Service</code>. It's saying that request <em>acceptance</em> always succeeds immediately.</p>\n<p>Going along with the two layers of async handling, there are similarly two layers of error handling. Both accepting the new request and processing it may fail. But as you can see in the code above, it's possible to hard-code something which always succeeds with <code>Ok(())</code>, which is fairly common for <code>poll_ready</code>. 
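</p>
<p>To see both layers hard-coded to "cannot fail" without pulling in any crates, here is a minimal std-only sketch that copies the shape of the <code>Service</code> trait by hand. The <code>Upper</code> service and all its names are invented for illustration; real code would implement <code>tower::Service</code> itself:</p>

```rust
use std::convert::Infallible;
use std::future::{ready, Future, Ready};
use std::task::{Context, Poll};

// Hand-rolled copy of the *shape* of Tower's Service trait, just to show
// the types lining up; this is not the real tower::Service.
trait Service<Request> {
    type Response;
    type Error;
    type Future: Future<Output = Result<Self::Response, Self::Error>>;
    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>>;
    fn call(&mut self, req: Request) -> Self::Future;
}

// Invented example service: uppercases its "request".
struct Upper;

impl Service<String> for Upper {
    type Response = String;
    type Error = Infallible; // processing can never fail...
    type Future = Ready<Result<String, Infallible>>;

    fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Infallible>> {
        Poll::Ready(Ok(())) // ...and accepting a request never fails either
    }

    fn call(&mut self, req: String) -> Self::Future {
        ready(Ok(req.to_uppercase()))
    }
}

fn main() {
    // `Ready::into_inner` (stable since Rust 1.82) extracts the value
    // without needing an executor.
    let res = Upper.call("hello".to_owned()).into_inner().unwrap();
    println!("{res}"); // prints HELLO
}
```

<p>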
When processing the request itself also cannot fail, using <a href=\"https://doc.rust-lang.org/stable/std/convert/enum.Infallible.html\"><code>Infallible</code></a> (and eventually <a href=\"https://doc.rust-lang.org/stable/std/primitive.never.html\">the <code>never</code> type</a>) as the <code>Error</code> associated type is a good call.</p>\n<h2 id=\"fake-web-server\">Fake web server</h2>\n<p>That was all relatively abstract, which is part of the problem with understanding Tower (at least for me). Let's make it more concrete by implementing a fake web server and fake web application. My <code>Cargo.toml</code> file looks like:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">[</span><span style=\"color:#b58900;\">package</span><span style=\"color:#657b83;\">]\n</span><span style=\"color:#268bd2;\">name </span><span style=\"color:#657b83;\">= </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">learntower</span><span style=\"color:#839496;\">"\n</span><span style=\"color:#268bd2;\">version </span><span style=\"color:#657b83;\">= </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">0.1.0</span><span style=\"color:#839496;\">"\n</span><span style=\"color:#268bd2;\">edition </span><span style=\"color:#657b83;\">= </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">2018</span><span style=\"color:#839496;\">"\n\n</span><span style=\"color:#657b83;\">[</span><span style=\"color:#b58900;\">dependencies</span><span style=\"color:#657b83;\">]\n</span><span style=\"color:#268bd2;\">tower </span><span style=\"color:#657b83;\">= { </span><span style=\"color:#268bd2;\">version </span><span style=\"color:#657b83;\">= </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">0.4</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">features </span><span style=\"color:#657b83;\">= 
[</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">full</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">] }\n</span><span style=\"color:#268bd2;\">tokio </span><span style=\"color:#657b83;\">= { </span><span style=\"color:#268bd2;\">version </span><span style=\"color:#657b83;\">= </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">1</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">features </span><span style=\"color:#657b83;\">= [</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">full</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">] }\n</span><span style=\"color:#268bd2;\">anyhow </span><span style=\"color:#657b83;\">= </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">1</span><span style=\"color:#839496;\">"\n</span></code></pre>\n<p>I've uploaded <a href=\"https://gist.github.com/snoyberg/c6c54ed38ec8fac966e362eb212ab421\">the full source code as a Gist</a>, but let's walk through this example. 
First we define some helper types to represent HTTP request and response values:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">Request </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">path_and_query</span><span style=\"color:#657b83;\">: String,\n </span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">headers</span><span style=\"color:#657b83;\">: HashMap<</span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">, </span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">>,\n </span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">body</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">Vec</span><span style=\"color:#657b83;\"><</span><span style=\"color:#268bd2;\">u8</span><span style=\"color:#657b83;\">>,\n}\n\n#[</span><span style=\"color:#268bd2;\">derive</span><span style=\"color:#657b83;\">(Debug)]\n</span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">Response </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">status</span><span style=\"color:#657b83;\">: </span><span style=\"color:#268bd2;\">u32</span><span style=\"color:#657b83;\">,\n </span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">headers</span><span style=\"color:#657b83;\">: HashMap<</span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">, </span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">>,\n </span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">body</span><span style=\"color:#657b83;\">: </span><span 
style=\"color:#859900;\">Vec</span><span style=\"color:#657b83;\"><</span><span style=\"color:#268bd2;\">u8</span><span style=\"color:#657b83;\">>,\n}\n</span></code></pre>\n<p>Next we want to define a function, <code>run</code>, which:</p>\n<ul>\n<li>Accepts a web application as an argument</li>\n<li>Loops infinitely</li>\n<li>Generates fake <code>Request</code> values</li>\n<li>Prints out the <code>Response</code> values it gets from the application</li>\n</ul>\n<p>The first question is: how do you represent that web application? It's going to be an implementation of <code>Service</code>, with the <code>Request</code> and <code>Response</code> types being those we defined above. We don't need to know much about the errors, since we'll simply print them. These parts are pretty easy:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#586e75;\">pub</span><span style=\"color:#657b83;\"> async </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">run</span><span style=\"color:#657b83;\"><App>(</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">app</span><span style=\"color:#657b83;\">: App)\n</span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n App: Service<crate::http::Request, Response = crate::http::Response>,\n </span><span style=\"color:#268bd2;\">App::</span><span style=\"color:#657b83;\">Error: std::fmt::Debug,\n</span></code></pre>\n<p>But there's one final bound we need to take into account. We want our fake web server to be able to handle requests concurrently. To do that, we'll use <code>tokio::spawn</code> to create new tasks for handling requests. Therefore, we need to be able to send the request handling to a separate task, which will require bounds of both <code>Send</code> and <code>'static</code>. 
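</p>
<p>As an aside, the same <code>Send + 'static</code> requirement arises with plain OS threads, which gives a crate-free way to see why spawning imposes those bounds. This sketch is an illustration only, using <code>std::thread::spawn</code> in place of <code>tokio::spawn</code>:</p>

```rust
// Illustration only: `thread::spawn` imposes the same `Send + 'static`
// bounds as `tokio::spawn`, since both move work to another execution
// context that may outlive the current stack frame.
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            // Clone in the current context, then move the clone into the
            // spawned task -- the same pattern used below for sending each
            // request's work to its own task.
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                counter.fetch_add(1, Ordering::SeqCst);
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("{}", counter.load(Ordering::SeqCst)); // prints 4
}
```

<p>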
There are at least two different ways of handling this:</p>\n<ul>\n<li>Cloning the <code>App</code> value in the main task and sending it to the spawned task</li>\n<li>Creating the <code>Future</code> in the main task and sending it to the spawned task</li>\n</ul>\n<p>There are different runtime impacts of making this decision, such as whether the main request accept loop will be blocked or not by the application reporting that it's not available for requests. I decided to go with the latter approach. So we've got one more bound on <code>run</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">App::Future: </span><span style=\"color:#859900;\">Send </span><span style=\"color:#657b83;\">+ </span><span style=\"color:#586e75;\">'static</span><span style=\"color:#657b83;\">,\n</span></code></pre>\n<p>The body of <code>run</code> is wrapped inside a <code>loop</code> to allow simulating an infinitely running server. First we sleep for a bit and then generate our new fake request:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">tokio::time::sleep(tokio::time::Duration::from_secs(</span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">)).await;\n\n</span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> req = </span><span style=\"color:#859900;\">crate</span><span style=\"color:#657b83;\">::http::Request {\n path_and_query: </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">/fake/path?page=1</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">to_owned</span><span style=\"color:#657b83;\">(),\n headers: HashMap::new(),\n body: </span><span style=\"color:#859900;\">Vec</span><span style=\"color:#657b83;\">::new(),\n};\n</span></code></pre>\n<p>Next, we use the <code>ready</code> method (from the <code>ServiceExt</code> extension trait) to check whether the service is 
ready to accept a new request:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> app = </span><span style=\"color:#859900;\">match</span><span style=\"color:#657b83;\"> app.</span><span style=\"color:#859900;\">ready</span><span style=\"color:#657b83;\">().await {\n </span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e) </span><span style=\"color:#859900;\">=> </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#859900;\">eprintln!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Service not able to accept requests: </span><span style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, e);\n </span><span style=\"color:#859900;\">continue</span><span style=\"color:#657b83;\">;\n }\n </span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(app) </span><span style=\"color:#859900;\">=></span><span style=\"color:#657b83;\"> app,\n};\n</span></code></pre>\n<p>Once we know we can make another request, we get our <code>Future</code>, spawn the task, and then wait for the <code>Future</code> to complete:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> future = app.</span><span style=\"color:#859900;\">call</span><span style=\"color:#657b83;\">(req);\ntokio::spawn(async </span><span style=\"color:#586e75;\">move </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#859900;\">match</span><span style=\"color:#657b83;\"> future.await {\n </span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(res) </span><span style=\"color:#859900;\">=> println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Successful response: </span><span 
style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, res),\n </span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(e) </span><span style=\"color:#859900;\">=> eprintln!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Error occurred: </span><span style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, e),\n }\n});\n</span></code></pre>\n<p>And just like that, we have a fake web server! Now it's time to implement our fake web application. I'll call it <code>DemoApp</code>, and give it an atomic counter to make things slightly interesting:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">#[</span><span style=\"color:#268bd2;\">derive</span><span style=\"color:#657b83;\">(Default)]\n</span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">DemoApp </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">counter</span><span style=\"color:#657b83;\">: Arc<AtomicUsize>,\n}\n</span></code></pre>\n<p>Next comes the implementation of <code>Service</code>. 
The first few bits are relatively easy:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">impl </span><span style=\"color:#657b83;\">tower::Service<crate::http::Request> </span><span style=\"color:#859900;\">for </span><span style=\"color:#b58900;\">DemoApp </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Response </span><span style=\"color:#657b83;\">= </span><span style=\"color:#859900;\">crate</span><span style=\"color:#657b83;\">::http::Response;\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Error </span><span style=\"color:#657b83;\">= anyhow::Error;\n #[</span><span style=\"color:#268bd2;\">allow</span><span style=\"color:#657b83;\">(clippy::type_complexity)]\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Future </span><span style=\"color:#657b83;\">= Pin<</span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\"><dyn Future<Output = </span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><</span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Response, </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>> </span><span style=\"color:#859900;\">+ Send</span><span style=\"color:#657b83;\">>>;\n\n </span><span style=\"color:#93a1a1;\">// Still need poll_ready and call\n</span><span style=\"color:#657b83;\">}\n</span></code></pre>\n<p><code>Request</code> and <code>Response</code> get set to the types we defined, we'll use the wonderful <code>anyhow</code> crate's <code>Error</code> type, and we'll use a trait object for the <code>Future</code>. 
We're going to implement a <code>poll_ready</code> which is always ready for a <code>Request</code>:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">poll_ready</span><span style=\"color:#657b83;\">(\n </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">,\n </span><span style=\"color:#268bd2;\">_cx</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#657b83;\">std::task::Context<'</span><span style=\"color:#859900;\">_</span><span style=\"color:#657b83;\">>,\n) -> Poll<</span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><(), </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>> {\n Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">(())) </span><span style=\"color:#93a1a1;\">// always ready to accept a connection\n</span><span style=\"color:#657b83;\">}\n</span></code></pre>\n<p>And finally we get to our <code>call</code> method. We're going to implement some logic to increment the counter, fail 25% of the time, and otherwise echo back the request from the user, with an added <code>X-Counter</code> response header. 
Let's see it in action:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">call</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">req</span><span style=\"color:#657b83;\">: crate::http::Request) -> </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Future {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = </span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.counter.</span><span style=\"color:#859900;\">clone</span><span style=\"color:#657b83;\">();\n </span><span style=\"color:#859900;\">Box</span><span style=\"color:#657b83;\">::pin(async </span><span style=\"color:#586e75;\">move </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Handling a request for </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, req.path_and_query);\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = counter.</span><span style=\"color:#859900;\">fetch_add</span><span style=\"color:#657b83;\">(</span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">, std::sync::atomic::Ordering::SeqCst);\n anyhow::ensure</span><span style=\"color:#859900;\">!</span><span style=\"color:#657b83;\">(counter % </span><span style=\"color:#6c71c4;\">4 </span><span style=\"color:#657b83;\">!= </span><span style=\"color:#6c71c4;\">2</span><span style=\"color:#657b83;\">, </span><span style=\"color:#839496;\">"</span><span 
style=\"color:#2aa198;\">Failing 25% of the time, just for fun</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">);\n req.headers\n .</span><span style=\"color:#859900;\">insert</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">X-Counter</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">to_owned</span><span style=\"color:#657b83;\">(), counter.</span><span style=\"color:#859900;\">to_string</span><span style=\"color:#657b83;\">());\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> res = </span><span style=\"color:#859900;\">crate</span><span style=\"color:#657b83;\">::http::Response {\n status: </span><span style=\"color:#6c71c4;\">200</span><span style=\"color:#657b83;\">,\n headers: req.headers,\n body: req.body,\n };\n </span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">::<</span><span style=\"color:#859900;\">_</span><span style=\"color:#657b83;\">, anyhow::Error>(res)\n })\n}\n</span></code></pre>\n<p>With all that in place, running our fake web app on our fake web server is nice and easy:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">#[</span><span style=\"color:#268bd2;\">tokio</span><span style=\"color:#657b83;\">::</span><span style=\"color:#268bd2;\">main</span><span style=\"color:#657b83;\">]\nasync </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n fakeserver::run(app::DemoApp::default()).await;\n}\n</span></code></pre><h2 id=\"app-fn\"><code>app_fn</code></h2>\n<p>One thing that's particularly unsatisfying about the code above is how much ceremony it takes to write a web application. 
I need to create a new data type, provide a <code>Service</code> implementation for it, and futz around with all that <code>Pin<Box<Future>></code> business to make things line up. The core logic of our <code>DemoApp</code> is buried inside the <code>call</code> method. It would be nice to provide a helper of some kind that lets us define things more easily.</p>\n<p>You can check out <a href=\"https://gist.github.com/snoyberg/cb72a9cbefc608ec15e05ed70ced1a6b\">the full code as a Gist</a>. But let's talk through it here. We're going to implement a new helper <code>app_fn</code> function which takes a closure as its argument. That closure will take in a <code>Request</code> value, and then return a <code>Response</code>. But we want to make sure it asynchronously returns the <code>Response</code>. So we'll need our calls to look something like:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">app_fn</span><span style=\"color:#657b83;\">(|</span><span style=\"color:#268bd2;\">req</span><span style=\"color:#657b83;\">| async { </span><span style=\"color:#859900;\">some_code</span><span style=\"color:#657b83;\">(req).await })\n</span></code></pre>\n<p>This <code>app_fn</code> function needs to return a type which provides our <code>Service</code> implementation. Let's call it <code>AppFn</code>. 
Putting these two things together, we get:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">AppFn</span><span style=\"color:#657b83;\"><F> {\n </span><span style=\"color:#268bd2;\">f</span><span style=\"color:#657b83;\">: F,\n}\n\n</span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">app_fn</span><span style=\"color:#657b83;\"><F, Ret>(</span><span style=\"color:#268bd2;\">f</span><span style=\"color:#657b83;\">: F) -> AppFn<F>\n</span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n F: FnMut(crate::http::Request) -> Ret,\n Ret: Future<Output = </span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><crate::http::Response, anyhow::Error>>,\n{\n AppFn { f }\n}\n</span></code></pre>\n<p>So far, so good. We can see with the bounds on <code>app_fn</code> that we'll accept a <code>Request</code> and return some <code>Ret</code> type, and <code>Ret</code> must be a <code>Future</code> that produces a <code>Result<Response, Error></code>. 
Implementing <code>Service</code> for this isn't too bad:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">impl</span><span style=\"color:#657b83;\"><F, Ret> tower::Service<crate::http::Request> </span><span style=\"color:#859900;\">for </span><span style=\"color:#b58900;\">AppFn</span><span style=\"color:#657b83;\"><F>\n</span><span style=\"color:#859900;\">where</span><span style=\"color:#657b83;\">\n F: FnMut(crate::http::Request) -> Ret,\n Ret: Future<Output = </span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><crate::http::Response, anyhow::Error>>,\n{\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Response </span><span style=\"color:#657b83;\">= </span><span style=\"color:#859900;\">crate</span><span style=\"color:#657b83;\">::http::Response;\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Error </span><span style=\"color:#657b83;\">= anyhow::Error;\n </span><span style=\"color:#268bd2;\">type </span><span style=\"color:#b58900;\">Future </span><span style=\"color:#657b83;\">= Ret;\n\n </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">poll_ready</span><span style=\"color:#657b83;\">(\n </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">,\n </span><span style=\"color:#268bd2;\">_cx</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#657b83;\">std::task::Context<'</span><span style=\"color:#859900;\">_</span><span style=\"color:#657b83;\">>,\n ) -> Poll<</span><span style=\"color:#859900;\">Result</span><span style=\"color:#657b83;\"><(), </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Error>> {\n Poll::Ready(</span><span style=\"color:#859900;\">Ok</span><span 
style=\"color:#657b83;\">(())) </span><span style=\"color:#93a1a1;\">// always ready to accept a connection\n </span><span style=\"color:#657b83;\">}\n\n </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">call</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut </span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">req</span><span style=\"color:#657b83;\">: crate::http::Request) -> </span><span style=\"color:#268bd2;\">Self::</span><span style=\"color:#657b83;\">Future {\n (</span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.f)(req)\n }\n}\n</span></code></pre>\n<p>We have the same bounds as on <code>app_fn</code>, the associated types <code>Response</code> and <code>Error</code> are straightforward, and <code>poll_ready</code> is the same as it was before. The first interesting bit is <code>type Future = Ret;</code>. We previously went the route of a trait object, which was more verbose and less performant. This time, we already have a type, <code>Ret</code>, that represents the <code>Future</code> the caller of our function will be providing. It's really nice that we get to simply use it here!</p>\n<p>The <code>call</code> method leverages the function provided by the caller to produce a new <code>Ret</code>/<code>Future</code> value per incoming request and hand it back to the web server for processing.</p>\n<p>And finally, our <code>main</code> function can now embed our application logic inside it as a closure. 
This looks like:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">#[</span><span style=\"color:#268bd2;\">tokio</span><span style=\"color:#657b83;\">::</span><span style=\"color:#268bd2;\">main</span><span style=\"color:#657b83;\">]\nasync </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = Arc::new(AtomicUsize::new(</span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">));\n fakeserver::run(util::app_fn(</span><span style=\"color:#586e75;\">move </span><span style=\"color:#859900;\">|</span><span style=\"color:#586e75;\">mut</span><span style=\"color:#657b83;\"> req</span><span style=\"color:#859900;\">| </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#93a1a1;\">// need to clone this from the closure before moving it into the async block\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = counter.</span><span style=\"color:#859900;\">clone</span><span style=\"color:#657b83;\">();\n async </span><span style=\"color:#586e75;\">move </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Handling a request for </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, req.path_and_query);\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = counter.</span><span style=\"color:#859900;\">fetch_add</span><span style=\"color:#657b83;\">(</span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">, std::sync::atomic::Ordering::SeqCst);\n anyhow::ensure</span><span style=\"color:#859900;\">!</span><span 
style=\"color:#657b83;\">(counter % </span><span style=\"color:#6c71c4;\">4 </span><span style=\"color:#657b83;\">!= </span><span style=\"color:#6c71c4;\">2</span><span style=\"color:#657b83;\">, </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Failing 25% of the time, just for fun</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">);\n req.headers\n .</span><span style=\"color:#859900;\">insert</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">X-Counter</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">to_owned</span><span style=\"color:#657b83;\">(), counter.</span><span style=\"color:#859900;\">to_string</span><span style=\"color:#657b83;\">());\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> res = </span><span style=\"color:#859900;\">crate</span><span style=\"color:#657b83;\">::http::Response {\n status: </span><span style=\"color:#6c71c4;\">200</span><span style=\"color:#657b83;\">,\n headers: req.headers,\n body: req.body,\n };\n </span><span style=\"color:#859900;\">Ok</span><span style=\"color:#657b83;\">::<</span><span style=\"color:#859900;\">_</span><span style=\"color:#657b83;\">, anyhow::Error>(res)\n }\n }))\n .await;\n}\n</span></code></pre><h3 id=\"side-note-the-extra-clone\">Side note: the extra clone</h3>\n<p>From bitter experience, both my own and others I've spoken with, that <code>let counter = counter.clone();</code> above is likely the trickiest piece of the code above. 
It's all too easy to write code that looks something like:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = Arc::new(AtomicUsize::new(</span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">));\nfakeserver::run(util::app_fn(</span><span style=\"color:#586e75;\">move </span><span style=\"color:#859900;\">|</span><span style=\"color:#657b83;\">_req</span><span style=\"color:#859900;\">|</span><span style=\"color:#657b83;\"> async </span><span style=\"color:#586e75;\">move </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = counter.</span><span style=\"color:#859900;\">fetch_add</span><span style=\"color:#657b83;\">(</span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">, std::sync::atomic::Ordering::SeqCst);\n </span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(anyhow::anyhow</span><span style=\"color:#859900;\">!</span><span style=\"color:#657b83;\">(\n </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Just demonstrating the problem, counter is {}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">,\n counter\n ))\n}))\n.await;\n</span></code></pre>\n<p>This looks perfectly reasonable. We move the <code>counter</code> into the closure and then use it. 
However, the compiler isn't too happy with us:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">error[</span><span style=\"color:#cb4b16;\">E0507</span><span style=\"color:#657b83;\">]: cannot </span><span style=\"color:#586e75;\">move</span><span style=\"color:#657b83;\"> out of `counter`, a captured variable </span><span style=\"color:#859900;\">in</span><span style=\"color:#657b83;\"> an `</span><span style=\"color:#859900;\">FnMut</span><span style=\"color:#657b83;\">` closure\n --> src\\main.rs:</span><span style=\"color:#6c71c4;\">96</span><span style=\"color:#657b83;\">:</span><span style=\"color:#6c71c4;\">57\n </span><span style=\"color:#859900;\">|\n</span><span style=\"color:#6c71c4;\">95 </span><span style=\"color:#859900;\">| </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = Arc::new(AtomicUsize::new(</span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">));\n | ------- captured outer variable\n</span><span style=\"color:#6c71c4;\">96 </span><span style=\"color:#859900;\">| </span><span style=\"color:#657b83;\">fakeserver::run(util::app_fn(</span><span style=\"color:#586e75;\">move </span><span style=\"color:#859900;\">|</span><span style=\"color:#657b83;\">_req</span><span style=\"color:#859900;\">|</span><span style=\"color:#657b83;\"> async </span><span style=\"color:#586e75;\">move </span><span style=\"color:#657b83;\">{\n | _________________________________________________________^\n</span><span style=\"color:#6c71c4;\">97 </span><span style=\"color:#859900;\">| | </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = counter.</span><span style=\"color:#859900;\">fetch_add</span><span style=\"color:#657b83;\">(</span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">, std::sync::atomic::Ordering::SeqCst);\n | | -------\n </span><span style=\"color:#859900;\">| | |\n | | </span><span 
style=\"color:#586e75;\">move</span><span style=\"color:#657b83;\"> occurs because `counter` has </span><span style=\"color:#268bd2;\">type</span><span style=\"color:#657b83;\"> `Arc<AtomicUsize>`, which does not implement the `</span><span style=\"color:#859900;\">Copy</span><span style=\"color:#657b83;\">` </span><span style=\"color:#268bd2;\">trait\n </span><span style=\"color:#859900;\">| | </span><span style=\"color:#586e75;\">move</span><span style=\"color:#657b83;\"> occurs due to </span><span style=\"color:#859900;\">use in</span><span style=\"color:#657b83;\"> generator\n</span><span style=\"color:#6c71c4;\">98 </span><span style=\"color:#859900;\">| | Err</span><span style=\"color:#657b83;\">(anyhow::anyhow</span><span style=\"color:#859900;\">!</span><span style=\"color:#657b83;\">(\n</span><span style=\"color:#6c71c4;\">99 </span><span style=\"color:#859900;\">| | </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Just demonstrating the problem, counter is {}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">,\n</span><span style=\"color:#6c71c4;\">100 </span><span style=\"color:#859900;\">| |</span><span style=\"color:#657b83;\"> counter\n</span><span style=\"color:#6c71c4;\">101 </span><span style=\"color:#859900;\">| | </span><span style=\"color:#657b83;\">))\n</span><span style=\"color:#6c71c4;\">102 </span><span style=\"color:#859900;\">| | </span><span style=\"color:#657b83;\">}))\n </span><span style=\"color:#859900;\">| |</span><span style=\"color:#cb4b16;\">_____</span><span style=\"color:#859900;\">^ </span><span style=\"color:#586e75;\">move</span><span style=\"color:#657b83;\"> out of `counter` occurs here\n</span></code></pre>\n<p>It's a slightly confusing error message. In my opinion, it's confusing because of the formatting I've used. And I've used that formatting because (1) <code>rustfmt</code> encourages it, and (2) the Hyper docs encourage it. 
Let me reformat a bit, and then explain the issue:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = Arc::new(AtomicUsize::new(</span><span style=\"color:#6c71c4;\">0</span><span style=\"color:#657b83;\">));\nfakeserver::run(util::app_fn(</span><span style=\"color:#586e75;\">move </span><span style=\"color:#859900;\">|</span><span style=\"color:#657b83;\">_req</span><span style=\"color:#859900;\">| </span><span style=\"color:#657b83;\">{\n async </span><span style=\"color:#586e75;\">move </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = counter.</span><span style=\"color:#859900;\">fetch_add</span><span style=\"color:#657b83;\">(</span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">, std::sync::atomic::Ordering::SeqCst);\n </span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(anyhow::anyhow</span><span style=\"color:#859900;\">!</span><span style=\"color:#657b83;\">(\n </span><span style=\"color:#839496;\">&quot;</span><span style=\"color:#2aa198;\">Just demonstrating the problem, counter is {}</span><span style=\"color:#839496;\">&quot;</span><span style=\"color:#657b83;\">,\n counter\n ))\n }\n}))\n</span></code></pre>\n<p>The issue is that, in the argument to <code>app_fn</code>, we have two different control structures:</p>\n<ul>\n<li>A move closure, which takes ownership of <code>counter</code> and produces a <code>Future</code></li>\n<li>An <code>async move</code> block, which takes ownership of <code>counter</code></li>\n</ul>\n<p>The problem is that there's only one <code>counter</code> value. It gets moved first into the closure. That means we can't use <code>counter</code> again outside the closure, which we don't try to do. All good. 
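That first move is harmless on its own. As a standalone sketch (using a plain `String` in place of the real counter, names invented purely for illustration), a `move` closure that owns a non-`Copy` value stays callable any number of times, so long as each call merely borrows the captured value:

```rust
fn main() {
    // A non-Copy value, standing in for the Arc<AtomicUsize> counter.
    let name = "counter".to_owned();
    // `move` transfers ownership of `name` into the closure. Each call only
    // borrows `name` internally, so the closure stays callable repeatedly
    // instead of collapsing to FnOnce.
    let greet = move || format!("Hello, {}!", name);
    assert_eq!(greet(), "Hello, counter!");
    assert_eq!(greet(), "Hello, counter!"); // a second call is fine
}
```

The trouble only starts when calling the closure moves the captured value *out* of it, which is what happens next.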
The second thing is that, when that closure is called, the <code>counter</code> value will be moved from the closure into the <code>async move</code> block. That's also fine, but it's only fine once. If you try to call the closure a second time, it would fail, because the <code>counter</code> has already been moved. Therefore, this closure is a <code>FnOnce</code>, not a <code>Fn</code> or <code>FnMut</code>.</p>\n<p>And that's the problem here. As we saw above, we need at least a <code>FnMut</code> as our argument to the fake web server. This makes intuitive sense: we will call our application request handling function multiple times, not just once.</p>\n<p>The fix for this is to clone the <code>counter</code> inside the closure body, but before moving it into the <code>async move</code> block. That's easy enough:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">fakeserver::run(util::app_fn(</span><span style=\"color:#586e75;\">move </span><span style=\"color:#859900;\">|</span><span style=\"color:#657b83;\">_req</span><span style=\"color:#859900;\">| </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = counter.</span><span style=\"color:#859900;\">clone</span><span style=\"color:#657b83;\">();\n async </span><span style=\"color:#586e75;\">move </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> counter = counter.</span><span style=\"color:#859900;\">fetch_add</span><span style=\"color:#657b83;\">(</span><span style=\"color:#6c71c4;\">1</span><span style=\"color:#657b83;\">, std::sync::atomic::Ordering::SeqCst);\n </span><span style=\"color:#859900;\">Err</span><span style=\"color:#657b83;\">(anyhow::anyhow</span><span style=\"color:#859900;\">!</span><span style=\"color:#657b83;\">(\n </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Just demonstrating 
the problem, counter is {}</span><span style=\"color:#839496;\">&quot;</span><span style=\"color:#657b83;\">,\n counter\n ))\n }\n}))\n</span></code></pre>\n<p>This is a really subtle point; hopefully this demonstration helps make it clearer.</p>\n<h2 id=\"connections-and-requests\">Connections and requests</h2>\n<p>There's a simplification in our fake web server above. A real HTTP workflow starts off with a new connection, and then handles a stream of requests off of that connection. In other words, instead of having just one service, we really need two services:</p>\n<ol>\n<li>A service like we have above, which accepts <code>Request</code>s and returns <code>Response</code>s</li>\n<li>A service that accepts connection information and returns one of the above services</li>\n</ol>\n<p>Again, leaning on some terse Haskell syntax, we'd want:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">type </span><span style=\"color:#cb4b16;\">InnerService </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">Request </span><span style=\"color:#859900;\">-&gt; </span><span style=\"color:#cb4b16;\">IO Response\n</span><span style=\"color:#859900;\">type </span><span style=\"color:#cb4b16;\">OuterService </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">ConnectionInfo </span><span style=\"color:#859900;\">-&gt; </span><span style=\"color:#cb4b16;\">IO InnerService\n</span></code></pre>\n<p>Or, to borrow some beautiful Java terminology, we want to create a <em>service factory</em> that takes some connection information and returns a request-handling service. Or, to use Tower/Hyper terminology, we have a <em>service</em>, and a <em>make service</em>. 
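The two-layer idea can be sketched in Rust without any Tower machinery at all. This is a hypothetical, synchronous illustration (the names `make_service` and `ConnInfo` are invented for the example): the outer function is the factory, and each closure it returns is a per-connection request handler:

```rust
// Invented type standing in for real connection information.
struct ConnInfo {
    peer: String,
}

// The "make service": given connection info, build a request handler.
// Per-connection state lives in the data the returned closure captures.
fn make_service(conn: ConnInfo) -> impl Fn(&str) -> String {
    move |path| format!("{} requested {}", conn.peer, path)
}

fn main() {
    // One handler per connection...
    let handler = make_service(ConnInfo {
        peer: "10.0.0.1".to_owned(),
    });
    // ...then many requests served by that one handler.
    assert_eq!(handler("/a"), "10.0.0.1 requested /a");
    assert_eq!(handler("/b"), "10.0.0.1 requested /b");
}
```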
Which, if you've ever been as confused by the Hyper tutorials as I was, may finally explain why &quot;Hello World&quot; requires both a <code>service_fn</code> and <code>make_service_fn</code> call.</p>\n<p>Anyway, diving into all the changes needed to replicate this concept in the code above would take us too far afield, but I've <a href=\"https://gist.github.com/snoyberg/b574ef4ece5f23913c6c70b1f4f22ed5\">provided a Gist showing an <code>AppFactoryFn</code></a>.</p>\n<p>And with that... we've finally played around with fake stuff long enough that we can dive into real-life Hyper code. Hurrah!</p>\n<h2 id=\"next-time\">Next time</h2>\n<p>Up until this point, we've only played with Tower. The next post in this series is available, in which we try to <a href=\"https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part2/\">understand Hyper and experiment with Axum</a>.</p>\n<p class=\"text-center\"><a class=\"btn btn-info\" href=\"/blog/axum-hyper-tonic-tower-part2\">Read part 2 now</a></p>\n<p>If you're looking for more Rust content from FP Complete, check out:</p>\n<ul>\n<li><a href=\"/tags/rust/\">Rust tagged blog posts</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/\">Rust homepage</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n</ul>\n",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/",
"slug": "axum-hyper-tonic-tower-part1",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 1",
"description": "Part 1 of a blog post series examining the Hyper/Tower web ecosystem in Rust, and specifically combining the Axum framework and Tonic gRPC servers.",
"updated": null,
"date": "2021-08-30",
"year": 2021,
"month": 8,
"day": 30,
"taxonomies": {
"categories": [
"functional programming"
],
"tags": [
"rust"
]
},
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/axum-hyper-tonic-tower-part1.png",
"blogimage": "/images/blog-listing/rust.png"
},
"path": "blog/axum-hyper-tonic-tower-part1/",
"components": [
"blog",
"axum-hyper-tonic-tower-part1"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "what-is-tower",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#what-is-tower",
"title": "What is Tower?",
"children": []
},
{
"level": 2,
"id": "fake-web-server",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#fake-web-server",
"title": "Fake web server",
"children": []
},
{
"level": 2,
"id": "app-fn",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#app-fn",
"title": "app_fn",
"children": [
{
"level": 3,
"id": "side-note-the-extra-clone",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#side-note-the-extra-clone",
"title": "Side note: the extra clone",
"children": []
}
]
},
{
"level": 2,
"id": "connections-and-requests",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#connections-and-requests",
"title": "Connections and requests",
"children": []
},
{
"level": 2,
"id": "next-time",
"permalink": "https://www.fpcomplete.com/blog/axum-hyper-tonic-tower-part1/#next-time",
"title": "Next time",
"children": []
}
],
"word_count": 3168,
"reading_time": 16,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/rust-asref-asderef.md",
"content": "<p>What's wrong with this program?</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> option_name: </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">> = </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Alice</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">to_owned</span><span style=\"color:#657b83;\">());\n </span><span style=\"color:#859900;\">match</span><span style=\"color:#657b83;\"> option_name {\n </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(name) </span><span style=\"color:#859900;\">=> println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Name is </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, name),\n </span><span style=\"color:#859900;\">None => println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">No name provided</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">),\n }\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, option_name);\n}\n</span></code></pre>\n<p>The compiler gives us a wonderful error message, including a hint on how to fix it:</p>\n<pre 
style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">error[E0382]: borrow of partially moved value: `option_name`\n --> src\\main.rs:7:22\n |\n4 | Some(name) => println!("Name is {}", name),\n | ---- value partially moved here\n...\n7 | println!("{:?}", option_name);\n | ^^^^^^^^^^^ value borrowed here after partial move\n |\n = note: partial move occurs because value has type `String`, which does not implement the `Copy` trait\nhelp: borrow this field in the pattern to avoid moving `option_name.0`\n |\n4 | Some(ref name) => println!("Name is {}", name),\n | ^^^\n</span></code></pre>\n<p>The issue here is that our pattern match on <code>option_name</code> moves the <code>Option<String></code> value into the match. We can then no longer use <code>option_name</code> after the <code>match</code>. But this is disappointing, because our usage of <code>option_name</code> and <code>name</code> inside the pattern match doesn't actually require moving the value at all! Instead, borrowing would be just fine.</p>\n<p>And that's exactly what the <code>note</code> from the compiler says. We can use the <code>ref</code> keyword in the <a href=\"https://doc.rust-lang.org/stable/reference/patterns.html#identifier-patterns\">identifier pattern</a> to change this behavior and, instead of <em>moving</em> the value, we'll borrow a reference to the value. Now we're free to reuse <code>option_name</code> after the <code>match</code>. 
That version of the code looks like:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> option_name: </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">> = </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Alice</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">to_owned</span><span style=\"color:#657b83;\">());\n </span><span style=\"color:#859900;\">match</span><span style=\"color:#657b83;\"> option_name {\n </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#586e75;\">ref</span><span style=\"color:#657b83;\"> name) </span><span style=\"color:#859900;\">=> println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Name is </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, name),\n </span><span style=\"color:#859900;\">None => println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">No name provided</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">),\n }\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, option_name);\n}\n</span></code></pre>\n<p>For the curious, you can <a 
href=\"https://doc.rust-lang.org/std/keyword.ref.html\">read more about the <code>ref</code> keyword</a>.</p>\n<h2 id=\"more-idiomatic\">More idiomatic</h2>\n<p>While this is <em>working</em> code, in my opinion and experience, it's not idiomatic. It's far more common to put the borrow on <code>option_name</code>, like so:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> option_name: </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">> = </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Alice</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">to_owned</span><span style=\"color:#657b83;\">());\n </span><span style=\"color:#859900;\">match &</span><span style=\"color:#657b83;\">option_name {\n </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(name) </span><span style=\"color:#859900;\">=> println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Name is </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, name),\n </span><span style=\"color:#859900;\">None => println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">No name provided</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">),\n }\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span 
style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, option_name);\n}\n</span></code></pre>\n<p>I like this version more, since it's blatantly obvious that we have no intention of moving <code>option_name</code> in the pattern match. Now <code>name</code> still remains as a reference, <code>println!</code> can use it as a reference, and everything is fine.</p>\n<p>The fact that this code works, however, is a specifically added feature of the language. Before <a href=\"https://rust-lang.github.io/rfcs/2005-match-ergonomics.html\">RFC 2005 "match ergonomics" landed in 2016</a>, the code above would have failed. That's because we tried to match the <code>Some</code> constructor against a <em>reference</em> to an <code>Option</code>, and those types don't match up. To borrow the RFC's terminology, getting that code to work would require "a bit of a dance":</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> option_name: </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">> = </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Alice</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">to_owned</span><span style=\"color:#657b83;\">());\n </span><span style=\"color:#859900;\">match &</span><span style=\"color:#657b83;\">option_name {\n </span><span style=\"color:#859900;\">&Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#586e75;\">ref</span><span style=\"color:#657b83;\"> name) </span><span style=\"color:#859900;\">=> 
println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Name is </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, name),\n </span><span style=\"color:#859900;\">&None => println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">No name provided</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">),\n }\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, option_name);\n}\n</span></code></pre>\n<p>Now all of the types really line up explicitly:</p>\n<ul>\n<li>We have an <code>&Option<String></code></li>\n<li>We can therefore match on a <code>&Some</code> variant or a <code>&None</code> variant</li>\n<li>In the <code>&Some</code> variant, we need to make sure we borrow the inner value, so we add a <code>ref</code> keyword</li>\n</ul>\n<p>Fortunately, with RFC 2005 in place, this extra noise isn't needed, and we can simplify our pattern match as above. The Rust language is better for this change, and the masses can rejoice.</p>\n<h2 id=\"introducing-as-ref\">Introducing as_ref</h2>\n<p>But what if we didn't have RFC 2005? Would we be required to use the awkward syntax above forever? Thanks to a helper method, no. The problem in our code is that <code>&option_name</code> is a reference to an <code>Option<String></code>. And we want to pattern match on the <code>Some</code> and <code>None</code> constructors, and capture a <code>&String</code> instead of a <code>String</code> (avoiding the move). RFC 2005 implements that as a direct language feature. 
But there's also a method on <code>Option</code> that does just this: <code>as_ref</code>.</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">impl</span><span style=\"color:#657b83;\"><T> </span><span style=\"color:#b58900;\">Option</span><span style=\"color:#657b83;\"><T> {\n </span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">const fn </span><span style=\"color:#b58900;\">as_ref</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">) -> </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">T> {\n </span><span style=\"color:#859900;\">match </span><span style=\"color:#657b83;\">*</span><span style=\"color:#d33682;\">self </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#586e75;\">ref</span><span style=\"color:#657b83;\"> x) </span><span style=\"color:#859900;\">=> Some</span><span style=\"color:#657b83;\">(x),\n </span><span style=\"color:#859900;\">None => None</span><span style=\"color:#657b83;\">,\n }\n }\n}\n</span></code></pre>\n<p>This is another way of avoiding the "dance," by capturing it in the method definition itself. But thankfully, there's a great language ergonomics feature that captures this pattern, and automatically applies this rule for us. Meaning that <code>as_ref</code> isn't really necessary any more... right?</p>\n<h2 id=\"side-rant-ergonomics-in-rust\">Side rant: ergonomics in Rust</h2>\n<p>I absolutely love the ergonomics features of Rust. There is no "but" in my love for RFC 2005. There is, however, a concern around learning and teaching a language with these kinds of ergonomics. These kinds of features work 99% of the time. 
But when they fail, as we're about to see, it can come as a large shock.</p>\n<p>I'm guessing most Rustaceans, at least those that learned the language after 2016, never considered the fact that there was something weird about being able to pattern match a <code>Some</code> from an <code>&Option<String></code> value. It feels natural. It <em>is</em> natural. But because you were never forced to confront this while learning the language, at some point in the distant future you'll crash into a wall when this ergonomic feature doesn't kick in.</p>\n<p>I kind of wish there was a <code>--no-ergonomics</code> flag that we could turn on when learning the language to force us to confront all of these details. But there isn't. I'm hoping blog posts like this help out. Anyway, </rant>.</p>\n<h2 id=\"when-rfc-2005-fails\">When RFC 2005 fails</h2>\n<p>We can fairly easily create a contrived example of match ergonomics failing to solve our problem. Let's "improve" our program above by factoring out the greet logic to its own helper function:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">try_greet</span><span style=\"color:#657b83;\">(</span><span style=\"color:#268bd2;\">option_name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">&String</span><span style=\"color:#657b83;\">>) {\n </span><span style=\"color:#859900;\">match</span><span style=\"color:#657b83;\"> option_name {\n </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(name) </span><span style=\"color:#859900;\">=> println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Name is </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, name),\n </span><span 
style=\"color:#859900;\">None => println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">No name provided</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">),\n }\n}\n\n</span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> option_name: </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">> = </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Alice</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">to_owned</span><span style=\"color:#657b83;\">());\n </span><span style=\"color:#859900;\">try_greet</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">option_name);\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, option_name);\n}\n</span></code></pre>\n<p>This code won't compile:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">error[E0308]: mismatched types\n --> src\\main.rs:10:15\n |\n10 | try_greet(&option_name);\n | ^^^^^^^^^^^^\n | |\n | expected enum `Option`, found `&Option<String>`\n | help: you can convert from `&Option<T>` to `Option<&T>` using `.as_ref()`: `&option_name.as_ref()`\n |\n = note: expected enum `Option<&String>`\n found reference `&Option<String>`\n</span></code></pre>\n<p>Now we've bypassed any ability to use match ergonomics at 
the call site. With what we know about <code>as_ref</code>, it's easy enough to fix this. But, at least in my experience, the first time someone runs into this kind of error, it's a bit surprising, since most of us have never previously thought about the distinction between <code>Option<&T></code> and <code>&Option<T></code>.</p>\n<p>These kinds of errors tend to pop up when combining other helper functions, such as <code>map</code>, which circumvent the need for explicit pattern matching.</p>\n<p>As an aside, you could solve this compile error pretty easily, without resorting to <code>as_ref</code>. Instead, you could change the type signature of <code>try_greet</code> to take a <code>&Option<String></code> instead of an <code>Option<&String></code>, and then allow the match ergonomics to kick in within the body of <code>try_greet</code>. One reason not to do this is that, as mentioned, this was all a contrived example to demonstrate a failure. But the other reason is more important: neither <code>&Option<String></code> nor <code>Option<&String></code> is a good argument type. Let's explore that next.</p>\n<h2 id=\"when-as-ref-fails\">When as_ref fails</h2>\n<p>We're taught pretty early in our Rust careers that, when receiving an argument to a function, we should prefer taking references to slices instead of references to owned objects. 
In other words:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">greet_good</span><span style=\"color:#657b83;\">(</span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#268bd2;\">str</span><span style=\"color:#657b83;\">) {\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Name is </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, name);\n}\n\n</span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">greet_bad</span><span style=\"color:#657b83;\">(</span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">String) {\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Name is </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, name);\n}\n</span></code></pre>\n<p>And in fact, if you pass this code by <code>clippy</code>, it will tell you to change the signature of <code>greet_bad</code>. The <a href=\"https://rust-lang.github.io/rust-clippy/master/index.html#ptr_arg\">clippy lint description</a> provides a great explanation of this, but suffice it to say that <code>greet_good</code> is more general in what it accepts than <code>greet_bad</code>.</p>\n<p>The same logic applies to <code>try_greet</code>. Why should we accept <code>Option<&String></code> instead of <code>Option<&str></code>? And interestingly, clippy doesn't complain in this case like it did in <code>greet_bad</code>. 
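</p>\n<p>To make that generality concrete, here is a sketch of which calls type-check, reusing the two functions above:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>let name = String::from("Alice");\ngreet_good(&name); // works: deref coercion turns &String into &str\ngreet_good("Bob"); // works: a string literal is already a &str\ngreet_bad(&name); // works, but this is the only shape greet_bad accepts\n// greet_bad("Bob"); // fails to compile: expected &String, found &str\n</code></pre>\n<p>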
To see why, let's change our signature like so and see what happens:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">try_greet</span><span style=\"color:#657b83;\">(</span><span style=\"color:#268bd2;\">option_name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">&</span><span style=\"color:#268bd2;\">str</span><span style=\"color:#657b83;\">>) {\n </span><span style=\"color:#859900;\">match</span><span style=\"color:#657b83;\"> option_name {\n </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(name) </span><span style=\"color:#859900;\">=> println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Name is </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, name),\n </span><span style=\"color:#859900;\">None => println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">No name provided</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">),\n }\n}\n\n</span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> option_name: </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">> = </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Alice</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">to_owned</span><span 
style=\"color:#657b83;\">());\n </span><span style=\"color:#859900;\">try_greet</span><span style=\"color:#657b83;\">(option_name.</span><span style=\"color:#859900;\">as_ref</span><span style=\"color:#657b83;\">());\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, option_name);\n}\n</span></code></pre>\n<p>This code no longer compiles:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">error[E0308]: mismatched types\n --> src\\main.rs:10:15\n |\n10 | try_greet(option_name.as_ref());\n | ^^^^^^^^^^^^^^^^^^^^ expected `str`, found struct `String`\n |\n = note: expected enum `Option<&str>`\n found enum `Option<&String>`\n</span></code></pre>\n<p>This is another example of ergonomics failing. You see, when you call a function with an argument of type <code>&String</code>, but the function expects a <code>&str</code>, <a href=\"https://doc.rust-lang.org/book/ch15-02-deref.html#implicit-deref-coercions-with-functions-and-methods\">deref coercion</a> kicks in and will perform a conversion for you. This is a piece of Rust ergonomics that we all rely on regularly, and every once in a while it completely fails to help us. This is one of those times. The compiler will not automatically convert an <code>Option<&String></code> into an <code>Option<&str></code>.</p>\n<p>(You can also read more about <a href=\"https://doc.rust-lang.org/nomicon/coercions.html\">coercions in the nomicon</a>.)</p>\n<p>Fortunately, there's another helper method on <code>Option</code> that does this for us. <code>as_deref</code> works just like <code>as_ref</code>, but additionally performs a <code>deref</code> method call on the value. 
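</p>\n<p>Before looking at how it works, here is the one-line fix it gives us at the call site (a sketch reusing the <code>try_greet</code> and <code>option_name</code> from the failing example above):</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>fn main() {\n    let option_name: Option<String> = Some("Alice".to_owned());\n    // as_deref combines the as_ref borrow with a deref of the inner\n    // String, going from &Option<String> to Option<&str> in one step\n    try_greet(option_name.as_deref());\n    println!("{:?}", option_name);\n}\n</code></pre>\n<p>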
Its implementation in <code>std</code> is interesting:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">impl</span><span style=\"color:#657b83;\"><T: Deref> </span><span style=\"color:#b58900;\">Option</span><span style=\"color:#657b83;\"><T> {\n </span><span style=\"color:#586e75;\">pub </span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">as_deref</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#268bd2;\">self</span><span style=\"color:#657b83;\">) -> </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">&</span><span style=\"color:#268bd2;\">T::</span><span style=\"color:#657b83;\">Target> {\n </span><span style=\"color:#d33682;\">self</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">as_ref</span><span style=\"color:#657b83;\">().</span><span style=\"color:#859900;\">map</span><span style=\"color:#657b83;\">(|</span><span style=\"color:#268bd2;\">t</span><span style=\"color:#657b83;\">| t.</span><span style=\"color:#859900;\">deref</span><span style=\"color:#657b83;\">())\n }\n}\n</span></code></pre>\n<p>But we can also implement it more explicitly to see the behavior spelled out:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">std::ops::Deref;\n\n</span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">try_greet</span><span style=\"color:#657b83;\">(</span><span style=\"color:#268bd2;\">option_name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">&</span><span style=\"color:#268bd2;\">str</span><span style=\"color:#657b83;\">>) {\n </span><span style=\"color:#859900;\">match</span><span style=\"color:#657b83;\"> option_name {\n </span><span 
style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(name) </span><span style=\"color:#859900;\">=> println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Name is </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, name),\n </span><span style=\"color:#859900;\">None => println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">No name provided</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">),\n }\n}\n\n</span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">my_as_deref</span><span style=\"color:#657b83;\"><T: Deref>(</span><span style=\"color:#268bd2;\">x</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&Option</span><span style=\"color:#657b83;\"><T>) -> </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">&</span><span style=\"color:#268bd2;\">T::</span><span style=\"color:#657b83;\">Target> {\n </span><span style=\"color:#859900;\">match </span><span style=\"color:#657b83;\">*x {\n </span><span style=\"color:#859900;\">None => None</span><span style=\"color:#657b83;\">,\n </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#586e75;\">ref</span><span style=\"color:#657b83;\"> t) </span><span style=\"color:#859900;\">=> Some</span><span style=\"color:#657b83;\">(t.</span><span style=\"color:#859900;\">deref</span><span style=\"color:#657b83;\">())\n }\n}\n\n</span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> option_name: </span><span style=\"color:#859900;\">Option</span><span 
style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">> = </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Alice</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">to_owned</span><span style=\"color:#657b83;\">());\n </span><span style=\"color:#859900;\">try_greet</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">my_as_deref</span><span style=\"color:#657b83;\">(</span><span style=\"color:#859900;\">&</span><span style=\"color:#657b83;\">option_name));\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, option_name);\n}\n</span></code></pre>\n<p>And to bring this back to something closer to real world code, here's a case where combining <code>as_deref</code> and <code>map</code> leads to much cleaner code than you'd otherwise have:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">greet</span><span style=\"color:#657b83;\">(</span><span style=\"color:#268bd2;\">name</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#268bd2;\">str</span><span style=\"color:#657b83;\">) {\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Name is </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, name);\n}\n\n</span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n 
</span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> option_name: </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">> = </span><span style=\"color:#859900;\">Some</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">Alice</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">.</span><span style=\"color:#859900;\">to_owned</span><span style=\"color:#657b83;\">());\n option_name.</span><span style=\"color:#859900;\">as_deref</span><span style=\"color:#657b83;\">().</span><span style=\"color:#859900;\">map</span><span style=\"color:#657b83;\">(greet);\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, option_name);\n}\n</span></code></pre><h2 id=\"real-ish-life-example\">Real-ish life example</h2>\n<p>Like most of my blog posts, this one was inspired by some real world code. To simplify the concept down a bit, I was parsing a config file, and ended up with an <code>Option<String></code>. I needed some code that would either provide the value from the config, or default to a static string in the source code. Without <code>as_deref</code>, I could have used <code>STATIC_STRING_VALUE.to_string()</code> to get types to line up, but that would have been ugly and inefficient. 
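</p>\n<p>For reference, that uglier version would have looked something like this (a sketch; <code>DEFAULT_VALUE</code> is the static string constant from the code below):</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code>// unwrap_or evaluates its argument eagerly, so this allocates a fresh\n// String for the default even when the config already provides a value\nlet value = config.some_value.unwrap_or(DEFAULT_VALUE.to_string());\n</code></pre>\n<p>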
Here's a somewhat intact representation of that code:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">use </span><span style=\"color:#657b83;\">serde::Deserialize;\n\n#[</span><span style=\"color:#268bd2;\">derive</span><span style=\"color:#657b83;\">(Deserialize)]\n</span><span style=\"color:#268bd2;\">struct </span><span style=\"color:#b58900;\">Config </span><span style=\"color:#657b83;\">{\n </span><span style=\"color:#268bd2;\">some_value</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">Option</span><span style=\"color:#657b83;\"><</span><span style=\"color:#859900;\">String</span><span style=\"color:#657b83;\">>\n}\n\n</span><span style=\"color:#268bd2;\">const </span><span style=\"color:#cb4b16;\">DEFAULT_VALUE</span><span style=\"color:#657b83;\">: </span><span style=\"color:#859900;\">&</span><span style=\"color:#268bd2;\">str </span><span style=\"color:#657b83;\">= </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">my-default-value</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">;\n\n</span><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let </span><span style=\"color:#586e75;\">mut</span><span style=\"color:#657b83;\"> file = std::fs::File::open(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">config.yaml</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">).</span><span style=\"color:#859900;\">unwrap</span><span style=\"color:#657b83;\">();\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> config: Config = serde_yaml::from_reader(</span><span style=\"color:#859900;\">&</span><span style=\"color:#586e75;\">mut</span><span style=\"color:#657b83;\"> file).</span><span style=\"color:#859900;\">unwrap</span><span style=\"color:#657b83;\">();\n 
</span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> value = config.some_value.</span><span style=\"color:#859900;\">as_deref</span><span style=\"color:#657b83;\">().</span><span style=\"color:#859900;\">unwrap_or</span><span style=\"color:#657b83;\">(</span><span style=\"color:#cb4b16;\">DEFAULT_VALUE</span><span style=\"color:#657b83;\">);\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">value is </span><span style=\"color:#cb4b16;\">{}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, value);\n}\n</span></code></pre>\n<p>Want to learn more Rust with FP Complete? Check out these links:</p>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/training/\">Training courses</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/crash-course/\">Rust Crash Course</a></li>\n<li><a href=\"/tags/rust/\">Rust tagged articles</a></li>\n<li><a href=\"https://www.fpcomplete.com/rust/\">FP Complete Rust homepage</a></li>\n</ul>\n",
"permalink": "https://www.fpcomplete.com/blog/rust-asref-asderef/",
"slug": "rust-asref-asderef",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Rust's as_ref vs as_deref",
"description": "A short analysis of when to use the Option methods as_ref and as_deref",
"updated": null,
"date": "2021-07-05",
"year": 2021,
"month": 7,
"day": 5,
"taxonomies": {
"tags": [
"rust"
],
"categories": [
"functional programming",
"rust"
]
},
"extra": {
"author": "Michael Snoyman",
"blogimage": "/images/blog-listing/rust.png",
"author_avatar": "/images/leaders/michael-snoyman.png",
"image": "images/blog/thumbs/rust-asref-asderef.png"
},
"path": "blog/rust-asref-asderef/",
"components": [
"blog",
"rust-asref-asderef"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "more-idiomatic",
"permalink": "https://www.fpcomplete.com/blog/rust-asref-asderef/#more-idiomatic",
"title": "More idiomatic",
"children": []
},
{
"level": 2,
"id": "introducing-as-ref",
"permalink": "https://www.fpcomplete.com/blog/rust-asref-asderef/#introducing-as-ref",
"title": "Introducing as_ref",
"children": []
},
{
"level": 2,
"id": "side-rant-ergonomics-in-rust",
"permalink": "https://www.fpcomplete.com/blog/rust-asref-asderef/#side-rant-ergonomics-in-rust",
"title": "Side rant: ergonomics in Rust",
"children": []
},
{
"level": 2,
"id": "when-rfc-2005-fails",
"permalink": "https://www.fpcomplete.com/blog/rust-asref-asderef/#when-rfc-2005-fails",
"title": "When RFC 2005 fails",
"children": []
},
{
"level": 2,
"id": "when-as-ref-fails",
"permalink": "https://www.fpcomplete.com/blog/rust-asref-asderef/#when-as-ref-fails",
"title": "When as_ref fails",
"children": []
},
{
"level": 2,
"id": "real-ish-life-example",
"permalink": "https://www.fpcomplete.com/blog/rust-asref-asderef/#real-ish-life-example",
"title": "Real-ish life example",
"children": []
}
],
"word_count": 1822,
"reading_time": 10,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/intermediate-training-courses.md",
"content": "<p>I'm happy to announce that over the next few months, FP Complete will be offering intermediate training courses on both Haskell and Rust. This is a follow-up to our previous beginner courses on both languages. I'm excited to get to teach both of these courses.</p>\n<p>More details below, but cutting to the chase: if you'd like to sign up, or just get more information on these courses, please <a href=\"mailto:training@fpcomplete.com\">email training@fpcomplete.com</a>.</p>\n<h2 id=\"overall-structure\">Overall structure</h2>\n<p>Each course consists of:</p>\n<ul>\n<li>Four sessions, held on Sundays at 1500 UTC (8am Pacific time, 5pm Central European time)</li>\n<li>Each session is three hours, with a ten-minute break</li>\n<li>Slides, exercises, and recordings will be provided to all participants</li>\n<li>A private Discord chat room is available for interacting with other students and the teacher, and remains open after the course finishes</li>\n</ul>\n<h2 id=\"dates\">Dates</h2>\n<p>We'll be holding these courses on the following dates:</p>\n<ul>\n<li>Haskell\n<ul>\n<li>June 13</li>\n<li>June 20</li>\n<li>July 11</li>\n<li>July 25</li>\n</ul>\n</li>\n<li>Rust\n<ul>\n<li>August 8</li>\n<li>August 15</li>\n<li>August 22</li>\n<li>August 29</li>\n</ul>\n</li>\n</ul>\n<h2 id=\"cost-and-signup\">Cost and signup</h2>\n<p>Each course costs $150 per participant. Please register and arrange payment (via PayPal or Venmo) by contacting <a href=\"mailto:training@fpcomplete.com\">training@fpcomplete.com</a>.</p>\n<h2 id=\"topics-covered\">Topics covered</h2>\n<p>Before the course begins, and throughout the course, I'll ask participants for feedback on additional topics to cover, and tune the course appropriately. 
Below is the basis of the course which we'll focus on:</p>\n<ul>\n<li>Haskell (based largely on our <a href=\"https://www.fpcomplete.com/haskell/syllabus/\">Applied Haskell syllabus</a>)\n<ul>\n<li>Data structures (<code>bytestring</code>, <code>text</code>, <code>containers</code> and <code>vector</code>)</li>\n<li>Evaluation order</li>\n<li>Mutable variables</li>\n<li>Concurrent programming (<code>async</code> and <code>stm</code>)</li>\n<li>Exception safety</li>\n<li>Testing</li>\n<li>Data serialization</li>\n<li>Web clients and servers</li>\n<li>Streaming data</li>\n</ul>\n</li>\n<li>Rust\n<ul>\n<li>Error handling</li>\n<li>Closures</li>\n<li>Multithreaded programming</li>\n<li><code>async</code>/<code>.await</code> and Tokio</li>\n<li>Basics of <code>unsafe</code></li>\n<li>Macros</li>\n<li>Testing and benchmarks</li>\n</ul>\n</li>\n</ul>\n<h2 id=\"want-to-learn-more\">Want to learn more?</h2>\n<p>Not sure if this is right for you? Feel free to <a href=\"https://twitter.com/snoyberg\">hit me up on Twitter</a> for more information, or <a href=\"mailto:training@fpcomplete.com\">contact training@fpcomplete.com</a>.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/intermediate-training-courses/",
"slug": "intermediate-training-courses",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Intermediate Training Courses - Haskell and Rust",
"description": "Announcing two more training courses, covering intermediate Haskell and Rust topics. Sign up today!",
"updated": null,
"date": "2021-06-03",
"year": 2021,
"month": 6,
"day": 3,
"taxonomies": {
"categories": [
"functional programming"
],
"tags": [
"haskell",
"rust"
]
},
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"blogimage": "/images/blog-listing/functional.png",
"image": "images/blog/thumbs/intermediate-training-courses.png"
},
"path": "blog/intermediate-training-courses/",
"components": [
"blog",
"intermediate-training-courses"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "overall-structure",
"permalink": "https://www.fpcomplete.com/blog/intermediate-training-courses/#overall-structure",
"title": "Overall structure",
"children": []
},
{
"level": 2,
"id": "dates",
"permalink": "https://www.fpcomplete.com/blog/intermediate-training-courses/#dates",
"title": "Dates",
"children": []
},
{
"level": 2,
"id": "cost-and-signup",
"permalink": "https://www.fpcomplete.com/blog/intermediate-training-courses/#cost-and-signup",
"title": "Cost and signup",
"children": []
},
{
"level": 2,
"id": "topics-covered",
"permalink": "https://www.fpcomplete.com/blog/intermediate-training-courses/#topics-covered",
"title": "Topics covered",
"children": []
},
{
"level": 2,
"id": "want-to-learn-more",
"permalink": "https://www.fpcomplete.com/blog/intermediate-training-courses/#want-to-learn-more",
"title": "Want to learn more?",
"children": []
}
],
"word_count": 312,
"reading_time": 2,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/tying-the-knot-haskell.md",
"content": "<p>This post has nothing to do with marriage. Tying the knot is, in my opinion at least, a relatively obscure technique you can use in Haskell to address certain corner cases. I've used it myself only a handful of times, one of which I'll reference below. I preface it like this to hopefully make clear: tying the knot is a fine technique to use in certain cases, but don't consider it a general technique that you should need regularly. It's not nearly as generally useful as something like <a href=\"https://www.fpcomplete.com/haskell/library/stm/\">Software Transactional Memory</a>.</p>\n<p>That said, you're still interested in this technique, and are still reading this post. Great! Let's get started where all bad Haskell code starts: C++.</p>\n<h2 id=\"doubly-linked-lists\">Doubly linked lists</h2>\n<p>Typically I'd demonstrate imperative code in Rust, but <a href=\"https://rust-unofficial.github.io/too-many-lists/\">it's not a good idea for this case</a>. So we'll start off with a very simple doubly linked list implementation in C++. And by "very simple" I should probably say "very poorly written," since I'm out of practice.</p>\n<p><img src=\"/images/haskell/cpp-is-rusty.png\" alt=\"Rusty C++\" /></p>\n<p>Anyway, reading the entire code isn't necessary to get the point across. Let's look at some relevant bits. 
We define a node of the list like this, including a nullable pointer to the previous and next node in the list:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">template </span><span style=\"color:#657b83;\"><</span><span style=\"color:#268bd2;\">typename</span><span style=\"color:#657b83;\"> T> </span><span style=\"color:#268bd2;\">class </span><span style=\"color:#b58900;\">Node </span><span style=\"color:#657b83;\">{\n</span><span style=\"color:#859900;\">public</span><span style=\"color:#657b83;\">:\n </span><span style=\"color:#b58900;\">Node</span><span style=\"color:#657b83;\">(T </span><span style=\"color:#268bd2;\">value</span><span style=\"color:#657b83;\">) : </span><span style=\"color:#268bd2;\">value</span><span style=\"color:#657b83;\">(value), </span><span style=\"color:#268bd2;\">prev</span><span style=\"color:#657b83;\">(</span><span style=\"color:#b58900;\">NULL</span><span style=\"color:#657b83;\">), </span><span style=\"color:#268bd2;\">next</span><span style=\"color:#657b83;\">(</span><span style=\"color:#b58900;\">NULL</span><span style=\"color:#657b83;\">) {}\n Node </span><span style=\"color:#859900;\">*</span><span style=\"color:#657b83;\">prev;\n T value;\n Node </span><span style=\"color:#859900;\">*</span><span style=\"color:#657b83;\">next;\n};\n</span></code></pre>\n<p>When you add the first node to the list, you set the new node's previous and next values to <code>NULL</code>, and the list's first and last values to the new node. The more interesting case is when you already have something in the list. 
To add a new node to the back of the list, you need some code that looks like the following:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">node-></span><span style=\"color:#268bd2;\">prev </span><span style=\"color:#657b83;\">= </span><span style=\"color:#d33682;\">this</span><span style=\"color:#657b83;\">-></span><span style=\"color:#268bd2;\">last</span><span style=\"color:#657b83;\">;\n</span><span style=\"color:#d33682;\">this</span><span style=\"color:#657b83;\">-></span><span style=\"color:#268bd2;\">last</span><span style=\"color:#657b83;\">-></span><span style=\"color:#268bd2;\">next </span><span style=\"color:#657b83;\">= node;\n</span><span style=\"color:#d33682;\">this</span><span style=\"color:#657b83;\">-></span><span style=\"color:#268bd2;\">last </span><span style=\"color:#657b83;\">= node;\n</span></code></pre>\n<p>For those (like me) not fluent in C++, I'm making three mutations:</p>\n<ol>\n<li>Mutating the new node's <code>prev</code> member to point to the currently last node of the list.</li>\n<li>Mutating the currently last node's <code>next</code> member to point at the new node.</li>\n<li>Mutating the list itself so that its <code>last</code> member points to the new node.</li>\n</ol>\n<p>Point being in all of this: there's a lot of mutation going on in order to create a doubly linked list. Contrast that with singly linked lists in Haskell, which are immutable data structures and require no mutation at all.</p>\n<p>Anyway, I've written my annual quota of C++ at this point, so it's time to go back to Haskell.</p>\n<h2 id=\"riih-rewrite-it-in-haskell\">RIIH (Rewrite it in Haskell)</h2>\n<p>Using <code>IORef</code>s and lots of <code>IO</code> calls everywhere, it's possible to reproduce the C++ concept of a mutable doubly linked list in Haskell. Full code is <a href=\"https://gist.github.com/snoyberg/5de410aba87a4208b7c701e954c61d9d\">available in a Gist</a>, but let's step through the important bits. 
Our core data types look quite like the C++ version, but with <code>IORef</code> and <code>Maybe</code> sprinkled in for good measure:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">data </span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> a </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\">\n { prev </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">IORef</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Maybe</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> a))\n , value </span><span style=\"color:#859900;\">::</span><span style=\"color:#657b83;\"> a\n , next </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">IORef</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Maybe</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> a))\n }\n\n</span><span style=\"color:#859900;\">data </span><span style=\"color:#cb4b16;\">List</span><span style=\"color:#657b83;\"> a </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">List</span><span style=\"color:#657b83;\">\n { first </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">IORef</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Maybe</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> a))\n , last </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">IORef</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Maybe</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> a))\n 
}\n</span></code></pre>\n<p>And adding a new value to a non-empty list looks like this:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">node </span><span style=\"color:#859900;\"><- </span><span style=\"color:#cb4b16;\">Node </span><span style=\"color:#859900;\"><$></span><span style=\"color:#657b83;\"> newIORef (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> last') </span><span style=\"color:#859900;\"><</span><span style=\"color:#657b83;\">*</span><span style=\"color:#859900;\">></span><span style=\"color:#657b83;\"> pure value </span><span style=\"color:#859900;\"><</span><span style=\"color:#657b83;\">*</span><span style=\"color:#859900;\">></span><span style=\"color:#657b83;\"> newIORef </span><span style=\"color:#cb4b16;\">Nothing</span><span style=\"color:#657b83;\">\nwriteIORef (next last') (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> node)\nwriteIORef (last list) (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> node)\n</span></code></pre>\n<p>Notice that, like in the C++ code, we need to perform mutations on the existing node and the <code>last</code> member of the list.</p>\n<p>This certainly works, but it probably feels less than satisfying to a Haskeller:</p>\n<ul>\n<li>I don't love the idea of mutations all over the place.</li>\n<li>The code looks and feels ugly.</li>\n<li>I can't access the values of the list from pure code.</li>\n</ul>\n<p>So the challenge is: can we write a doubly linked list in Haskell in pure code?</p>\n<h2 id=\"defining-our-data\">Defining our data</h2>\n<p>I'll warn you in advance. 
Every single time I've written code that "ties the knot" in Haskell, I've gone through at least two stages:</p>\n<ol>\n<li>This doesn't make any sense, there's no way this is going to work, what exactly am I doing?</li>\n<li>Oh, it's done, how exactly did that work?</li>\n</ol>\n<p>It happened while writing the code below. You're likely to have the same feeling while reading this of "wait, what? I don't get it, huh?"</p>\n<p>Anyway, let's start off by defining our data types. We didn't like the fact that we had <code>IORef</code> all over the place. So let's just get rid of it!</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">data </span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> a </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\">\n { prev </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Maybe</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> a)\n , value </span><span style=\"color:#859900;\">::</span><span style=\"color:#657b83;\"> a\n , next </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Maybe</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> a)\n }\n\n</span><span style=\"color:#859900;\">data </span><span style=\"color:#cb4b16;\">List</span><span style=\"color:#657b83;\"> a </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">List</span><span style=\"color:#657b83;\">\n { first </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Maybe</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> a)\n , last </span><span style=\"color:#859900;\">:: </span><span style=\"color:#cb4b16;\">Maybe</span><span 
style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> a)\n }\n</span></code></pre>\n<p>We still have <code>Maybe</code> to indicate the presence or absence of nodes before or after our own. That translation is pretty easy. The problem is going to arise when we try to build such a structure, since we've seen that we need mutation to make it happen. We'll need to rethink our API to get going.</p>\n<h2 id=\"non-mutable-api\">Non-mutable API</h2>\n<p>The first change we need to consider is getting rid of the <em>concept</em> of mutation in the API. Previously, we had functions like <code>pushBack</code> and <code>popBack</code>, which were inherently mutating. Instead, we should be thinking in terms of immutable data structures and APIs.</p>\n<p>We already know all about singly linked lists, the venerable <code>[]</code> data type. Let's see if we can build a function that will let us construct a doubly linked list from a singly linked list. In other words:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#b58900;\">buildList </span><span style=\"color:#859900;\">::</span><span style=\"color:#657b83;\"> [</span><span style=\"color:#268bd2;\">a</span><span style=\"color:#657b83;\">] </span><span style=\"color:#859900;\">-> </span><span style=\"color:#268bd2;\">List a\n</span></code></pre>\n<p>Let's knock out two easy cases first. An empty list should end up with no nodes at all. That clause would be:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">buildList </span><span style=\"color:#b58900;\">[] </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">List Nothing Nothing\n</span></code></pre>\n<p>The next easy case is a single value in the list. This ends up with a single node with no pointers to other nodes, and a <code>first</code> and <code>last</code> field that both point to that one node. 
Again, fairly easy, no knot tying required:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">buildList [x] </span><span style=\"color:#859900;\">=\n let</span><span style=\"color:#657b83;\"> node </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">Node Nothing</span><span style=\"color:#657b83;\"> x </span><span style=\"color:#cb4b16;\">Nothing\n </span><span style=\"color:#859900;\">in </span><span style=\"color:#cb4b16;\">List</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> node) (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> node)\n</span></code></pre>\n<p>OK, that's too easy. Let's kick it up a notch.</p>\n<h2 id=\"two-element-list\">Two-element list</h2>\n<p>To get into things a bit more gradually, let's handle the two element case next, instead of the general case of "2 or more", which is a bit more complicated. We need to:</p>\n<ol>\n<li>Construct a first node that points at the last node</li>\n<li>Construct a last node that points at the first node</li>\n<li>Construct a list that points at both the first and last nodes</li>\n</ol>\n<p>Step (3) isn't too hard. Step (2) doesn't sound too bad either, since presumably the first node already exists at that point. The problem appears to be step (1). How can we construct a first node that points at the second node, when we haven't constructed the second node yet? 
Let me show you how:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">buildList [x, y] </span><span style=\"color:#859900;\">=\n let</span><span style=\"color:#657b83;\"> firstNode </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">Node Nothing</span><span style=\"color:#657b83;\"> x (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> lastNode)\n lastNode </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> firstNode) y </span><span style=\"color:#cb4b16;\">Nothing\n </span><span style=\"color:#859900;\">in </span><span style=\"color:#cb4b16;\">List</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> firstNode) (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> lastNode)\n</span></code></pre>\n<p>If that code doesn't confuse or bother you, you've probably already learned about tying the knot. This seems to make no sense. I'm referring to <code>lastNode</code> while constructing <code>firstNode</code>, and referring to <code>firstNode</code> while constructing <code>lastNode</code>. This kind of makes me think of an <a href=\"https://en.wikipedia.org/wiki/Ouroboros\">Ouroboros</a>, or a snake eating its own tail:</p>\n<p><img src=\"/images/haskell/ouroboros.jpeg\" alt=\"Ouroboros\" /></p>\n<p>In a normal programming language, this concept wouldn't make sense. We'd need to define <code>firstNode</code> first with a null pointer for <code>next</code>. Then we could define <code>lastNode</code>. And then we could mutate <code>firstNode</code>'s <code>next</code> to point to the last node. But not in Haskell! Why? Because of <em>laziness</em>. 
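Here's that two-element case as a standalone program you can run. The types follow the post; the `walk` traversal and the `last'` field name (dodging Prelude's `last`) are my own additions for demonstration:

```haskell
-- Self-contained sketch of the two-element knot. Types follow the post;
-- `walk` and the field name `last'` (avoiding Prelude's `last`) are
-- additions for demonstration purposes.
data Node a = Node
  { prev  :: Maybe (Node a)
  , value :: a
  , next  :: Maybe (Node a)
  }

data List a = List
  { first :: Maybe (Node a)
  , last' :: Maybe (Node a)
  }

-- The knot: firstNode mentions lastNode, and lastNode mentions firstNode.
twoElem :: a -> a -> List a
twoElem x y =
  let firstNode = Node Nothing x (Just lastNode)
      lastNode  = Node (Just firstNode) y Nothing
  in List (Just firstNode) (Just lastNode)

-- Follow the `next` pointers from a starting node.
walk :: Maybe (Node a) -> [a]
walk Nothing  = []
walk (Just n) = value n : walk (next n)

main :: IO ()
main = do
  let l = twoElem 1 (2 :: Int)
  print (walk (first l))                 -- [1,2]
  -- The last node's prev really points back at the first node:
  print (fmap value (prev =<< last' l))  -- Just 1
```

Note that walking only the forward pointers terminates fine; what you must never do with a knot-tied structure is try to print or force the whole cyclic thing at once.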
Thanks to laziness, both <code>firstNode</code> and <code>lastNode</code> are initially created as thunks. Their contents need not exist yet. But thankfully, we can still create pointers to these not-fully-evaluated values.</p>\n<p>With those pointers available, we can then define an expression for each of these that leverages the pointer of the other. And we have now, successfully, tied the knot.</p>\n<h2 id=\"expanding-beyond-two\">Expanding beyond two</h2>\n<p>Expanding beyond two elements follows the exact same pattern, but (at least in my opinion) is significantly more complicated. I implemented it by writing a helper function, <code>buildNodes</code>, which (somewhat spookily) takes the previous node in the list as a parameter, and returns back the next node and the final node in the list. Let's see all of this in action:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">buildList (x</span><span style=\"color:#859900;\">:</span><span style=\"color:#657b83;\">y</span><span style=\"color:#859900;\">:</span><span style=\"color:#657b83;\">ys) </span><span style=\"color:#859900;\">=\n let</span><span style=\"color:#657b83;\"> firstNode </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">Node Nothing</span><span style=\"color:#657b83;\"> x (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> secondNode)\n (secondNode, lastNode) </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> buildNodes firstNode y ys\n </span><span style=\"color:#859900;\">in </span><span style=\"color:#cb4b16;\">List</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> firstNode) (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> lastNode)\n\n</span><span style=\"color:#93a1a1;\">-- | Takes the previous node in the list, the current value, and all following\n-- values. 
Returns the current node as well as the final node constructed in\n-- this list.\n</span><span style=\"color:#b58900;\">buildNodes </span><span style=\"color:#859900;\">:: </span><span style=\"color:#268bd2;\">Node a </span><span style=\"color:#859900;\">-> </span><span style=\"color:#268bd2;\">a </span><span style=\"color:#859900;\">-></span><span style=\"color:#657b83;\"> [</span><span style=\"color:#268bd2;\">a</span><span style=\"color:#657b83;\">] </span><span style=\"color:#859900;\">-></span><span style=\"color:#657b83;\"> (</span><span style=\"color:#268bd2;\">Node a</span><span style=\"color:#657b83;\">, </span><span style=\"color:#268bd2;\">Node a</span><span style=\"color:#657b83;\">)\nbuildNodes prevNode value </span><span style=\"color:#b58900;\">[] </span><span style=\"color:#859900;\">=\n let</span><span style=\"color:#657b83;\"> node </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> prevNode) value </span><span style=\"color:#cb4b16;\">Nothing\n </span><span style=\"color:#859900;\">in</span><span style=\"color:#657b83;\"> (node, node)\nbuildNodes prevNode value (x</span><span style=\"color:#859900;\">:</span><span style=\"color:#657b83;\">xs) </span><span style=\"color:#859900;\">=\n let</span><span style=\"color:#657b83;\"> node </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> prevNode) value (</span><span style=\"color:#cb4b16;\">Just</span><span style=\"color:#657b83;\"> nextNode)\n (nextNode, lastNode) </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> buildNodes node x xs\n </span><span style=\"color:#859900;\">in</span><span style=\"color:#657b83;\"> (node, lastNode)\n</span></code></pre>\n<p>Notice that in 
<code>buildList</code>, we're using the same kind of trick to use <code>secondNode</code> to construct <code>firstNode</code>, and <code>firstNode</code> is a parameter passed to <code>buildNodes</code> that is used to construct <code>secondNode</code>.</p>\n<p>Within <code>buildNodes</code>, we have two clauses. The first clause is one of those simpler cases: we've only got one value left, so we create a terminal node that points back at the previous node. No knot tying required. The second clause, however, once again uses the knot tying technique, together with a recursive call to <code>buildNodes</code> to build up the rest of the nodes in the list.</p>\n<p>The full code is <a href=\"https://gist.github.com/snoyberg/876ad1ad0f106c80239bf098a6965a53\">available as a Gist</a>. I recommend reading through the code a few times until you feel comfortable with it. When you have a good grasp on what's going on, try implementing it from scratch yourself.</p>\n<h2 id=\"limitation\">Limitation</h2>\n<p>It's important to understand a limitation of this approach versus both mutable doubly linked lists and singly linked lists. With singly linked lists, I can easily construct a new singly linked list by <code>cons</code>ing a new value to the front. Or I can drop a few values from the front and cons some new values in front of that new tail. In other words, I can construct new values based on old values as much as I want.</p>\n<p>Similarly, with mutable doubly linked lists, I'm free to mutate at will, changing my existing data structure. This behaves slightly differently from constructing new singly linked lists, and falls into the same category of mutable-vs-immutable data structures that Haskellers know and love so well. 
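One related helper worth sketching here: converting a knot-tied <code>List</code> back to a plain <code>[a]</code>. The append workaround below relies on such a <code>toSinglyLinkedList</code> function without defining it; a minimal sketch, assuming the pure types from earlier (with <code>last</code> renamed to avoid the Prelude clash), looks like this:

```haskell
-- A plausible definition of the `toSinglyLinkedList` conversion (the post
-- leaves it undefined). Types follow the post, with `last` renamed to
-- `last'` to avoid clashing with Prelude's `last`.
data Node a = Node
  { prev  :: Maybe (Node a)
  , value :: a
  , next  :: Maybe (Node a)
  }

data List a = List
  { first :: Maybe (Node a)
  , last' :: Maybe (Node a)
  }

-- Walk the `next` pointers from the first node, collecting values.
toSinglyLinkedList :: List a -> [a]
toSinglyLinkedList list = go (first list)
  where
    go Nothing  = []
    go (Just n) = value n : go (next n)

main :: IO ()
main = do
  -- Tie a small two-node knot by hand and convert it back.
  let a = Node Nothing 1 (Just b)
      b = Node (Just a) (2 :: Int) Nothing
  print (toSinglyLinkedList (List (Just a) (Just b)))  -- [1,2]
```

Because the traversal only ever follows <code>next</code>, it never revisits the cycle's back edges, so it terminates even though the structure is circular.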
If you want a refresher, check out:</p>\n<ul>\n<li><a href=\"https://www.fpcomplete.com/haskell/tutorial/data-structures/\">Data structures</a></li>\n<li><a href=\"https://www.fpcomplete.com/haskell/library/vector/\">vector</a></li>\n<li><a href=\"https://www.fpcomplete.com/haskell/tutorial/mutable-variables/\">Mutable variables</a></li>\n</ul>\n<p>None of these apply with a tie-the-knot approach to data structures. Once you construct this doubly linked list, it is locked in place. If you try to prepend a new node to the front of this list, you'll find that you cannot update the <code>prev</code> pointer in the old first node.</p>\n<p>There is a workaround. You can construct a brand new doubly linked list using the values in the original. A common way to do this would be to provide a conversion function back from your <code>List a</code> to a <code>[a]</code>. Then you could append a value to a doubly linked list with some code like:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">let</span><span style=\"color:#657b83;\"> oldList </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> buildList [</span><span style=\"color:#6c71c4;\">2</span><span style=\"color:#859900;\">..</span><span style=\"color:#6c71c4;\">10</span><span style=\"color:#657b83;\">]\n newList </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> buildList </span><span style=\"color:#859900;\">$ </span><span style=\"color:#6c71c4;\">1 </span><span style=\"color:#859900;\">:</span><span style=\"color:#657b83;\"> toSinglyLinkedList oldList\n</span></code></pre>\n<p>However, unlike singly linked lists, we lose all possibilities of data sharing, at least at the structure level (the values themselves can still be shared).</p>\n<h2 id=\"why-tie-the-knot\">Why tie the knot?</h2>\n<p>That's a cool trick, but is it actually useful? In some situations, absolutely! 
One example I've worked on is in the <a href=\"https://www.stackage.org/package/xml-conduit\">xml-conduit</a> package. Some people may be familiar with XPath, a pretty nice standard for XML traversals. It allows you to say things like "find the first <code>ul</code> tag in the document, then find the <code>p</code> tag before that, and tell me its <code>id</code> attribute."</p>\n<p>A simple implementation of an XML data type in Haskell may look like this:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#859900;\">data </span><span style=\"color:#cb4b16;\">Element </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">Element Name</span><span style=\"color:#657b83;\"> (</span><span style=\"color:#cb4b16;\">Map Name AttributeValue</span><span style=\"color:#657b83;\">) [</span><span style=\"color:#cb4b16;\">Node</span><span style=\"color:#657b83;\">]\n</span><span style=\"color:#859900;\">data </span><span style=\"color:#cb4b16;\">Node\n </span><span style=\"color:#859900;\">= </span><span style=\"color:#cb4b16;\">NodeElement Element\n </span><span style=\"color:#859900;\">| </span><span style=\"color:#cb4b16;\">NodeContent Text\n</span></code></pre>\n<p>Using this kind of data structure, it would be pretty difficult to implement the traversal that I just described. You would need to write logic to keep track of where you are in the document, and then implement logic to say "OK, given that I was in the third child of the second child of the sixth child, what are all of the nodes that came before me?"</p>\n<p>Instead, in <code>xml-conduit</code>, we use knot tying to create a data structure called a <a href=\"https://www.stackage.org/haddock/nightly-2021-05-23/xml-conduit-1.9.1.1/Text-XML-Cursor.html#t:Cursor\"><code>Cursor</code></a>. A <code>Cursor</code> not only keeps track of its own contents, but also contains a pointer to its parent cursor, its predecessor cursors, its following cursors, and its child cursors. 
You can then traverse the tree with ease. The traversal above would be implemented as:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">#</span><span style=\"color:#859900;\">!/</span><span style=\"color:#657b83;\">usr</span><span style=\"color:#859900;\">/</span><span style=\"color:#657b83;\">bin</span><span style=\"color:#859900;\">/</span><span style=\"color:#657b83;\">env stack\n</span><span style=\"color:#93a1a1;\">-- stack --resolver lts-17.12 script\n</span><span style=\"color:#b58900;\">{-# </span><span style=\"color:#859900;\">LANGUAGE</span><span style=\"color:#b58900;\"> OverloadedStrings #-}\n</span><span style=\"color:#cb4b16;\">import qualified </span><span style=\"color:#859900;\">Text.XML </span><span style=\"color:#cb4b16;\">as </span><span style=\"color:#859900;\">X\n</span><span style=\"color:#cb4b16;\">import </span><span style=\"color:#859900;\">Text.XML.Cursor\n\n</span><span style=\"color:#b58900;\">main </span><span style=\"color:#859900;\">:: </span><span style=\"color:#268bd2;\">IO </span><span style=\"color:#859900;\">()\n</span><span style=\"color:#657b83;\">main </span><span style=\"color:#859900;\">= do</span><span style=\"color:#657b83;\">\n doc </span><span style=\"color:#859900;\"><- </span><span style=\"color:#cb4b16;\">X</span><span style=\"color:#859900;\">.</span><span style=\"color:#657b83;\">readFile </span><span style=\"color:#cb4b16;\">X</span><span style=\"color:#859900;\">.</span><span style=\"color:#657b83;\">def </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">input.xml</span><span style=\"color:#839496;\">"\n </span><span style=\"color:#859900;\">let</span><span style=\"color:#657b83;\"> cursor </span><span style=\"color:#859900;\">=</span><span style=\"color:#657b83;\"> fromDocument doc\n print </span><span style=\"color:#859900;\">$</span><span style=\"color:#657b83;\"> cursor </span><span style=\"color:#859900;\">$//</span><span style=\"color:#657b83;\"> 
element </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">ul</span><span style=\"color:#839496;\">" </span><span style=\"color:#859900;\">>=></span><span style=\"color:#657b83;\"> precedingSibling </span><span style=\"color:#859900;\">>=></span><span style=\"color:#657b83;\"> element </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">p</span><span style=\"color:#839496;\">" </span><span style=\"color:#859900;\">>=></span><span style=\"color:#657b83;\"> attribute </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">id</span><span style=\"color:#839496;\">"\n</span></code></pre>\n<p>You can test this out yourself with this sample input document:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#93a1a1;\"><</span><span style=\"color:#268bd2;\">foo</span><span style=\"color:#93a1a1;\">>\n <</span><span style=\"color:#268bd2;\">bar</span><span style=\"color:#93a1a1;\">>\n <</span><span style=\"color:#268bd2;\">baz</span><span style=\"color:#93a1a1;\">>\n <</span><span style=\"color:#268bd2;\">p </span><span style=\"color:#b58900;\">id</span><span style=\"color:#657b83;\">=</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">hello</span><span style=\"color:#839496;\">"</span><span style=\"color:#93a1a1;\">></span><span style=\"color:#657b83;\">Something</span><span style=\"color:#93a1a1;\"></</span><span style=\"color:#268bd2;\">p</span><span style=\"color:#93a1a1;\">>\n <</span><span style=\"color:#268bd2;\">ul</span><span style=\"color:#93a1a1;\">>\n <</span><span style=\"color:#268bd2;\">li</span><span style=\"color:#93a1a1;\">></span><span style=\"color:#657b83;\">Bye!</span><span style=\"color:#93a1a1;\"></</span><span style=\"color:#268bd2;\">li</span><span style=\"color:#93a1a1;\">>\n </</span><span style=\"color:#268bd2;\">ul</span><span style=\"color:#93a1a1;\">>\n </</span><span style=\"color:#268bd2;\">baz</span><span 
style=\"color:#93a1a1;\">>\n </</span><span style=\"color:#268bd2;\">bar</span><span style=\"color:#93a1a1;\">>\n</</span><span style=\"color:#268bd2;\">foo</span><span style=\"color:#93a1a1;\">>\n</span></code></pre><h2 id=\"should-i-tie-the-knot\">Should I tie the knot?</h2>\n<p><em>Insert bad marriage joke here</em></p>\n<p>Like most techniques in programming in general, and Haskell in particular, it can be tempting to go off and look for a use case to throw this technique at. The use cases definitely exist. I think <code>xml-conduit</code> is one of them. But let me point out that it's the <em>only</em> example I can think of in my career as a Haskeller where tying the knot was a great solution to the problem. There are similar cases out there that I'd include too (such as JSON document traversal).</p>\n<p>Is it worth learning the technique? Yeah, definitely. It's a mind-expanding move. It helps you internalize concepts of laziness just a bit better. It's really fun and mind-bending. But don't rush off to rewrite your code to use a relatively niche technique.</p>\n<p>If anyone's wondering, this blog post came out of a question that popped up during a Haskell training course. If you'd like to come learn some Haskell and dive into weird topics like this, come find out more about <a href=\"https://www.fpcomplete.com/training/\">FP Complete's training programs</a>. We're gearing up for some intermediate Haskell and Rust courses soon, so add your name to the list if you want to get more information.</p>\n",
"permalink": "https://www.fpcomplete.com/blog/tying-the-knot-haskell/",
"slug": "tying-the-knot-haskell",
"ancestors": [
"_index.md",
"blog/_index.md"
],
"title": "Tying the Knot in Haskell",
"description": "An overview of a somewhat obscure technique in Haskell code, when you can use it, and its limitations.",
"updated": null,
"date": "2021-05-25",
"year": 2021,
"month": 5,
"day": 25,
"taxonomies": {
"categories": [
"functional programming"
],
"tags": [
"haskell"
]
},
"extra": {
"author": "Michael Snoyman",
"author_avatar": "/images/leaders/michael-snoyman.png",
"blogimage": "/images/blog-listing/functional.png",
"image": "images/blog/tying-the-knot-haskell.png"
},
"path": "blog/tying-the-knot-haskell/",
"components": [
"blog",
"tying-the-knot-haskell"
],
"summary": null,
"toc": [
{
"level": 2,
"id": "doubly-linked-lists",
"permalink": "https://www.fpcomplete.com/blog/tying-the-knot-haskell/#doubly-linked-lists",
"title": "Doubly linked lists",
"children": []
},
{
"level": 2,
"id": "riih-rewrite-it-in-haskell",
"permalink": "https://www.fpcomplete.com/blog/tying-the-knot-haskell/#riih-rewrite-it-in-haskell",
"title": "RIIH (Rewrite it in Haskell)",
"children": []
},
{
"level": 2,
"id": "defining-our-data",
"permalink": "https://www.fpcomplete.com/blog/tying-the-knot-haskell/#defining-our-data",
"title": "Defining our data",
"children": []
},
{
"level": 2,
"id": "non-mutable-api",
"permalink": "https://www.fpcomplete.com/blog/tying-the-knot-haskell/#non-mutable-api",
"title": "Non-mutable API",
"children": []
},
{
"level": 2,
"id": "two-element-list",
"permalink": "https://www.fpcomplete.com/blog/tying-the-knot-haskell/#two-element-list",
"title": "Two-element list",
"children": []
},
{
"level": 2,
"id": "expanding-beyond-two",
"permalink": "https://www.fpcomplete.com/blog/tying-the-knot-haskell/#expanding-beyond-two",
"title": "Expanding beyond two",
"children": []
},
{
"level": 2,
"id": "limitation",
"permalink": "https://www.fpcomplete.com/blog/tying-the-knot-haskell/#limitation",
"title": "Limitation",
"children": []
},
{
"level": 2,
"id": "why-tie-the-knot",
"permalink": "https://www.fpcomplete.com/blog/tying-the-knot-haskell/#why-tie-the-knot",
"title": "Why tie the knot?",
"children": []
},
{
"level": 2,
"id": "should-i-tie-the-knot",
"permalink": "https://www.fpcomplete.com/blog/tying-the-knot-haskell/#should-i-tie-the-knot",
"title": "Should I tie the knot?",
"children": []
}
],
"word_count": 2453,
"reading_time": 13,
"assets": [],
"draft": false,
"lang": "en",
"lighter": null,
"heavier": null,
"earlier": null,
"later": null,
"translations": []
},
{
"relative_path": "blog/pains-path-parsing.md",
"content": "<p>I've spent a considerable amount of coding time getting into the weeds of path parsing and generation in web applications. First with <a href=\"https://www.yesodweb.com/\">Yesod in Haskell</a>, and more recently with a side project for <a href=\"https://github.com/snoyberg/routetype-rs\">routetypes in Rust</a>. (Side note: I'll likely do some blogging and/or videos about that project in the future, stay tuned.) My recent work reminded me of a bunch of the pain points involved here. And as so often happens, I was complaining to my wife about these pain points, and decided to write a blog post about it.</p>\n<p>First off, there are plenty of pain points I'm not going to address. For example, the insane world of percent encoding, and the different rules for what part of the URL you're in, is a constant source of misery and mistakes. Little things like required leading forward slashes, or whether query string parameters should differentiate between "no value provided" (e.g. <code>?foo</code>) versus "empty value provided" (e.g. <code>?foo=</code>). But I'll restrict myself to just one aspect: <strong>roundtripping path segments and rendered paths</strong>.</p>\n<h2 id=\"what-s-a-path\">What's a path?</h2>\n<p>Let's take this blog post's URL: <code>https://www.fpcomplete.com/blog/pains-path-parsing/</code>. We can break it up into four logical pieces:</p>\n<ul>\n<li><code>https</code> is the <em>scheme</em></li>\n<li><code>://</code> is a required part of the URL syntax</li>\n<li><code>www.fpcomplete.com</code> is the <em>authority</em>. You may be wondering: isn't it just the domain name? Well, yes. 
But the authority may contain additional information too, like port number, username, password</li>\n<li><code>/blog/pains-path-parsing/</code> is the path, including the leading and trailing forward slashes</li>\n</ul>\n<p>This URL doesn't include them, but URLs may also include query strings, like <code>?source=rss</code>, and fragments, like <code>#what-s-a-path</code>. But we just care about that <code>path</code> component.</p>\n<p>The first way to think of a path is as a string. And by string, I mean a sequence of characters. And by sequence of characters, I really mean Unicode code points. (See how ridiculously pedantic I'm getting? Yeah, that's important.) But that's not true at all. To demonstrate, here's some Rust code that uses Hebrew letters in the path:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#268bd2;\">fn </span><span style=\"color:#b58900;\">main</span><span style=\"color:#657b83;\">() {\n </span><span style=\"color:#268bd2;\">let</span><span style=\"color:#657b83;\"> uri = http::Uri::builder().</span><span style=\"color:#859900;\">path_and_query</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">/hello/מיכאל/</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">).</span><span style=\"color:#859900;\">build</span><span style=\"color:#657b83;\">();\n </span><span style=\"color:#859900;\">println!</span><span style=\"color:#657b83;\">(</span><span style=\"color:#839496;\">"</span><span style=\"color:#cb4b16;\">{:?}</span><span style=\"color:#839496;\">"</span><span style=\"color:#657b83;\">, uri);\n}\n</span></code></pre>\n<p>And while that looks nice and simple, it fails spectacularly with the error message:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">Err(http::Error(InvalidUri(InvalidUriChar)))\n</span></code></pre>\n<p>In reality, according to <a 
href=\"https://tools.ietf.org/html/rfc3986#section-2\">the RFC</a>, paths are made up of a limited set of ASCII characters, represented as octets (raw bytes). And we somehow have to use percent encoding to represent other characters.</p>\n<p>But before we can really talk about encoding and representing, we have to ask another orthogonal question.</p>\n<h2 id=\"what-do-paths-represent\">What do paths represent?</h2>\n<p>While a path is technically a sequence drawn from a limited set of ASCII octets, that's not how our applications treat paths. Instead, we <em>want</em> to be able to talk about the full range of Unicode code points. But it's more than just that. We want to be able to break that sequence of code points into <em>groupings</em>, which we typically call <em>segments</em>. The raw path <code>/hello/world</code> can be thought of as the segments <code>["hello", "world"]</code>. I would call this <em>parsing</em> the path. And, in reverse, we can <em>render</em> those segments back into the original raw path.</p>\n<p>With these kinds of parse/render pairs, it's always nice to have complete roundtripping abilities. In other words, <code>parse(render(x)) == x</code> and <code>render(parse(x)) == x</code>. Generally, these rules fail for a variety of reasons, such as:</p>\n<ol>\n<li>Multiple valid representations. For example, with the percent encoding we'll mention below, <code>%2a</code> and <code>%2A</code> mean the same thing.</li>\n<li>Often, unimportant whitespace details get lost during parsing. 
This applies to formats like JSON, where <code>[true, false]</code> and <code>[ true, false ]</code> have the same meaning.</li>\n<li>Parsing can fail, so that it's invalid to call <code>render</code> on <code>parse(x)</code>.</li>\n</ol>\n<p>Because of this, we often end up reducing our goals to something like: for all <code>x</code>, <code>parse(render(x))</code> is successful, and produces output identical to <code>x</code>.</p>\n<p>In path parsing, we definitely have problem (1) above (multiple valid representations). But by using this simplified goal, we no longer worry about that problem. Paths in URLs also don't have unimportant whitespace details (every octet has meaning), so (2) isn't a problem to be concerned with. Even if it were, our <code>parse(render(x))</code> step would end up "fixing" it.</p>\n<p>The final point is interesting, and is going to be crucial to our complete solution. What exactly does it mean for path parsing to fail? I can think of two ideas in basic path parsing:</p>\n<ul>\n<li>It includes an octet outside of the allowed range</li>\n<li>It includes a percent encoding which is invalid, e.g. <code>%@@</code></li>\n</ul>\n<p>Let's assume for the rest of this post, however, that those have been dealt with at a previous step, and we know for a fact that those error conditions will not occur. Are there any other ways for parsing to fail? In a basic sense: no. With more sophisticated parsing: absolutely.</p>\n<h2 id=\"basic-rendering\">Basic rendering</h2>\n<p>The basic rendering steps are fairly straightforward:</p>\n<ul>\n<li>Perform percent encoding on each segment</li>\n<li>Interpolate the segments with a slash separator</li>\n<li>Prepend a slash to the entire string</li>\n</ul>\n<p>To allow roundtripping, we need to ensure that each <em>input</em> to the <code>render</code> function generates a unique output. 
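These three steps can be sketched in plain Rust. This is a hand-rolled illustration, not code from routetype-rs, and the `percent_encode` helper is a deliberately simplified one that treats only the RFC 3986 "unreserved" characters as safe (real libraries use finer-grained rules per URL component):

```rust
// Step 1: percent encode each segment, byte by byte.
fn percent_encode(segment: &str) -> String {
    let mut out = String::new();
    for byte in segment.bytes() {
        match byte {
            // Unreserved characters (RFC 3986, section 2.3) pass through untouched.
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'.' | b'_' | b'~' => {
                out.push(byte as char)
            }
            // Every other octet becomes a percent-encoded pair.
            _ => out.push_str(&format!("%{:02X}", byte)),
        }
    }
    out
}

// Steps 2 and 3: interpolate the encoded segments with '/', then prepend '/'.
fn render(segments: &[&str]) -> String {
    let encoded: Vec<String> = segments.iter().map(|s| percent_encode(s)).collect();
    format!("/{}", encoded.join("/"))
}

fn main() {
    println!("{}", render(&["hello", "world"])); // prints /hello/world
    // The Hebrew segment from earlier is encoded as UTF-8 octets:
    println!("{}", render(&["hello", "מיכאל"])); // prints /hello/%D7%9E%D7%99%D7%9B%D7%90%D7%9C
}
```

Note that `render` never fails: any sequence of segments produces some path. Whether every rendered path parses back to the original segments is exactly the roundtripping question.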
Unfortunately, with these basic rendering steps, we immediately run into an error:</p>\n<pre style=\"background-color:#fdf6e3;\">\n<code><span style=\"color:#657b83;\">render segs </span><span style=\"color:#859900;\">= </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">/</span><span style=\"color:#839496;\">" </span><span style=\"color:#859900;\">++</span><span style=\"color:#657b83;\"> interpolate </span><span style=\"color:#839496;\">'</span><span style=\"color:#2aa198;\">/</span><span style=\"color:#839496;\">'</span><span style=\"color:#657b83;\"> (map percentEncode segs)\n\nrender </span><span style=\"color:#b58900;\">[]\n </span><span style=\"color:#859900;\">= </span><span style=\"color:#839496;\">"</span><span style=\"color:#2aa198;\">/</span><span style=\"color:#839496;\">" </span><span style=\"color:#859900;\">++</span><span style=\"color:#657b83;\"> interpolate </span><span style=\"color:#839496;\">'</span><span style=\"color:#2aa198;\">/</span><span style=\"color:#839496;\">'</span><span style=\"color:#657b83;\"> (map percentEncode </span><span style=\"color:#b58900;\">[]</span><span style=\"color:#657b83;\">)\n </span><span style=\"color:#859900;\">= </span&g