Stop Using Docker and Local Kubernetes for Dev Environments! (feat. Okteto)
Estimated read time: 1:20
Summary
In this video, the DevOps Toolkit explores why relying on Docker and local Kubernetes for development environments may not be the best practice. It discusses the challenges of replicating production environments locally, the inefficiencies of using Docker and Kubernetes in these scenarios, and introduces Okteto as a tool to achieve production-like environments remotely. With real-world examples, the video elaborates on how developers can keep their code synced with production settings efficiently while avoiding the cost and complexity of traditional setups.
Highlights
Docker and local Kubernetes might not be the best for setting up dev environments due to complexity and maintenance issues.
Developing remotely with Okteto can mirror your production setup and ease the coding process.
The video emphasizes the importance of having a dev environment close to production to catch issues early on.
Challenges in integrating databases and other dependencies in remote environments are acknowledged, but solutions exist to overcome these.
Key Takeaways
Don't cage your development with local environments. Explore remote development using Okteto for production-like setups effortlessly.
Local Kubernetes clusters and Docker setups can be too complex and costly to accurately simulate production environments locally.
Okteto enables a seamless sync between your local code changes and remote dev environments, so you can 'set it and forget it.'
Remote environments provide a more realistic testing ground for your apps, saving you from nasty surprises in production.
While there's a small cost increase using remote setups like Okteto, it's often outweighed by the benefits of reliability and consistency in testing.
Overview
Developing locally using Docker or Kubernetes can often lead developers down a path of more headaches than solutions, especially when trying to replicate complex production environments. The video deconstructs why these popular tools might not be the best fit for local development due to their complexity and time requirements.
Okteto offers a fresh perspective by allowing code to be developed locally while running it in a production-like remote environment. This hybrid approach removes the hassle and resource strain of trying to set up and maintain identical environments locally, instead syncing code changes directly to the cloud.
However, the video also highlights considerations such as the cost implications and dependencies challenges while promoting remote setups as a viable and often preferable alternative for developers seeking robust dev environments. By using Okteto, developers can balance ease of use with maintaining consistency between development and production.
Chapters
00:00 - 01:30: Introduction and Problem Statement The chapter introduces the common scenario faced by developers who are setting up a development environment. It outlines the challenge of configuring such environments, whether for user-facing applications, infrastructure, or other projects. The chapter suggests the use of tools like Docker or local Kubernetes clusters (e.g., kind) for local development, emphasizing that this is a typical approach for many developers.
03:00 - 05:00: The Two Types of Developers In this chapter titled 'The Two Types of Developers', the discussion centers around common misconceptions regarding development environments. Specifically, it challenges the use of Docker for setting up development environments, highlighting that it is not the intended purpose of Docker. The chapter also touches on alternatives like Kind and local Kubernetes clusters, noting their complexities and the impracticality for local reproduction of production environments. The takeaway is a caution against misusing these tools and acknowledging the challenges faced by developers in replicating production environments locally.
07:00 - 09:00: Creating Production-Like Environments This chapter discusses the challenges of creating production-like environments for development. It highlights the issues with using Docker and local Kubernetes clusters, where the development environment may differ significantly from the production environment. The chapter emphasizes the importance of having development environments that closely mimic production and suggests that there are straightforward methods available to achieve this.
11:00 - 13:00: Demo Setup Explanation In the 'Demo Setup Explanation' chapter, the speaker introduces a technique for local code development that runs in a production-like environment. This method allows developers to observe how their application will perform in production conditions. The aim is to advocate for a change in development practices, targeting developers who are open to updating their methodologies to improve application development and testing processes.
15:00 - 30:00: Demo Execution In the chapter titled 'Demo Execution', the focus is on introducing the sponsor of the video, PostHog. PostHog is described as an all-in-one suite of product and data tools designed to help founders and engineers understand user interactions with their products. The capabilities of PostHog include product analytics, web analytics, session replay, A/B testing, surveys, feature flags, and error tracking, all aimed at tracking the success of features and the journey of visitors.
32:00 - 39:00: Challenges and Considerations This chapter discusses the challenges and considerations involved in integrating various data sources, such as Stripe or HubSpot, with product data. It highlights the ease of setup using snippets or SDKs, demonstrating the simplicity of embedding LLM observability tools. Additionally, it mentions the sponsorship of the video by PostHog, emphasizing the importance of these integrations in enhancing data analysis capabilities.
40:00 - 43:00: Conclusion and Future Topics The chapter discusses the mindset of developers in the industry. It describes two types of developers: those who base their careers on hope, trusting their code works without thorough checks, and those who presumably follow a more diligent and careful approach (though the latter is not detailed in the provided text). It highlights the risks of relying solely on hope, such as crashing production and losing users. Furthermore, it touches upon a blame culture where developers might deflect responsibility by blaming testers for not catching flaws in their code. The overall theme seems to reflect on the importance of responsibility and quality assurance in development work.
Stop Using Docker and Local Kubernetes for Dev Environments! (feat. Okteto) Transcription
00:00 - 00:30 [Music] You're developing something. Everybody is. No matter whether that something is a user-facing application, infrastructure, or anything else. Now, while developing, you're trying to set up a development environment, probably locally since that's where your code is. You're likely trying to do all that using Docker or a local Kubernetes cluster like kind. If that's what you're
00:30 - 01:00 doing, I am here to tell you that you're doing it wrong. Docker is great, but not, and I repeat, not for setting up a development environment. That's not what you're supposed to use it for. On the other hand, Kind and other local Kubernetes clusters are also great, but they are too complicated and too time demanding for you to reproduce production locally. As a result, you're left without the option to reproduce
01:00 - 01:30 production locally and you resort to workarounds that result in your development environment being very very different from production in case of Docker at least or your local environment being a stripped down version of production in case of local Kubernetes clusters. Both are bad options, especially since there are just as easy or even easier ways to get production-like development environments than what you might be doing right now.
01:30 - 02:00 I'm here to show how you can write code locally while still running that code in a production-like environment so that you can see your application behave in almost the same way as it will behave in production. By the end of this video, I will convince you to change the way you develop whatever you're developing. Assuming that you're the right type of a developer. So, first we need to establish which type of a developer you
02:00 - 02:30 are. We'll take a quick break to introduce you to PostHog, the sponsor of this video. PostHog is an all-in-one suite of product and data tools focused on helping founders and engineers understand how users are using their product, the success of their features and the journeys of their visitors. With PostHog, you get product analytics, web analytics, session replay, A/B testing, surveys, feature flags, error tracking,
02:30 - 03:00 LLM observability, and so much more. You can even connect your Stripe or HubSpot data to query alongside your product data. You can sign up and get started with all the features for free right away. If you do, you will see that the setup is as simple as pasting a snippet into your site header or using SDKs for whichever language you prefer. The link is on the screen and in the description. Big thanks to PostHog for sponsoring this video. Now, let's get back to the
03:00 - 03:30 main subject. There are two types of developers. Some base their career on hope. They have faith that what they develop works. They spend time writing code and pushing it to Git. That's all they do. What's the worst thing that can happen? It might not work, right? It might crash production. Users might go somewhere else. If there is something wrong with your code, you can always blame testers for not finding out how shitty the code you just wrote is. It is
03:30 - 04:00 certainly not your fault. Your job is to write code, not to assure that it works, right? The second group of developers does more than just write code. That's the good group. Those are the people that are writing and running tests. Those are the people that are doing their best to ensure that the code they're writing meets certain standards. They're ensuring that the changes they're making are meeting the quality expectations. They don't wait for others to find out how bad their code is. So,
04:00 - 04:30 which type of a developer are you? If you're in the first group, go away. There's nothing for you here. All you need is an IDE and the browser so that you can copy and paste code from, let's say, Stack Overflow. So, go away. By the way, you might be on the opposite side of the equation thinking that all this is not your concern. If that's the case, bear with me because as you will see later, you might be just the person that can fix all this. You might be the
04:30 - 05:00 person that converts bad developers into good ones. So stay. So you dear developer or software engineer or ops or whatever you are, are you still here? If you're still watching this, I can only assume that you belong to or at least want to belong to the second group. You're the developer that wants to ensure that your code will work in production and meet the expectations. And I have a question for you. How do we ensure that the code we are writing will work correctly when it reaches
05:00 - 05:30 production? One answer to that question can be testing in production-like environments. We write code, we push it to Git repo, and we execute workflows that deploy that code to a production-like environment and run all sorts of tests automated or manual that validate it. That's a wrong answer. It's too late to wait until everything is finished to discover whether it works. If it doesn't, we need to go back and fix issues. That will happen sometimes, and we certainly need to test releases
05:30 - 06:00 before they reach production. Still, we should strive towards detecting issues the moment they're created. They're created on our laptops. The moment we write the line of code that does not work as we expect is the moment we created a problem. And it's only logical that we detect it right away and fix it a moment later. Now, this whole pep talk might lead you to think that I'm trying to convince you to adopt test-driven development or extreme programming or behavior-driven development or
06:00 - 06:30 something similar. That's not the case. At least not today. Right now, the only point I'm trying to make is that we should be running our application while we are developing it. That we should test it continuously. Write a bit of code, deploy the app, test it, and if there are no issues, repeat. But that's a problem. How do we run the application while developing it? Do we simply execute go run or whatever is the command to run the application locally? Do we build it into a container image
06:30 - 07:00 and run it with Docker? Neither of those are good enough because neither of those methods are how we run it in production. As a result, the way the application behaves locally might be very different from the way it will run in production. What works locally might not work in production or if it works it might behave very differently. You see applications today are much more than just code alone. There are servers with operating systems other than what we have on our laptops.
07:00 - 07:30 There is networking. There is storage. There are databases and other applications we connect to and so on and so forth. Production systems are complex and that complexity can greatly influence the behavior of applications. What we need while developing is to run our application in the same or a similar way as it is running in production. There's a balance though. It does not have to be exactly the same as production. That might be too complicated, inefficient and costly.
07:30 - 08:00 Still, not being able or not wanting our development environment to be exactly the same as production does not mean that it should be completely different. So let's say that we want it to be production-like. How do we make development environments similar to production? Let's take a look at a simple example. Let's say that we have a backend application used by a front-end app and connected to a database. We would need all three of those running. For the demo, let's say that those two apps are
08:00 - 08:30 running in Kubernetes while we are using a PostgreSQL database running as a service in a hyperscaler like AWS, Google Cloud, or Azure. The credentials for the database are in a secret store and retrieved with External Secrets Operator. Database schema might be managed by, let's say, Atlas Operator. Both the application and the database might be defined and managed through, let's say, Crossplane compositions because that's what I like. They might be accessible through Contour Ingresses and internal communication might
08:30 - 09:00 be going through a service mesh. There might be many other components that result in the magical experience our users are getting. Now you might be a developer who does not care about any of those things and that's fair. You want to write code and you want to see that code running and test it and I understand that. But I also assume that you want it running in a similar way it is running in production so that you can confirm that it behaves correctly. You want to
09:00 - 09:30 develop something and to test it under similar conditions as if it's running in production. If that's what you want, that is a reasonable request. So here's the question. Can you set up all that locally? You probably can't. And even if you can, reproducing production locally might be too much effort. It takes days, weeks, or even months to set it all up in production. So, it would be silly to expect we do all that every morning before we start working on whatever
09:30 - 10:00 we're supposed to be working. It would be unrealistic to expect everyone to start their workday by spinning up a kind cluster, installing Crossplane and Contour and External Secrets and everything else, deploying all the dependent applications and databases and whatever else is needed, just to run the application under development. That's why developers love Docker. It's easy. There's almost nothing to do and almost nothing to learn. Still, if applications are not managed by Docker in production,
10:00 - 10:30 our local environments will be very different than what is in production. That might not be such a bad thing if there is no other option, but there is. So, here it goes. We can develop remotely. Actually, it's not that we can, but that we probably should. Remote development solves many of the problems we might have during development. So that's what we'll explore today. We'll see how we can write code locally but to run our application based on that code
10:30 - 11:00 remotely and have it connected to the rest of the system in the same way it is connected in production. We'll see how we can have a development environment that is very similar to production and at the same time very very very easy to set up. By the end of this video, you will be running your application in a development environment in almost the same way as if it's running in production while still being able to develop it as if you're working locally.
11:00 - 11:30 I have a repo with the code of an application on my laptop. It's a Go application, but that does not matter. Whatever I'm going to show works with any type of application. The application is simple. It is a backend application connected to a database. There is also a front end that uses that back end. I'm not working on that one. Just to be clear, someone else is. Still, both need to work together. That front end depends on my back end just as the back end depends on a PostgreSQL database. In the real world
11:30 - 12:00 situation, the system would be much more complex. There would be many other applications. There might be a pub/sub queue for events. There might be dozens of other backend applications that my app talks to. There might be so many other things. Real systems are complex. Nevertheless, a three-tier application like the one I'm using today should be more than enough to demonstrate whatever I'm about to demonstrate. You just need to use a bit of imagination and multiply the issues I might be facing to see how
12:00 - 12:30 you are in much deeper trouble than I am. Here's the front-end app. It's a simple one. It is a front end running in the staging environment. It sends requests to the back end which in turn reads and writes data to the database and responds back to the front end. So far, that might be the simplest app you ever saw. Let's see what we have in that cluster. There is Atlas Operator used to manage the schema of the database, Contour Ingress which handles
12:30 - 13:00 incoming external traffic, Crossplane compositions which provide abstractions for the applications and the databases, and External Secrets which pushes and pulls secrets to and from a secret store, and probably a few other things. The database itself is running in AWS and is managed through Crossplane compositions. As a side note, the link to the post with the complete manuscript as well as the instructions on how to reproduce everything I'm doing today is in the description of the video. I'm using AWS
13:00 - 13:30 today, but you will find instructions on how to reproduce the same scenario in Google Cloud as well. Now, write a comment if you would like me to add other providers to the mix. As I said before, this is a simple demo. Your production will be more complex. Also, it does not matter whether you use the same tools as I do. You might be using CNPG to manage databases inside the cluster. You might prefer the Vault operator for secrets. You might prefer NGINX Ingress and you might prefer something
13:30 - 14:00 other than Atlas to manage database schemas. You might dislike Crossplane and prefer Terraform for managing infrastructure and Helm to define applications. That's okay. Today, it does not matter whether you prefer the same tools as I do. What matters is that your production is complex and that your staging or pre-production environment is just as complex, as long as you want parity between those. Now let's take a quick look at how I defined the
14:00 - 14:30 backend application. It is a custom resource based on a CRD that acts as an abstraction that removes all the complexity and focuses only on what matters to developers. The parameters section should be mostly self-explanatory. We are defining the image, the tag, and the port of the application. There are also routes that will be managed by Contour Ingress. Since the database is defined elsewhere and the resource that made it might be in a different namespace or even in a different cluster, we are instructing
14:30 - 15:00 the application to pull the secret with the credentials from the AWS secret store. The assumption is that whichever process is managing the database already pushed a secret with the creds to that secret store. Right? That's the assumption I'm making here. So that's the application running in the staging environment which contains quite a few other things or dependencies that are needed for that application to run. That's the application we'll be developing today. Here's the question.
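To make the description concrete, a claim like the one just described might look roughly as follows. This is a sketch only: the API group, kind, and exact field names are assumptions based on the narration (a Crossplane-style claim with image, tag, port, host, and database-secret parameters), not the definitions from the actual repo.

```yaml
# Hypothetical Crossplane-style claim for the backend app.
# Group, kind, and field names are illustrative, not the real CRD.
apiVersion: example.devopstoolkit.live/v1alpha1
kind: AppClaim
metadata:
  name: silly-demo
spec:
  id: silly-demo                            # ID used for cluster-scoped resources
  parameters:
    image: ghcr.io/example/silly-demo       # illustrative image reference
    tag: "0.1.0"
    port: 8080
    host: silly-demo.staging.example.com    # route managed by Contour Ingress
    db:
      secret: silly-demo-db                 # creds pulled from AWS Secrets Manager via ESO
```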
15:00 - 15:30 Can you set it all up locally? I don't think you can. And even if you disagree with that statement, even if you think that you can, you should ask yourself whether doing all that needs to be done is worth the time and the effort. Even if the answer to that one is also yes, you should ask yourself whether you can and should maintain all that locally and ensure that whatever changes are made to that staging
15:30 - 16:00 environment are replicated locally. Can you really really really ensure the parity between what you have on your laptop and what is in the staging environment or pre-production or production or whichever environment looks like production. Now if I convinced you to answer with no to at least one of those questions, the next thing you should ask yourself is whether there is an alternative. The short answer is that there is. As a matter of fact, there are many and most of them
16:00 - 16:30 will lead you to the inevitable conclusion that you should develop remotely or that parts of what you're doing should be remote. Simply put, we should use what we already have in that cluster. So, let's do just that. The easiest way to solve the problem is to deploy the application we're developing to a namespace inside the staging or the development or whatever you call the cluster that already has everything set up. Here's the modified
16:30 - 17:00 version of the manifest we saw earlier. Since that manifest might be used by others, I might need to make a few changes to ensure that the application does not clash with those deployed by my teammates. Specifically, I might need to change the ID to ensure that the cluster-scoped resources are unique, and it needs a unique host so that it is accessible through a URL that is different from the one used by others. Everything else can stay the same.
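The two changes just mentioned amount to giving the ID and the host unique, per-developer values. In a sketch (field names assumed from the narration, not taken from the real manifest):

```yaml
spec:
  id: silly-demo-eva                          # unique ID so cluster-scoped resources do not clash
  parameters:
    host: silly-demo-eva.staging.example.com  # unique host, hence a separate URL
```

Everything else in the manifest stays untouched, which is what makes this approach cheap: one apply into a personal namespace, and the shared cluster does the rest.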
17:00 - 17:30 Another important note is that today I will be connecting the application I'm developing to the shared database. So I will pull the silly-demo DB credentials from the secret store. Those credentials point to the staging database. And I don't really care how those credentials are made. They're made by a different process, right? If my interaction with the database would be in any form destructive, or would interrupt others working with that shared database, I would need to spin up my own database. Since all the tools are
17:30 - 18:00 already running in that cluster, such an operation should be trivial since all I would have to do is copy the existing database manifest, make a few modifications, and apply it to my namespace. Now let's make the changes I mentioned earlier by changing the spec ID and the spec parameters host values. All that's left is to apply that manifest and the application is now running in my own namespace. As a side
18:00 - 18:30 note, you can do the same through Helm, Kustomize, or whichever way you prefer to define and deploy applications. You don't have to use Crossplane claims as I do. I'm not trying to convince you to use Crossplane today. I'm using it only because that's the way I prefer to define applications, infrastructure, and everything else. If you're interested in Crossplane but not yet using it, feel free to check out the Crossplane tutorial. By the way, it's in the description. The link is there. And if
18:30 - 19:00 you do check it out, or you do want to check it out, please wait until you finish watching this video, since this video does not require any experience with Crossplane. Now, let's get back to the subject. Let's see what we got. There's the result. The custom resource created a bunch of Kubernetes resources needed to run an application. The interesting part is the external secret which was configured to pull database credentials from the Secrets Manager and store them as a Kubernetes secret. My application is
19:00 - 19:30 now running in my namespace. Instead of using an app deployed with the intention to be used by everyone, this one is mine and only mine. I can now send requests to my app. Since the app is connected to a shared database, I can, for example, send a few POST requests that will insert some data into the database. There are a couple of problems though. My application is using a shared database. Now, that is not always a bad thing. However, I need to be careful
19:30 - 20:00 not to mess up the data in that database since others might be using the same. More importantly, I need to be extra careful when making changes to the schema. A better solution would be for me to create my own database, apply a schema and copy the data I might need. That however might be costly both from the time investment and resource utilization perspectives. The alternative could be to use ephemeral databases like for example Neon. We won't go into it in this video. I
20:00 - 20:30 already made one and I don't want to repeat myself. Check it out. The link is in the description as well. Now I will ignore the database because we have a much bigger problem. How do we keep the app running in my personal namespace up to date with the code I'm writing on my laptop? I certainly do not want to build an image, push it to the registry, and update and apply the manifest every time I make a change to the code. That process is tedious and
20:30 - 21:00 more importantly, that process can take minutes. I want something more agile. I want every single change to my code to be reflected in that application. Even if I make a change to a single line of code, I want that change to be deployed. It should not matter whether I spend a few seconds or minutes on writing code. The moment I save changes, I want them to be included in the application running in that cluster. Now, I cannot
21:00 - 21:30 always get what I want. There are days when my desires are unrealistic. Today is not one of those days. We can do that. Let's do a personality change. I'm not Victor anymore. Now I am Eva. Unlike Victor, she knows what she's doing. So let's see how would she approach the problem. Now she clones the repo Victor was working on and takes a look at the
21:30 - 22:00 application manifest. She makes similar changes to those Victor made by changing the ID and the host. Finally, she applies the manifest into her own personal namespace. So far, Eva hasn't done anything differently from Victor, her alter ego. Now, this is the part where she faces the problem. If she sends a request to her app running in her namespace in the shared cluster, she gets the old output: This is a silly
22:00 - 22:30 demo. She needs that application to reflect the changes she made to the code. And unlike Victor, she knows what to do. She knows that there is an okteto.yaml file in the repo. That is the Okteto manifest that specifies that the project needs the Go image and that air go run should be executed. That is the same command one would execute to run that application locally, with the addition of air. That is a handy utility that will
22:30 - 23:00 rerun the go run command every time the source code changes. As a result, every time we save changes to the code, the application will be reloaded. The important note is that the command is not special to Okteto. We would run it the same way if we were running that application locally. Further on, there is the sync instruction that will ensure that any file in the current local directory is synchronized into the /usr/src/app directory inside the container. Now, I will skip the explanation of the
23:00 - 23:30 rest of that manifest since it's probably self-explanatory, and if it's not, you can check the documentation. Now, to be clear, we could have used Okteto to get rid of the previous steps, but we would need to get the commercial version to do that. Today we are sticking with Okteto open source. So you can think of the previous steps as a workaround to avoid opening our wallets. That is not to say that you should not purchase an Okteto license but rather that today we are focusing only on open source. Now let's see it in
23:30 - 24:00 action. First, we'll tell Okteto which context to use so that it knows which cluster to talk to. Next, we'll execute okteto up and ensure that whatever it is doing is done in her namespace. The first time we run Okteto, it might take a while until the dev environment is set up. The main culprit is the Go image, which is on the large side of the spectrum. We let it run and open a second terminal session. This project
24:00 - 24:30 uses Devbox to deploy all the tools we might need for the project when running locally. So, let's start the shell. By the way, if you're not familiar with Devbox, please watch that video over there. The link is in the description. Watch it. We'll also source the environment variables used in the project. Next, Eva would start working on the code. She's eager to change the output "This is a silly demo" in the root.go file. So, that's what she does. And now comes the moment of truth. Let's send the same
24:30 - 25:00 request to the app again. This time the response of the request sent to the application running inside the cluster reflects the changes done locally. That's the magic behind Okteto and other similar tools. We'll take a closer look at how it works and what it does. But before we do that, let's make one more change to the source code and also send another request to the app. Boom. We got the response that reflects the current version of the code located on a
25:00 - 25:30 laptop even though the application is running in a remote cluster. Now let's stop Okteto before we take a look at what happened. Here are all the resources in Eva's namespace. We can see that there is a new deployment, silly-demo-okteto. When we executed okteto up quite a few things happened, but among all those, three are very, very important. First, Okteto made a copy of the deployment of the application. It modified it, and it scaled the
25:30 - 26:00 original one down to zero replicas. So from that moment on, the original app was replaced with a variation that has two important modifications. The service is now sending requests to the pods of the new deployment and ingress through that service enables external access. Whatever else we might have had is still there. Functionally, everything is the same except that the new deployment replaced the old one and that there were a few surgical changes made
26:00 - 26:30 to the copy of the original deployment. First, in our case, it changed the image to one based on Go. The original image is a slim one that can only run the binary of the app. It does not have Go, or air, or any of the other tools we might need. And that's a good thing. We want to have slim images instead of fat ones full of stuff that is not needed in production. We want production to be lean and secure. However, when developing, we need more. We need to be
26:30 - 27:00 able to build binaries, to do hot reloading, and whatever else is needed, at least when running code that is not even compiled. The original image does not even have a shell. All in all, Okteto replaced our slim production-ready image with one that has what is needed to build and run the source code. Second, it changed the command itself. While, in this case, all we need in production is to execute the pre-compiled binary, in development it executes the air command
27:00 - 27:30 that builds and runs the app from the source code whenever that source code changes. Finally, it added synchronization from the local disk into the containers in the pods. That way, whatever is on our laptop is synced into that cluster, so the source code is the same in both places. Whatever changes are done locally get into the container where our application is running almost immediately. Now,
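Those three modifications, dev image, dev command, and file sync, are exactly what an Okteto manifest declares. Here is a minimal sketch; the image tag, service name, and paths are assumptions, not the actual values from the video:

```yaml
# okteto.yaml (sketch)
dev:
  silly-demo:
    image: golang:1.23    # dev image with the Go toolchain, replacing the slim one
    command: air          # hot-reload: rebuild and rerun whenever the source changes
    sync:
      - .:/usr/src/app    # keep the local source in sync with the container
```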
27:30 - 28:00 besides those and a few other changes, everything else is the same. The service that enables in-cluster communication is still there. Ingress is still redirecting external traffic to the app. The Secret with the database credentials is still mounted. Whatever else we might be using is the same. As a result, that app is part of the system just as it would be in the staging environment. The resources of the app itself are all the same, except
28:00 - 28:30 that the binary of the app is now always compiled and run from the source code we are working on. So, it is not exactly the same as production; it is production-like. It contains only the changes needed to develop the app, while everything else is the same, and the new app still talks to the rest of the system in the staging environment as if nothing happened. Once we are finished developing, all we have to do is instruct Okteto to bring it down. Now
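The teardown described above boils down to two commands; the namespace and manifest filename below are placeholders:

```sh
# Remove the Okteto copy of the deployment and scale the original back up
okteto down --namespace dev

# Optionally delete the app itself so it does not waste resources
kubectl --namespace dev delete --filename app.yaml
```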
28:30 - 29:00 let's take another look at the resources in that namespace. As we can see, the Okteto variation of the deployment is now gone, and the original deployment is scaled back up. We are back to where we were, as if nothing happened. Once we are done, we should probably delete the application itself so that we do not waste resources for no good reason. Now, it would be even better if we defined the application to scale to zero replicas when not in use. If we did
29:00 - 29:30 that, there would be no need to delete the app once we're finished working on whatever we're working on. Nevertheless, that would be a subject for another video. Let me know in the comments if you're interested in seeing something like that. Now, wasn't that easy and efficient? Hm... we had to make a few modifications to the app manifest, apply it, and then run Okteto. I think that was easy, right? But I think that we can make it even easier. We can wrap all that up into a
29:30 - 30:00 script. Let's go back from Eva to Victor. He just saw what Eva was doing and decided to improve it even more. He wrapped all of the commands Eva executed into a Nushell script. Now, if you're not familiar with Nushell, please watch that video over there. Just don't do it now; finish this one first. Now, I will not go into details of why Nushell is a good choice, nor what's inside the script we're about to execute, mainly because I already explored Nushell and I do not want to repeat myself. Instead,
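As a rough idea of what such a wrapper could look like, here is a Nushell sketch; the command name, flags, and file names are all made up for illustration, not Victor's actual script:

```nu
# dev.nu - hypothetical wrapper around the setup commands
def "main apply dev" [
    user: string                  # who the environment is for
    --db: string = "silly-demo"   # database the app should connect to
] {
    let ns = $"dev-($user)"
    # create a dedicated namespace for the user
    kubectl create namespace $ns
    # deploy the app manifests into it
    kubectl --namespace $ns apply --filename app.yaml
    # start the Okteto dev environment pointed at that namespace
    okteto up --namespace $ns
}
```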
30:00 - 30:30 we'll just run it and see the effect. We'll apply a dev environment for Victor's user and, in this case, tell it that we want it to be connected to the silly-demo database. That's it. That's all that needed to be done to have a fully operational, remote, production-like development environment that runs the app that is always based on the latest code on Victor's laptop. Now, let's say that Eva pushed her changes to the repo and that Victor pulled them into his
30:30 - 31:00 clone of the repo. Now, if that happened, what happens when he sends a request to the app? Judging by the response, we can see that his version of the application contains the changes Eva made. Now, let's say that he's not happy with Eva leaving her mark in the application, and he removes her changes. Now, what happens when he sends a request to his version of the app? There we go. The message is "This is a silly demo" again. Both of them can work independently from each other, both have an easy setup, and both are seeing the
31:00 - 31:30 results of their work in a production-like development environment. The last part is very important, and we can prove it by, for example, sending a request to the app to retrieve data from the database. We can see from the output that this particular instance of the app not only contains the current version of the code on Victor's laptop, but also that it is connected to everything else in the cluster, including the database. Finally, we can also improve the removal of the dev environment, just as we
31:30 - 32:00 improved the creation, by wrapping it all up into scripts. Or we can execute something like "platform uninstall dev environment", say that it's for Victor's user, and that's it for the demo. Now, let's jump into pros and cons. Okteto was not the focus of this video; it was something else. First of all, I wanted to demonstrate the benefits of remote development environments and what I believe is needed to make them useful. We already saw, in that video, a different way to
32:00 - 32:30 create environments with Dev Containers and DevPod. Those are not something I recommend, because they fail to deliver environments that mimic production. Those are focused on giving us more or less the same experience when working remotely as when working locally. I need more than that. I do not want to spend time wondering why the application that worked perfectly in my development environment failed when it was deployed to production. I want my
32:30 - 33:00 development environments to be as close to production as possible while, at the same time, giving me all the benefits of working locally. I feel that the solution we just explored gives us both. It is production-like, while at the same time it ensures that we can write code locally and any changes we make on a laptop are reflected in the remote environment. It's brilliant. I love it. But I do have one complaint, only one. This solution is more expensive than
33:00 - 33:30 local development. It results in an increase of CPU and memory consumption in remote clusters. The application Victor and Eva ran was using CPU, memory, and other resources while it was running. Now, to be fair, that did not cost much, and one could even claim that the price was negligible, since we did not have to create a development environment with the full system, but only with the app we are working on. The database was
33:30 - 34:00 shared, the frontend was shared, and everything else was shared. So, there is an increase in costs, but not a big one. Everything else is positive. It was simple. It was fast. Okay, relatively fast, if you ignore the fact that I'm using a huge Go image. It is production-like, and it always contains the code we are working on locally. It is always in sync. That being said, there are a few issues we did not solve. That is not to say that they are cons
34:00 - 34:30 or negative things. The only negative thing I can think of is the cost. I see the issues I'm about to discuss as issues that would equally affect local and remote development, so they are not cons, really; they are only issues we did not yet solve in this video. Using a shared database is sometimes a good thing, but in other cases it is a very bad practice. If you performed some destructive operation, altered the schema, or did anything else that would prevent others from working with the
34:30 - 35:00 database, we would be in big trouble. That can be solved by spinning up a separate database instance, but that would only increase the cost and, at the same time, increase the setup complexity related to data replication, since, more often than not, an empty database is not what we need. We could also use ephemeral databases and data-branching solutions, like Neon, for example. But Neon specifically is a SaaS-only solution, and some organizations might have issues
35:00 - 35:30 trusting third parties with their data. It's not easy. The same can be said for pub/sub and any other similar third-party solution that we might be running in our clusters and that deals with data. Another, potentially bigger, issue is the dependencies that lead into our app. It is fairly easy to configure our app to talk to other apps. That's easy. We just need to change the addresses it points to, typically by adding a namespace suffix. But if some other app needs to talk to
35:30 - 36:00 us, that is a problem. Here's an example. Imagine that we're working on a backend application and that there is a frontend that talks to that backend. We cannot reconfigure that frontend to talk only to our backend, since that would mean that anyone else working with that frontend would start getting responses from our backend. We might not want to run the same frontend in our dedicated namespace, since that would increase the cost and complexity of the setup itself. The solution to that problem is very different from what
36:00 - 36:30 Okteto does, at least the open source version, and it is related to networking. Now, I know that when I say networking, most of the people watching this channel get bored, annoyed, and move somewhere else. Still, I feel that it would be a very interesting subject to explore. So, what do you think? Should I dive into it in one of the upcoming videos? Let me know in the comments. Thank you for watching. See you in the next one. Cheers.